What is OpenSea's policy on hate speech?

Inciting Hate Policy

Art is a powerful tool for challenging authority and bringing awareness to social issues. As a platform for artists, we want to provide people with the freedom to express themselves in a way that is thoughtful and provocative. While hurtful, distasteful, or insensitive content has, unfortunately, become common, we believe words and imagery cross the line when they become abusive, threatening, or otherwise incite hate.

Before we get into the policy and our definitions, it’s important to note that OpenSea functions as a blockchain explorer—meaning we aim to display the broadest possible view of NFT content available on the blockchain. While we don't have the ability to “delete” content from the blockchain, when we see content on OpenSea that violates our policies or could do real-world harm, we take the most aggressive action available to us—we delist the content so that it is inaccessible on our site and ban the violating account.

In situations where we believe there is no risk of real-world harm, we may allow the content to remain visible but freeze it, meaning we disable buying, selling, and transferring on OpenSea. In some cases, we may instead choose not to charge the OpenSea fee so that we are not monetizing the content.

So how do we apply these policies? Our analysis and classification has two parts. OpenSea does not allow content that subjects (1) a protected group of people to (2) a designated attack (either via real-world harm or via hateful words and imagery).

Here’s how that policy works in practice:

Protected Groups

  • Race

  • Ethnicity

  • Religion

  • National Origin

  • Nationality

  • Disability

  • Sexual Orientation

  • Gender Identity

  • Caste

To be clear: we won’t delist your collection if it’s simply ragging on a rival sports team or critiquing a country’s foreign policy. But if the content is intended to abuse, threaten, or otherwise incite hate toward a protected group, we then take a look at the second part of our analysis: whether the attack is a “designated attack.”

We define a “designated attack” as words and imagery that go beyond hurtful, distasteful, and insensitive. At OpenSea, we classify designated attacks in two ways: real-world harm, which leads to full delisting and an account ban, and hateful words and imagery, which leads to disabling buying, selling, and transferring, or to not charging the OpenSea fee.

Designated Attack: Real-World Harm

Not all mean comments and content are subject to our policies and enforcement. We understand that overenforcement can have a chilling effect on people’s desire to express themselves in ways that are constructive, even when controversial. That said, we think it’s important to describe the type of content that will lead to delisting.

Examples of designated attacks intended to cause real-world harm that lead to delisting:

  • Celebration or glorification of violence against a protected group

  • Advocating violence against a protected group

  • Denial or delegitimization of a tragedy targeting a protected group

  • Statement that a protected group doesn’t or shouldn’t exist

  • Claims of dominance/superiority or claims of subjugation/inferiority

Designated Attack: Hateful Words and Imagery

We will disable buying and selling of, or decline to charge the OpenSea fee for, NFTs and collections that contain hateful content such as:

  • Hate symbols

  • Claims that protected groups cannot be trusted or are liars

  • Stereotypes about criminality

  • Stating that the protected group elicits a strong bodily response, such as retching

  • Promotion of or glorification of hateful tropes and depictions

  • Statements of mental and moral deficiency

  • Profane terms directed at protected groups

  • Slurs directed at protected groups
