
Questions about “Web3” and “Content Moderation”

I moderated a panel at Unfinished Live a couple weeks ago. The panel was not recorded. The day’s topic was “Web3”, and the panel topic was chosen for me: Content Moderation.

Now, I really don’t like the framing of “content moderation” (or “Trust and Safety”). Oh well.

Here are the questions I led the session description with:

How might the traditional process of moderating content and behavior online look in a truly decentralized Web3 world? A shared protocol might be harder to edit than a central codebase; distributed data might be harder to change. However, new capabilities (like smart contracts) might improve aspects of moderation work. How might the theory and practice of integrity in Web3 compare to the systems we are accustomed to today?

And here are the (hopefully challenging) advanced questions I tried to ask:

  • One argument is that content moderation is really one manifestation of larger questions: how should platforms or protocols be designed? What are the terms of service, and how are they enforced? In short, these are questions of governance. Do you agree? Do you think narrow questions of enforcing terms of service can be separated from these larger questions?
  • As I see it, when it comes to writing and enforcing terms of service, there are two proposed alternatives to platform *dictatorship*: democratization and decentralization. On the surface, the two seem opposed: a world where “the users vote to ban Nazi content” conflicts with a world where “you can choose to see or not see Nazi content as you like”. Must they be opposed? Are they complements, or two competing visions?
  • One thing I keep coming back to in this work is a chart that Mark Zuckerberg (or his ghostwriter), of all people, put out back in 2018. It’s a pretty simple, abstract chart: as content gets closer to “policy violating”, engagement goes up. That is, people have a natural tendency to gravitate towards bad things, where “bad” could mean hateful content, misinformation, calls to violence, what have you. Colloquially, think back to the web1 era of forums: flame wars got a ton of engagement, almost by definition. The corollary to this insight is that the _design_ of the experience matters a ton. You want care put into creating a system where good behavior is incentivized and bad behavior is not. If we’re focused on a model of either decentralized or democratized content moderation, aren’t we distracted from the real power: the design of the protocol or platform?
  • In thinking through governance, it seems like there’s a question of where legitimacy and values might be “anchored”, as it were. On one hand, we generally want to respect the laws and judgment of democratic countries. On the other, we want to design platforms that are resistant to surveillance, censorship, and control by unfriendly authoritarian countries. It seems like an impossible design question: make something resilient to bad governments, but accountable to good ones. Is this in fact impossible? Is the answer to somehow categorize laws or countries that are “to be respected” vs. those “to be resisted”? To only operate in a few countries? To err more fully on the side of “cyber independence by design”, or on the side of “we follow all laws in every country”?

In the end, it was a pretty fun panel. I think we drifted away from “content moderation” straight towards governance (which was supposedly a different panel), governance being “who decides the community standards?”. I think that’s because we all agreed that any work enforcing community standards is downstream of the rules as written, and of the resourcing to actually do the job. So that was nice.

Made some friends (I hope!) too.
