Social media is a new city, great and terrible. It’s also a dictatorship where all the residents have superpowers. People can teleport, fly, churn out convincing android minions, disguise themselves perfectly, and coordinate telepathically.
How do you deal with this? What’s a fair way to govern a place where it’s hard to tell a robot minion from a real person, and people can assume new identities at will?
Thankfully, MIT Tech Review allowed me to ask and answer that question in a fancy publication!
Thank you to my Berkman fellow friends for helping me edit and polish it. Thank you also to a bunch of other friends (and family) too. It took months, and was a team effort.
Some quick points if you’re in a hurry:
- Social media is like a new kind of city. There are good parts and bad parts. Right now, it’s a city of atomic supermen — people have tons of powers that they don’t really have in the physical world.
- Our rules, norms, and intuitions currently assume that you *can’t*, for example, teleport.
- Eventually, we’re going to figure out the rules and norms that work well for that kind of world. For now, we’re mostly stuck with the norms we’ve evolved up to this point.
- So let’s change the physics of the city to make the residents a little less superpowered.
- Make it harder to create fake accounts. Require new accounts to prove themselves with a “driving test” before they get access to the most abusable features. Put stringent rate limits on behavior that could be used for evil.
- Notice that none of this involves looking at *content* — if we design our online cities well, with speed bumps and parks and gardens and better physics, we can lessen the need for content moderation. This is the alternative to “censorship”.
- Much, possibly most, of the integrity problem on platforms is spam of one sort or another. We know how to fight spam.
- Now to the next point: corporate behavior. You can create an amazing set of rules for your platform. But they amount to less than a hill of beans if you don’t enforce them. And enforcing unevenly is arguably worse than not enforcing at all.
- If you try to fix your system, perhaps by closing a loophole that allowed spammy behavior, there will be entities that lose: the ones that were benefiting from the loophole. Don’t let them stop you by loudly complaining — otherwise you can never fix things!
- And now to the biggest point: listen to integrity workers. My coworkers and I had actual jobs where we tried to fix the problem. We are steeped in this. We know tons of possible solutions. We study details of how to fix it. We don’t always win internal battles, of course.
- But we exist. Talk to us. Other integrity workers have their own frameworks that are equally or more insightful. They’re wonderful people. Help us — help them — do their jobs and win the arguments inside companies.
- PS — Join the Integrity Institute.
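To make the “driving test” and rate-limit ideas above concrete, here’s a minimal sketch in Python. Everything in it is an assumption for illustration: the tier names, the one-week graduation rule, and the per-hour limits are invented, not any platform’s actual policy. The point it demonstrates is the one in the list: abuse can be throttled by changing the physics of the platform, without ever inspecting content.

```python
import time
from collections import defaultdict, deque

# Hypothetical per-hour limits on abusable actions, by trust tier.
# "learner" = new account that hasn't passed its driving test yet.
LIMITS = {
    "learner":  {"message": 5,   "mention": 3,  "share": 10},
    "licensed": {"message": 200, "mention": 50, "share": 500},
}

WINDOW_SECONDS = 3600          # rolling one-hour window
TEST_PERIOD = 7 * 24 * 3600    # assumed: one week of clean history to graduate

class Account:
    def __init__(self, created_at):
        self.created_at = created_at
        self.violations = 0
        self.history = defaultdict(deque)  # action -> recent timestamps

    def tier(self, now):
        # "Driving test": enough account age plus no violations
        # earns the looser, licensed-tier limits.
        aged = now - self.created_at >= TEST_PERIOD
        return "licensed" if aged and self.violations == 0 else "learner"

    def allow(self, action, now=None):
        now = time.time() if now is None else now
        window = self.history[action]
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()  # forget events older than the window
        if len(window) >= LIMITS[self.tier(now)][action]:
            return False      # limit hit: deny, no content inspection needed
        window.append(now)
        return True
```

Under these made-up numbers, a brand-new account gets five messages per hour and is refused the sixth, while an account that has aged past its test period sends freely. A real system would use sturdier signals than age alone, but the shape, tiered trust plus rolling-window rate limits, is the mechanism the bullet describes.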