
How to be a Keyboard Warrior for Kamala

I guess it’s time to reveal one of my secret projects this election.

Elise, Shug and I have been working together lately on a project. We’re launching it … this week! Now, even.

This post is trying to do three things: introduce our new playbook, introduce its companion app, and ask for your help spreading both.

Before I get into it, I want to say that our intended user might very well not be you! But I still need your help. I bet there are people (or organizations) in your life that would love it. Please help us get it to them.

I remember reading once that Aaron Swartz independently invented Wikipedia when he was a literal child, years before Jimmy Wales. But Aaron was a kid! So his version had articles about Magic cards written by his 10-year-old friends. He had a great vision for a product, then built it, but his userbase never grew beyond his immediate friends. I don’t want to fall into that trap.

And now, drumroll……

INTRODUCING: the Keyboard Warrior’s Guide to Electing Kamala Harris.

https://playbook.forkamala.fyi

Here’s our motivation:

  • Most voters get their news via social media, specifically video, and often first directed by chat threads
  • Those platforms select content based on engagement, not on quality, and definitely not on what will help persuade voters to vote for Democrats
  • Normal people can make a big difference in the election by elevating the most persuasive content and arguments

Or, to put it another way: I, like many other Americans, spend much of my life online. I bet you do too. It’s where I talk to strangers, family, and friends. It makes sense that online is a place where persuasion has impact, for good or ill.

So we made a playbook for how, exactly, to do this smarter than “post everything that makes you angry to Facebook/Twitter and hope for the best”. It’s a combination of our expertise in integrity work, plus my time in 2020, campaigns, and so on. I think it’s quite good!

ALSO INTRODUCING: Posting for Kamala — the webapp

Aka the spiritual successor to “what the fuck has Obama done so far?”

We built a (fun?) companion app full of tiktoks/tweets/articles/content that IS persuasive with swing voters —

  1. For sharing
  2. For inspiration
  3. For specifically inspiring people to make similar videos/tiktoks/tweets/posts

(And there’s more coming!)

Enjoy! But also — please spread the word

I hope you’ll do two things, please:

If you find this useful yourself — dope! Great! Dive in.

This may not be for you — but I BET you know some people who would love this. Organizations? Resistance Facebook groups? Very online moms and dads? Please help me find the people who would love this.

…But you don’t have to take my word for it

In the time it took me to write this — we got the Matthew Yglesias endorsement.

We’ve also had big orgs reach out wanting to talk about partnering, and smaller grassroots groups tell me they’re already using it. Hooray!

So, uh, things are good!

I can say more about the theoretical underpinnings of all this, if you want. But then again, the guide is pretty long, maybe just read it! And if you have any questions, let me know.

Edit: Now endorsed by Micah Sifry!


Housekeeping and transitions

So, I just announced the big news. I’m transitioning out of a formal staff relationship with Integrity Institute in favor of chilling and relaxing as a member.

And, with that, I’ve updated my now page and my then page and my projects page to be actually up-to-date!

Plus I have a ton of stuff I haven’t posted about yet. Did you know I wrote the introduction to a book!? More on that soon.

So, just for recordkeeping, here are some updates.


My big announcement:

Dear friends, members, and colleagues,

I’m writing with some important personal news. After founding and running Integrity Institute since the depths of the pandemic, I’m moving on to both pursue important projects, and also take a deep breath and relax after the nonstop grind of startup life. I’ve achieved the goal every founder should have: this organization can continue to thrive if I choose to step away. I’m excited and even eager to do so, but as you can imagine, this is bittersweet.

Over three years ago, I started calling up people I trusted to pitch them a crazy idea: we should make a think tank powered by integrity workers. Amazingly enough, they liked the idea and wanted to make it with me. Starting with a small team of about a dozen committed volunteers, we’ve attracted new members, funding, attention, and impact. We’ve secured access to, and influence with, people writing public policy around the world, people doing advocacy work, people making decisions in platform companies, academics, and more. We’ve been wildly successful.

Integrity Institute members have helped shape multiple pieces of EU policy, briefed tons of policymakers in legislative, judicial, executive, and independent agency roles, and are in deep conversation with policymakers and advocates around the world. Companies like Pinterest are changing not just their policies, but their design decisions thanks in part to us. Since we’ve started, we’ve seen an explosion of output, visibility, coordination, and confidence from integrity workers. We’ve seen policymakers become much more educated about how it all works. We’ve built a key institution in the space. And we’ve done it together: members, staff, fellows, founding fellows, partners, donors, community leaders. This has been a true team effort.

Throughout this, we’ve also grown: more members, more staff, and more ability to fully become what we set out to be at the beginning. Among those goals: be a champion for integrity workers, protect people around the world, build a stage for members to stand on, and be the sort of place that I dearly wished existed for me back in the day.

I’m proud that we’ve held consistently to a strategic identity — we’re not advocates; we are scientists. We’re not partisans for anything other than our members, our oath, and our shared diagnosis of how to fix the internet.

It’s been three years of nonstop work, and it’s time for me to go in my own direction. Right now, the most important thing I feel personally called to do is help support US democracy and elections in a way that must be outside Integrity Institute’s methods and positioning.

So! It’s time for me to sit back and enjoy this remarkable community we’ve built – as a member. I’ll also be catching up on my writing, enjoying the ability to meet my neighbors and friends in person, exploring advances in technology I’ve missed these last few years (turns out AI is a thing now!), and being more present offscreen. Plus, of course, meaningfully participating in the US 2024 election cycle.

It’s been fun, and it’s been an honor. I’ll still be around on the Slack, both enjoying the remarkable benefits of II membership and cheering on the staff as they work toward our shared mission.

Please don’t be a stranger. My email is hello@sahar.io. And you can find me on my website (sahar.io) and nascent substack (growth and what comes next), as well as all the sundry social media sites we seem to be on as a matter of course. (My most-used remains Facebook, with Bluesky a clear runner-up). I’d love to stay in touch, and wherever possible be of service to you.

Yours, and forever a champion of our shared integrity Hippocratic oath,

Sahar Massachi

Executive Director and Founder

My new now page

I’ve just announced that I’m leaving Integrity Institute. It’s a big deal! I feel great. To quote myself: “I’ve achieved the goal every founder should have: this organization can continue to thrive if I choose to step away”. So I did! :-)

I’m walking more. Exploring the Brooklyn Botanic Garden. Making friends.

Soon I’m going to fly to SF, then Philly, to see old friends.

I’m getting more in touch with being a Jew in America and what that is like. Wearing my kippah more often.

The election is coming. I am going to work on it in a way that feels urgent and important, and where only I can help. But I’m also torn, because I want to relax. Can I learn to set boundaries and work a job in a “normal” way? By which I mean — letting it be important, but not letting it overwhelm all my other commitments? Being able to sign off at 5pm each day?

Sarah and I are planning a trip to a bed and breakfast (and Shakespeare) we loved last year, and seeing if friends might want to join us last-minute.

I’m looking for a new DnD group to play with.

I’m playing kickball. Still rock climbing. I miss biking.

Projects

I invite you to join me in these:

First, I’m matchmaking my friends to jobs, housing, and each other. You can sign up for the newsletter here. Please do.

Second, I’m new to Brooklyn / Crown Heights and looking for community. Friendships, but also groups of friends that hang out together.

Third, I’m thinking in public rather more. I’m writing more, and being interviewed by podcasts. Ask me to be a guest on your podcast or publication.

Every day, I try to walk in the Brooklyn Botanic Garden, go bouldering, or hang in Prospect Park.

I’m also delighted to enjoy these:

I miss tabletop roleplaying games. In the past, I ran a Dungeons and Dragons campaign with a few friends, focusing (to the extent that can mean anything in this context) on factions, revolution, and betrayal. Now, I’m looking for a new group.

Sarah and I rock climb all the time. Our favorite place is the Cliffs at Gowanus. Wanna join us?

I have a backlog of dozens of books that I’ve bought, excitedly, but have yet to read. It’s time.


My update to the Berkman Community

Hey! I’m a Berkman/RSM fellow this year, and also still an affiliate. They asked for a mid-year check-in email to the community. It took a while. I figure, why not share it here? This is a verbatim copy of what I wrote, modulo some formatting differences.

Hey friends! My, how time flies.

As a reminder — I’m Sahar. This is my life story. This is what I’m up to now (in a more personal way). Mainly I run Integrity Institute. I was a fellow in 20-21, an affiliate since, and now I’m also an RSM fellow.

🧱 Work projects and success
  • I’m running Integrity Institute.
    • We are a think tank on how to fix the social internet, powered by our members: tech professionals who have experience on teams like integrity, trust and safety, anti-abuse, responsible design, content or behavior moderation, and so on.
  • We’ve moved from 2 co-founders and 1 staff to 2 co-founders and 5 full-time staff.
  • The recent chatter around “tech exodus” and “how do we integrate these people into civil society” is a thing we pretty much called 2 years ago. Now a big challenge is finding the funders who would be delighted to realize we exist and that we are already doing the work they wish was happening. (Do you have advice on that?)
  • People tell me that we’re the luckiest nonprofit in the world and we’re doing great! I guess I have high standards for what we could be doing. It’s an important moment.
  • I like finding ways we can partner. We have in abundance: actual workers who fix social media for a living. First-hand knowledge. We also have in abundance: organizations, governments, journalists trying to talk to us. We do not have in abundance: staff time, general operating support, a moment to breathe.
👋 Personal projects:
  • I’m getting married! 
  • I still run this blog, and I still make mixtapes. I’m behind on mixtapes; I’d love your suggestions of great music to put on mixtapes to send to my boo.
  • For fun, I still matchmake people in a romance, housing, or jobs way. Feel free to follow along or join.
  • I moved to Brooklyn! I’m in Crown Heights and would like to be part of more local (and niche) communities
🐫 Specific work examples in case you like that sort of thing
💬 Thinking projects

I’m trying to spend time writing and thinking out loud again. Things I’m trying to find the time to finally write:

  • The case for hiring integrity workers (to do integrity work or “normal” product work)
  • This work is not (or should not be) a cost center. (It’s about long-term retention and product quality!)
  • The macroeconomics of social platforms: thinking about supply, demand, and distribution for content
  • Using “integrity thinking” (incentives, supply/demand, etc) to diagnose governance failures in social media
  • More about how the answer is design and behavior moderation. “Content moderation” is a bad metaphor
  • The retweet/share/forward button is bad.
  • How to think about ranking and recommendation systems (“algorithms”). The answer is: 1. this is actually simple. 2. here’s a fun metaphor involving crazed chef robots. 3. Just look at a/b test results
  • Social media companies are actually weak and easily bullied. Even as the platforms they own are powerful and important. This is a bad thing.

It could be fun to take my ideas/bullets that could be blog posts or op-eds, and work with others to turn them into more fleshed out papers or something. Let’s think and write together. I also know that Zahra Stardust and I need to finish cowriting our thing together.

🥰 Hooray for BKC people

I want to shout out the staff and community members of BKC. It’s been delightful spending time with you, including over the last semester.

This includes pretty much all staff at RSM, my fellow fellows at RSM, and the staff at BKC. Mentioning everyone would be a fool’s errand, but some recent connections and shout-outs:

  • I had a lovely time getting to know Biella Coleman at Bruce Schneier’s party a few weeks ago
  • Tom Zick and I met as BKC fellows, stayed friends in Boston, and now I just invited her to my wedding!
  • Rebecca Rinkevitch and Sue Hendrickson and I keep running into each other at conferences! Including one where Micaela Mantegna was there
  • I met, separately, Marissa Gerchick and Joe Bak-Coleman for 1-1 hanging out time in Brooklyn lately, and I hope soon the 3 of us plus Nate Lubin can hang out all together.
  • Kathryn Hymes met Marissa and me for the best cocktails in Brooklyn the other day.
  • Susan Benesch and I had a few deep conversations. And Elodie advised my staff on how to understand the conference landscape.
  • Joanne Cheung and I had a lovely long conversation in an oddly cavernous and loud restaurant at Union Square Manhattan.

If you’ve gotten this far down the email, wow! Hooray. Please accept this cookie. 🍪


Integrity Institute at 1

It’s the one-year anniversary of our going public.

To celebrate, we made a tweet thread just listing the bigger/more public stuff we’ve done over the last year. It’s a big list. Kind of crazy to see it all in one place.

Check it out here: https://twitter.com/Integrity_Inst/status/1585301140987469824


Questions about “Web3” and “Content Moderation”

I moderated a panel at Unfinished Live a couple weeks ago. The panel was not recorded. The day’s topic was “Web3”, and the panel topic was chosen for me: Content Moderation.

Now, I really don’t like the framing of content moderation. (Or Trust and Safety). Oh well.

Here are the questions I led the session description with:

How might the traditional process of moderating content and behavior online look in a truly decentralized Web3 world? A shared protocol might be harder to edit than a central codebase; distributed data might be harder to change. However, new capabilities (like smart contracts) might improve aspects of moderation work. How might the theory and practice of integrity in Web3 compare to the systems we are accustomed to today?

And here are the (hopefully challenging) advanced questions I tried to ask:

  • One argument is that content moderation is really one manifestation of larger questions: how should platforms or protocols be designed? What are the terms of service, and how are they enforced? In short, these are questions of governance. Do you agree? Do you think narrow questions of enforcing terms of service can be separated from these larger questions?
  • As I see it, when it comes to writing and enforcing terms of service, there are two proposed alternatives to platform *dictatorship*: democratization, and decentralization. On the surface, decentralization and democratization seem opposed: a world where “the users vote to ban nazi content” conflicts with a world where “you can choose to see or not see nazi content as you like”. Must they be opposed? How are they complements vs two opposing visions?
  • One thing I keep coming back to in this work is a chart that Mark Zuckerberg (or his ghostwriter), of all people, put out back in 2018. It’s a pretty simple chart, and it’s an abstract one: as content gets closer to “policy violating”, engagement goes up. That is, people have a natural tendency to gravitate towards bad things — where “bad” could be hateful content, misinformation, calls to violence, what have you. Colloquially, think back to the web1 era of forums: flame wars would get a ton of engagement, almost by definition. The corollary to this insight is that the _design_ of the experience matters a ton. You want care put into creating a system where good behavior is incentivized and bad behavior is not. If we’re focused on a model of either decentralized or democratized content moderation, aren’t we distracted from the real power: the design of the protocol or platform?
  • In thinking through governance, it seems like there’s a question of where legitimacy and values might be “anchored”, as it were. On one hand, it seems like we generally want to respect the laws and judgment of democratic countries. On the other, we want to design platforms that are resistant to surveillance, censorship, and control by unfriendly authoritarian countries. It seems like an impossible design question: make something resilient to bad governments, but accountable to good ones. Is this in fact impossible? Is the answer to somehow categorize laws or countries that are “to be respected” vs those “to be resisted?” To only operate in a few countries? To err more fully on the side of “cyber independence by design” or on the side of “we follow all laws in every country”?

In the end, it was a pretty fun panel. I think we drifted away from “content moderation” straight towards governance (which was supposedly a different panel). Governance being “who decides community standards?”. I think that’s because we all agreed that any work enforcing community standards is downstream of the rules as written, and the resourcing to actually do your job. So that was nice.

Made some friends (I hope!) too.


My on-camera debut

A few months ago, a camera team and a few reporters came to my home. They asked me a lot of questions! It took all day. I started out in a sweatshirt — after a few hours, I started sweating. But I had to keep it on, because of visual coherence. It was draining.

It was also scary. Was I saying the right things? Would I say something I regret? How do I tell the truth as I see it without accidentally being hyperbolic, or inartful, or something else?

There was more than one reason I was sweating bullets throughout the whole thing.

I did it, though, because the reporting team was filming a documentary about social media, and they specifically wanted to talk to me. I felt like the national conversation was pretty simplistic, on the whole, and perhaps I could do my part in making it more sophisticated.

The show, Fault Lines, is also hosted on Al Jazeera, which I don’t love. (When you watch the show on YouTube, there will be a little disclaimer: “Al Jazeera is funded in whole or in part by the Qatari government”).

My on-camera time ended up being about 1 minute long, making pretty standard points. Something like: “virality is dangerous. You could change social media products to optimize for not just engagement and growth”. I hope the points I made during the other hours of footage helped nudge the overall project in a better direction.

Not sure how to feel, now that it’s over. I guess if nothing else, it was training for next time. Hopefully then it’ll be less scary.


The front cover of the alumni magazine

When I was young, I had a peculiar relationship with my college. I loved it in the way that a certain type of American liberal loves their country: it has so much promise, the people are so good, there’s a ton of embedded culture and history here that is amazing. And yet, the people running it keep making terrible choices. Like the church in Dante’s Paradiso, it’s adulterated, corrupted, attacked, compromised — but still divine.

I founded and ran a publication based on that premise, starting my first semester freshman year. It was the biggest, most important center of my identity.

We had so many adventures. We memorably liveblogged a weird student union judiciary hearing, to the hilarity of the audience and judges. We ran a political party. We helped kick out the president of the school (not the student union, the whole school). I made friends; we had generations of contributors. Alumni of the blog went on to found magazines of their own, or become hotshot national reporters, or do wonderful organizing in cities and rural areas across America.

I loved it. I loved Brandeis so much. (Still do). But it was hard to express, since my commitment to my understanding of Brandeis’ ideals often meant I clashed with the people in charge of running the organization. It didn’t help that I was a teenager. To this day I have regrets about different fights I picked, or positions I took, or things I said.

At the end of senior year, something important happened. The “establishment” (did it even exist?) sent out an olive branch (or was I just overthinking it?). I got the David A. Alexander ’79 Memorial Award for Social Consciousness and Activism. An official object, presented to me on a stage, for the work that I did.

It was one of the happiest days of my life. It felt like people understood what I was trying to do — love my school, love the people in it, and be driven by that love to try to improve things.

Years later, I became a member of the Louis D. Brandeis Legacy Fund at the university. Again, it felt like my home loved me back.

None of that compares to what happened earlier this month.

Gideon Klionsky posting on my Facebook wall: "The front of the fucking alumni magazine?!"

In October, Laura Gardner, editor of the Brandeis Magazine (and the Executive Director of Strategic Communications) emailed me. She saw the Protocol post announcing the launch of the Integrity Institute and thought it might lead to a great feature story. She connected me with the amazing Julia Klein, and soon we were on the phone (and videochat) talking for hours and hours. We talked about my times at Brandeis, my parents, my life after. We talked about hopes and dreams and fears. How I grew. How I changed. I even learned some family history in the course of fact-checking with my mom.

In December, Mike Lovett, the university photographer, visited my apartment, and we did a photoshoot. It was so fun! He taught me about lighting, and angles, and shared some stories about the other people he has photographed in his time. (Pro tip for the Brandeis children — you do NOT wear another college’s hoodie to a photoshoot for yours. Come on, you know better than that).

Finally, in early March, I got the physical, printed magazine with a little surprise — they made my story the front cover. You can read it here. I’m glad my parents got to see this day.

But also I’m glad for me. I love Brandeis. I miss it. I wish I could go back. It’s nice to see they love me too.


I’m a Roddenberry Fellow!

Oops! I realized I forgot to tell you.

So, I had been a little cagey about what I’ve been up to lately, now that my year as a Berkman-Klein Fellow is over (now I’m a Berkman-Klein Affiliate, which is pretty similar, but that’s another story).

So here’s the news! I’m a Roddenberry Fellow. Yes, it’s named after Gene Roddenberry. I have been since January.

Per the website: The fellowship is “awarded to extraordinary leaders and advocates who use new and innovative strategies to safeguard human rights and ensure an equal and just society for all.”

The fellowship is for me to help grow Integrity Institute. So far, I’ve met the other fellows. They are very cool. We did a weeklong online “retreat”. We talked about the politics of Star Trek. It was pretty nice.

Thank you to Russ Finkelstein, who pushed me to apply, and is in general a wonderful person.


“Integrity as city planning” meets actual city planners

This one is fun. This one is really fun.

You may remember that a while ago I published my big piece on Governing the city of atomic supermen in MIT Tech Review. I really liked it, the world seemed to like it, it was a big deal! The central conceit of the piece is that social media is like a new kind of city, and that integrity work is a type of new city planning.

So! There’s a community of people who are obsessed with actual, real, cities. One of them, Jeff Wood of The Overhead Wire, reached out to me, and we had an amazing conversation. Him from the city planner / city advocate world, me from the internet.

You might think that this gimmick would only last for about 20 minutes of conversation, and then we’d run out of things to talk about. That’s reasonable, but it turns out you’re wrong! We just kept talking, and the longer we went, the more interesting it got.

I can’t think of a more fun or more deep podcast episode I’ve done. If you haven’t listened to any yet, this is the one to check out.


https://usa.streetsblog.org/2022/02/10/talking-headways-podcast-treating-social-media-like-a-city/

We talked about fun new things like:

  • To what extent is social media like the mass adoption of the automobile?
  • Are company growth metrics the analogue of “vehicle miles traveled” goals/grants by the Department of Transportation?
  • Is there a coming collapse of rotten social networks due to all the spam and bots? Is that like climate change?
  • I learned a lot about hot new topics in urbanism! Like the four-step model.
  • Induced demand in freeways as an analogue to bad faith accusations of “censorship” when social media companies try to crack down on abuse.
  • Path dependency is a hell of a drug.
  • Corruption, the history of asphalt, and ethics in social media / city governance. Building code corruption and “let’s bend the rules for our large advertisers” corruption.

My quick notes on the conversation:

  • First 14 minutes or so: Intro to me, integrity design, theory of integrity. Mostly stuff you might have heard before elsewhere.
  • Minutes 14 – 23: Do you actually need to bake in integrity design from the beginning? How is growing a social app similar to (or not) growing a city from a village? Online vs in-person social behavior.
  • Minute 19: A lot of the work has shaded into organizational design. What I imagine they teach you in MBA school. How to set up an organization with the right incentives.

“The growth of a city is in some sense bounded by the number of homes you can build in a period of time, right? You’re not going to see a club of 15 artists turn into a metropolis of 2 million people in the span of two weeks. It’s just physically impossible to do it. And that gives people some human-scale time to figure out the emerging problems and have some time to experiment with solutions as the city grows. And that’s a sort of growth. That’s a story about the growth of a small platform to a big one, but it’s also the same kind of thing of just how lies are spread, how hate speech is spread — any sort of behavior.” (Minute 22)
  • Minute 24: Power users of social media. Power users of automobiles. How are they similar and different?
  • Minute 30: The reason spam is a solved* problem on email is that the email providers have a sort of beneficent cartel. (Before Evelyn Douek corrects me — “solved” in the sense that we’re not having a panic about how Gmail is destroying society, or that Outlook’s spam filter isn’t working)
  • Minute 35: Jeff Wood brings up a new metaphor. “20 is plenty” (as a speed limit for cars). How well does it work for online?
  • Minute 40: My pet metaphor for integrity work — platforms are often a gravity well that incentivizes bad behavior. Doing the wrong thing feels like walking downhill, doing the right thing takes effort.
  • Minute 41-45: Vehicle Miles Traveled, the 4-step model, departments of transportation. Cars and social media and bad metrics. Bad metrics -> bad choices
  • Minute 46 – 51: If at first you don’t do the right thing, then you try to do the right thing, then people will complain. Whether it’s suburban sprawl or not cracking down on spammers. They’ll act all righteous and go yell in public meetings. But in the end they did something wrong (in the social media case) or were receiving an unjust subsidy that you’re finally removing (in both cases).
  • Minute 53 – 58: We’ve been talking design here. But let’s not forget actual, literal corruption.
  • Minutes 58 onwards: Ending

These notes don’t do it justice. It was just such a delight. Grateful to Jeff Wood for a great conversation.


A right-libertarian take on integrity work

Back in 2020, you might remember that I had yet to commit to integrity work as my big next focus of ideas and identity. What was I focused on instead? Political economy. Specifically, I was in the orbit of the lovely Law and Political Economy project. They’re great, check them out!

You might particularly remember that I went on one of my first ever podcast appearances, with my friend Kevin Wilson, Libertarian. We talked about a right-libertarian case for breaking up Facebook. It was fun!

Well, it’s been over a year since then, and I went back on his show. This time, I talked about Integrity Institute and some of my ideas for libertarian-friendly ways to do integrity work.

The title of the episode is: Can you fix social media by targeting behavior instead of speech? I really liked it. It was fun, nuanced, and far-ranging. We went so far over time that Kevin recorded a full bonus spillover episode on “how do you make this beautiful future actually happen”.

I’m told that for some of my biggest fans (aka my parents) this is their favorite podcast I’ve been on. Kevin does a great job asking questions that both give me time to sketch out a full answer and push me out of my comfort zone. Give it a listen.


Some thoughts on human experience design

There’s an organization, All Tech Is Human. They’re pretty cool! At Integrity Institute, we’re figuring out how to be good organizational friends with them.

They asked me, and a bunch of other people, to answer some questions about technology and society. I like my answers. Here they are! And here’s the link to the full report. (Uploaded to the Internet Archive instead of Scribd — thanks Mek!)

In it, I try to keep the focus on people and power, rather than “tech”. Also, content moderation won’t save us, care must be taken with organizational design, and a cameo by the English Civil War. Plus — never forget Aaron Swartz. Let me know what you think!

Tell us about your current role:

I run the Integrity Institute. We are a think tank powered by a community of integrity professionals: tech workers who have on-platform experience mitigating the harms that can occur on or be caused by the social internet.

We formed the Integrity Institute to advance the theory and practice of protecting the social internet. We believe in a social internet that helps individuals, societies, and democracies thrive.

We know the systemic causes of problems on the social internet and how to build platforms that mitigate or avoid them. We confronted issues such as misinformation, hate speech, election interference, and many more from the inside. We have seen successful and unsuccessful attempted solutions.

Our community supports the public, policymakers, academics, journalists, and technology companies themselves as they try to understand best practices and solutions to the challenges posed by social media.

In your opinion, what does a healthy relationship with technology look like?

Technology is a funny old word. We’ve been living with technology for thousands of years. Technology isn’t new; only its manifestation is. What did a healthy relationship to technology look like 50 years ago? 200 years ago?

Writing is a form of technology. Companies are a form of technology. Government is a form of technology. They’re all inventions we created to help humankind. They are marvelously constructive tools that unleash a lot of power, and a lot of potential to alleviate human suffering. Yet, in the wrong hands, they can do correspondingly more damage.

Technology should help individuals, societies, and democracy thrive. But it is a truism to say that technology should serve us, not the other way around. So let’s get a little bit more specific.

A healthy relationship to technology looks like a healthy relationship with powerful people. People, after all, own or control technology. Are they using it for social welfare? Are they using it democratically? Are they using it responsibly? Are they increasing human freedom, or diminishing it?

We will always have technology. Machines and humankind have always coexisted. The real danger is in other humans using those machines for evil (or neglect). Let’s not forget.

What individuals are doing inspiring work toward improving our tech future?

If we lived in a better world, Aaron Swartz would no doubt be on top of my list. Never forget.

If one person’s free speech is another’s harm and content moderation can never be perfect, what will it take to optimize human and algorithmic content moderation for tech users as well as policymakers? What steps are needed for optimal content moderation?

Well, first off, let’s not assume that content moderation is the best tool here. All communications systems, even ones that have no ranking systems or recommendation algorithms, make implicit or explicit choices about affordances. That is, some behavior is rewarded, and some isn’t. Those choices are embedded in code and design. Things like: “How often can you post before it’s considered spam?” or “Can you direct-message people you haven’t met?” or “Is there a reshare button?”

Default social platforms have those settings tuned to maximize engagement and growth — at the expense of quality. Sadly, it turns out, content that has high engagement tends to be, well, bad. The builders of those platforms chose to reward the wrong behavior, and so the wrong behavior runs rampant.

Fixing this can be done through technical tweaks. Things like feature limits, dampers to virality, and so on. But companies must set up internal systems so that engineers who make those changes are rewarded, not punished. If the companies that run platforms changed their internal incentive structures, then many of these problems would go away — before any content moderation would be needed.
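To make that concrete, here is a minimal sketch, in Python, of what those knobs could look like. This is not any real platform’s code; every name and number is hypothetical, purely to illustrate that “how often can you post?” and “how far can a reshare chain go?” are explicit, tunable product decisions.

```python
from dataclasses import dataclass

# A hypothetical bundle of affordance settings. Every value here is a product
# decision that someone makes, deliberately or by default.
@dataclass
class AffordanceConfig:
    max_posts_per_hour: int = 20         # "How often can you post before it's considered spam?"
    allow_dm_to_strangers: bool = False  # "Can you direct-message people you haven't met?"
    reshare_enabled: bool = True         # "Is there a reshare button?"
    max_reshare_depth: int = 2           # A damper on virality: cut off long forwarding chains.

def may_post(posts_in_last_hour: int, cfg: AffordanceConfig) -> bool:
    """Simple rate limit: normal posting is fine, spam-like volume gets throttled."""
    return posts_in_last_hour < cfg.max_posts_per_hour

def may_reshare(chain_depth: int, cfg: AffordanceConfig) -> bool:
    """Virality damper: after a few hops, one-click amplification stops."""
    return cfg.reshare_enabled and chain_depth < cfg.max_reshare_depth
```

The particular numbers don’t matter; what matters is that tuning settings like these toward quality instead of raw engagement is a design change that never requires reading anyone’s content.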

We’ll always need some content moderators. But they should be a last resort, not a first line of defense.

How can we share information and best practices so that smaller platforms and startups can create ethical and human-centered systems at the design stage?

Thanks for this softball question! I think we’re doing that pretty well over at the Integrity Institute. We are a home for integrity professionals at all companies. Our first, biggest, and forever project has been building the community of people like us. In that community, people can swap tips, help each other learn best practices, and learn in a safe environment.

Drawing from that community, we brief startups, platforms, and other stakeholders on the emerging knowledge coming out of that community. We’re defining a new field, and it’s quite exciting.

Going more abstract, however, I think the problem is also one of defaults and larger systems. How easy is it for a startup to choose ethics over particularly egregious profits? How long will that startup survive (and how long will the CEO stay in charge)? The same goes for larger companies, of course.

Imagine a world where doing the right thing gets your company out-competed, or you personally fired. Pretty bleak, huh?

We’re trying to fix that, in part by enforcing an integrity Hippocratic oath. This would be a professional oath that all integrity workers swear by — to put the public interest first, to tell the truth, and more. But that’s only one small piece of the puzzle.

What makes YOU optimistic that we, as a society, can build a tech future aligned with our human values?

In 1649, the people of England put their king on trial, found him guilty of “unlimited and tyrannical power,” and cut off his head. I imagine this came as quite a shock to him. More interestingly, perhaps, I imagine that it came as a shock to the people themselves.

In extraordinary times, people — human beings — can come together to do things that seemed impossible, unthinkable, even sacrilegious just a few days before.

Within living memory in this country, schoolchildren were drilled to dive under desks due to threats of global nuclear Armageddon. Things must have seemed terrible. Yet, those children grew up, bore children, and made a gamble that the future would indeed be worth passing on to them. I think they were right.

We live in interesting times. That’s not necessarily a great thing: boring, stable, peaceful times have a lot going for them. It doesn’t seem like we have much of a choice, though. In interesting times, conditions can change quickly. Old ideas are shown to be hollow and toothless. Old institutions are exposed as rotten. The new world struggles to be born.

I look around and I see immense possibilities all around me. It could go very badly. We could absolutely come out of this worse than we came in. Anyone — any future — can come out on top. So, why not us? Why not team human?


Governing the city of atomic supermen

Social media is a new city, great and terrible. It’s also a dictatorship where all the residents have super powers. People can teleport, fly, churn out convincing android minions, disguise themselves perfectly, and coordinate telepathically.

How do you deal with this? What’s a fair way to govern a place where it’s hard to tell a robot minion from a real person, and people can assume new identities at will?

Thankfully, MIT Tech Review allowed me to ask and answer that question in a fancy publication!

Here’s the full article: How to save our social media by treating it like a city

Thank you to my Berkman fellow friends for helping me edit and polish it. Thank you also to a bunch of other friends (and family) too. It took months, and was a team effort.

Here’s my tweet announcing it:

Some quick points if you’re in a hurry:

  • Social media is like a new kind of city. There are good parts and bad parts. Right now, it’s a city of atomic supermen — people have tons of powers that they don’t really have in the physical world.
  • Our rules, norms, and intuitions right now assume that you *can’t*, for example, teleport.
  • Eventually, we’re going to figure out the rules and norms that work really well for that kind of world. For now, we’re mostly stuck with the norms we’ve evolved till today.
  • So let’s change the physics of the city to make the residents a little less superpowered.
  • Make it harder to make fake accounts. Make new accounts prove themselves with a “driving test” before they get access to the most abusable features. Put stringent rate limits on behavior that could be used for evil. (A rough sketch of what that gating could look like follows this list.)
  • Notice that none of this involves looking at *content* — if we design our online cities well, with speed bumps and parks and gardens and better physics, we can lessen the need for content moderation. This is the alternative to “censorship”.
  • Much, possibly most, of the integrity problem on platforms is spam of one sort or another. We know how to fight spam.
  • Now to the next point: corporate behavior. You can create an amazing set of rules for your platform. But they amount to less than a hill of beans if you don’t enforce them. And enforcing unevenly is arguably worse than not enforcing at all.
  • If you try to fix your system, perhaps by fixing a bug that allowed spammy behavior — there will be entities that lose. The ones that were benefitting from the loophole. Don’t let them stop you by loudly complaining — otherwise you can never fix things!
  • And now to the biggest point: listen to integrity workers. My coworkers and I had actual jobs where we tried to fix the problem. We are steeped in this. We know tons of possible solutions. We study details of how to fix it. We don’t always win internal battles, of course.
  • But we exist. Talk to us. Other integrity workers have their own frameworks that are equally or more insightful. They’re wonderful people. Help us — help them — do their jobs and win the arguments inside companies.
  • PS — Join the Integrity Institute.
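Here is a rough, purely illustrative sketch of that “driving test” idea in Python. The feature names, thresholds, and signals are invented for the example; the point is that access to abusable features is gated on an account proving itself, not on what anyone says.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical set of the most abusable features, which a brand-new account
# should not get on day one.
GATED_FEATURES = {"mass_dm", "bulk_group_invites", "automated_posting"}

def passed_driving_test(account_age: timedelta, verified_phone: bool, strikes: int) -> bool:
    """An account 'proves itself' through age, light verification, and a clean record."""
    return account_age >= timedelta(days=14) and verified_phone and strikes == 0

def can_use(feature: str, created_at: datetime, verified_phone: bool, strikes: int) -> bool:
    """created_at is assumed to be a timezone-aware UTC datetime."""
    if feature not in GATED_FEATURES:
        return True  # Ordinary features stay open to everyone.
    age = datetime.now(timezone.utc) - created_at
    return passed_driving_test(age, verified_phone, strikes)
```

Again, nothing in this check reads anyone’s content; it only changes the physics for accounts that haven’t yet earned trust.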


On the Tech Policy Press podcast

I forgot to mention this a while ago: Jeff and I were on a second fancy podcast when we launched. This time — Tech Policy Press with Justin Hendrix.

It was fun! Justin really understands these issues and asks good questions.

Plus, as a bonus, Aviv was brought on for part two. Worlds collide.


I’ll be on a panel at NYU on Dec 15th

Update: It went great! Here’s the recap link to watch it and get a summary

Here’s what the recap said about my part:

As a former Facebook employee, Sahar Massachi stressed how the organizational dynamics inside social media companies influence their products. For example, to increase profit, Facebook optimizes for metrics like growth and engagement, which often tend to fuel harmful content. Although platforms have integrity workers to help mitigate these harms, the focus on engagement often undercuts their efforts. Only by changing the incentives, he said, can we change how social media companies approach harm on their platforms. Massachi co-founded the Integrity Institute to build a community of integrity workers to support the public, policymakers, academics, journalists, and social media companies themselves as they try to solve the problems posed by social media.


So, as part of my work with the Integrity Institute, I get to be on a fancy panel.

Wanna come?

Here are the details, copied from the website:

Reducing Harm on Social Media: Research & Design Ideas

Wednesday, December 15, 2021  |  3:00 – 4:15pm ET

When social media platforms first launched nearly two decades ago, they were seen as a force for good – a way to connect with family and friends, learn and explore new ideas, and engage with social and political movements. Yet, as the Facebook Papers and other research have documented, these same platforms have become vectors of misinformation, hate speech, and polarization.

With attention around social media’s impact on society at an all-time high, this event gathers researchers and practitioners from across the academic, policy, and tech communities to discuss various approaches and interventions to make social media a safer and more civil place.

Panelists

  • Jane Lytvynenko, Senior Research Fellow, Technology and Social Change Project, Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy (moderator)
  • Niousha Roshani, Deputy Director, Content Policy & Society Lab, Stanford University’s Program on Democracy and the Internet
  • Rebekah Tromble, Director, Institute for Data, Democracy & Politics, George Washington University
  • Joshua A. Tucker, Co-Director, New York University’s Center for Social Media and Politics
  • Sahar Massachi, Co-Founder and Executive Director, Integrity Institute

I’m on the Lawfare Podcast

As part of the Integrity Institute rollout, Jeff and I were on the Lawfare podcast with Evelyn Douek and Quinta Jurecic. It actually turned out really well!

The editing was polished and lightweight enough that you can’t really tell that it was edited, but also thorough enough that we come across as crisper than we are in real life.

And we talked for an hour! I think it’s a good overview of what we’re thinking right now and how we see the world. Check it out, I’m proud of it.

https://www.lawfareblog.com/lawfare-podcast-what-integrity-social-media