
My update to the Berkman Community

Hey! I’m a Berkman/RSM fellow this year, and also still an affiliate. They asked for a mid-year check-in email to the community. It took a while. I figure, why not share it here? This is a verbatim copy of what I wrote, modulo some formatting differences.

Hey friends! My, how time flies.

As a reminder — I’m Sahar. This is my life story. This is what I’m up to now (in a more personal way). Mainly I run Integrity Institute. I was a fellow in 20-21, an affiliate since, and now I’m also an RSM fellow.

đŸ§± Work projects and success
  • I’m running Integrity Institute.
    • We are a think tank on how to fix the social internet, powered by our members: tech professionals who have experience on teams like integrity, trust and safety, anti-abuse, responsible design, content or behavior moderation, and so on.
  • We’ve moved from 2 co-founders and 1 staff to 2 co-founders and 5 full-time staff.
  • The recent chatter around “tech exodus” and “how do we integrate these people into civil society” is a thing we pretty much called 2 years ago. Now a big challenge is finding the funders who would be delighted to realize we exist and that we are already doing the work they wish was happening. (Do you have advice on that?)
  • People tell me that we’re the luckiest nonprofit in the world and we’re doing great! I guess I have high standards for what we could be doing. It’s an important moment.
  • I like finding ways we can partner. We have in abundance: actual workers who fix social media for a living. First-hand knowledge. We also have in abundance: organizations, governments, journalists trying to talk to us. We do not have in abundance: staff time, general operating support, a moment to breathe.
👋 Personal projects:
  • I’m getting married! 
  • I still run this blog, and I still make mixtapes. I’m behind on mixtapes; I would love your suggestions of great music to put on the ones I send to my boo.
  • For fun, I still matchmake people in a romance, housing, or jobs way. Feel free to follow along or join.
  • I moved to Brooklyn! I’m in Crown Heights and would like to be part of more local (and niche) communities
đŸ« Specific work examples in case you like that sort of thing
💬 Thinking projects

I’m trying to spend time writing and thinking out loud again. Things I’m trying to find the time to finally write:

  • The case for hiring integrity workers (to do integrity work or “normal” product work)
  • This work is not (or should not be) a cost center. (It’s about long-term retention and product quality!)
  • The macroeconomics of social platforms: thinking about supply, demand, and distribution for content
  • Using “integrity thinking” (incentives, supply/demand, etc) to diagnose governance failures in social media
  • More about how the answer is design and behavior moderation. “Content moderation” is a bad metaphor
  • The retweet/share/forward button is bad.
  • How to think about ranking and recommendation systems (“algorithms”). The answer is: 1. This is actually simple. 2. Here’s a fun metaphor involving crazed chef robots. 3. Just look at A/B test results.
  • Social media companies are actually weak and easily bullied. Even as the platforms they own are powerful and important. This is a bad thing.

It could be fun to take my ideas/bullets that could be blog posts or op-eds, and work with others to turn them into more fleshed out papers or something. Let’s think and write together. I also know that Zahra Stardust and I need to finish cowriting our thing together.

đŸ„° Hooray for BKC people

I want to shout out the staff and community members of BKC. It’s been delightful spending time with you, including over the last semester.

This includes pretty much all staff at RSM, my fellow fellows at RSM, and the staff at BKC. Mentioning everyone would be a fool’s errand, but some recent connections and shout-outs:

  • I had a lovely time getting to know Biella Coleman at Bruce Schneier’s party a few weeks ago
  • Tom Zick and I met as BKC fellows, stayed friends in Boston, and now I just invited her to my wedding!
  • Rebecca Rinkevitch and Sue Hendrickson and I keep running into each other at conferences! Including one where Micaela Mantegna was there too.
  • I met, separately, Marissa Gerchick and Joe Bak-Coleman for 1-1 hanging out time in Brooklyn lately, and I hope soon the 3 of us plus Nate Lubin can hang out all together.
  • Kathryn Hymes met Marissa and me for the best cocktails in Brooklyn the other day.
  • Susan Benesch and I had a few deep conversations. And Elodie advised my staff on how to understand the conference landscape.
  • Joanne Cheung and I had a lovely long conversation in an oddly cavernous and loud restaurant at Union Square Manhattan.

If you’ve gotten this far down the email, wow! Hooray. Please accept this cookie. 🍪


Integrity Institute at 1

It’s the one-year anniversary of our going public.

To celebrate, we made a tweet thread just listing the bigger/more public stuff we’ve done over the last year. It’s a big list. Kind of crazy to see it all in one place.

Check it out here: https://twitter.com/Integrity_Inst/status/1585301140987469824


Questions about “Web3” and “Content Moderation”

I moderated a panel at Unfinished Live a couple weeks ago. The panel was not recorded. The day’s topic was “Web3”, and the panel topic was chosen for me: Content Moderation.

Now, I really don’t like the framing of content moderation. (Or Trust and Safety). Oh well.

Here are the questions I led the session description with:

How might the traditional process of moderating content and behavior online look in a truly decentralized Web3 world? A shared protocol might be harder to edit than a central codebase; distributed data might be harder to change. However, new capabilities (like smart contracts) might improve aspects of moderation work. How might the theory and practice of integrity in Web3 compare to the systems we are accustomed to today?

And here are the (hopefully challenging) advanced questions I tried to ask:

  • One argument is that content moderation is really one manifestation of larger questions: how should platforms or protocols be designed? What are the terms of service, and how are they enforced? In short, these are questions of governance. Do you agree? Do you think narrow questions of enforcing terms of service can be separated from these larger questions?
  • As I see it, when it comes to writing and enforcing terms of service, there are two proposed alternatives to platform *dictatorship*: democratization, and decentralization. On the surface, decentralization and democratization seem opposed: a world where “the users vote to ban nazi content” conflicts with a world where “you can choose to see or not see nazi content as you like”. Must they be opposed? How are they complements vs two opposing visions?
  • One thing I keep coming back to in this work is a chart that Mark Zuckerberg (or his ghostwriter), of all people, put out back in 2018. It’s a pretty simple chart, and an abstract one: as content gets closer to “policy violating”, engagement goes up. That is, people have a natural tendency to gravitate towards bad things — where “bad” could be hateful content, misinformation, calls to violence, what have you. Colloquially, think of the web1 era of forums: flame wars would get a ton of engagement, almost by definition. The corollary to this insight is that the _design_ of the experience matters a ton. You want care put into creating a system where good behavior is incentivized and bad behavior is not. If we’re focused on a model of either decentralized or democratized content moderation, aren’t we distracted from the real power: the design of the protocol or platform?
  • In thinking through governance, it seems like there’s a question of where legitimacy and values might be “anchored”, as it were. On one hand, it seems like we generally want to respect the laws and judgment of democratic countries. On the other, we want to design platforms that are resistant to surveillance, censorship, and control by unfriendly authoritarian countries. It seems like an impossible design question: make something resilient to bad governments, but accountable to good ones. Is this in fact impossible? Is the answer to somehow categorize laws or countries that are “to be respected” vs those “to be resisted”? To only operate in a few countries? To err more fully on the side of “cyber independence by design” or on the side of “we follow all laws in every country”?

In the end, it was a pretty fun panel. I think we drifted away from “content moderation” straight towards governance (which was supposedly a different panel). By governance, I mean: who decides community standards? I think that’s because we all agreed that any work enforcing community standards is downstream of the rules as written, and of the resourcing to actually do your job. So that was nice.

Made some friends (I hope!) too.


My on-camera debut

A few months ago, a camera team and a few reporters came to my home. They asked me a lot of questions! It took all day. I started out in a sweatshirt — after a few hours, I started sweating. But I had to keep it on, because of visual coherence. It was draining.

It was also scary. Was I saying the right things? Would I say something I regret? How do I tell the truth as I see it without accidentally being hyperbolic, or inartful, or something else?

There was more than one reason I was sweating bullets throughout the whole thing.

I did it, though, because the reporting team was filming a documentary about social media, and they specifically wanted to talk to me. I felt like the national conversation was pretty simplistic, on the whole, and perhaps I could do my part in making it more sophisticated.

The show, Fault Lines, is also hosted on Al Jazeera, which I don’t love. (When you watch the show on YouTube, there will be a little disclaimer: “Al Jazeera is funded in whole or in part by the Qatari government”).

My on-camera time ended up being about 1 minute long, making pretty standard points. Something like: “virality is dangerous. You could change social media products to optimize for not just engagement and growth”. I hope the points I made during the other hours of footage helped nudge the overall project in a better direction.

Not sure how to feel, now that it’s over. I guess if nothing else, it was training for next time. Hopefully then it’ll be less scary.


The front cover of the alumni magazine

When I was young, I had a peculiar relationship with my college. I loved it in the way that a certain type of American liberal loves their country: it has so much promise, the people are so good, there’s a ton of embedded culture and history here that is amazing. And yet, the people running it keep making terrible choices. Like the church in Dante’s Paradiso, it’s adulterated, corrupted, attacked, compromised — but still divine.

I founded and ran a publication based on that premise, starting my first semester of freshman year. It was the biggest, most important center of my identity.

We had so many adventures. We memorably liveblogged a weird student union judiciary hearing, to the hilarity of the audience and judges. We ran a political party. We helped kick out the president of the school (not the student union, the whole school). I made friends; we had generations of contributors. Alumni of the blog went on to found magazines of their own, become hotshot national reporters, or do wonderful organizing in cities and rural areas across America.

I loved it. I loved Brandeis so much. (Still do). But it was hard to express, since my commitment to my understanding of Brandeis’ ideals often meant I clashed with the people in charge of running the organization. It didn’t help that I was a teenager. To this day I have regrets about different fights I picked, or positions I took, or things I said.

At the end of senior year, something important happened. The “establishment” (did it even exist?) sent out an olive branch (or was I just overthinking it?). I got the David A. Alexander ’79 Memorial Award for Social Consciousness and Activism. An official object, presented to me on a stage, for the work that I did.

It was one of the happiest days of my life. It felt like people understood what I was trying to do — love my school, love the people in it, and be driven by that love to try to improve things.

Years later, I became a member of the Louis D. Brandeis Legacy Fund at the university. Again, it felt like my home loved me back.

None of that compares to what happened earlier this month.

Gideon Klionsky posting on my Facebook wall: "The front of the fucking alumni magazine?!"

In October, Laura Gardner, editor of the Brandeis Magazine (and the Executive Director of Strategic Communications) emailed me. She saw the Protocol post announcing the launch of the Integrity Institute and thought it might lead to a great feature story. She connected me with the amazing Julia Klein, and soon we were on the phone (and videochat) talking for hours and hours. We talked about my times at Brandeis, my parents, my life after. We talked about hopes and dreams and fears. How I grew. How I changed. I even learned some family history in the course of fact-checking with my mom.

In December, Mike Lovett, the university photographer, visited my apartment, and we did a photoshoot. It was so fun! He taught me about lighting, and angles, and shared some stories about the other people he photographed in his time. (Pro tip to the Brandeis children — you do NOT wear another college’s hoodie when you show up for a photoshoot for yours. Come on, you know better than that.)

Finally, in early March, I got the physical, printed magazine with a little surprise — they made my story the front cover. You can read it here. I’m glad my parents got to see this day.

But also I’m glad for me. I love Brandeis. I miss it. I wish I could go back. It’s nice to see they love me too.


I’m a Roddenberry Fellow!

Oops! I realized I forgot to tell you.

So, I had been a little cagey about what I’ve been up to lately, now that my year as a Berkman-Klein Fellow is over (now I’m a Berkman-Klein Affiliate, which is pretty similar, but that’s another story).

So here’s the news! I’m a Roddenberry Fellow, and have been since January. Yes, it’s named after Gene Roddenberry.

Per the website: The fellowship is “awarded to extraordinary leaders and advocates who use new and innovative strategies to safeguard human rights and ensure an equal and just society for all.”

The fellowship is for me to help grow Integrity Institute. So far, I’ve met the other fellows. They are very cool. We did a weeklong online “retreat”. We talked about the politics of Star Trek. It was pretty nice.

Thank you to Russ Finkelstein, who pushed me to apply, and is in general a wonderful person.


“Integrity as city planning” meets actual city planners

This one is fun. This one is really fun.

You may remember that a while ago I published my big piece on Governing the city of atomic supermen in MIT Tech Review. I really liked it, the world seemed to like it, it was a big deal! The central conceit of the piece is that social media is like a new kind of city, and that integrity work is a type of new city planning.

So! There’s a community of people who are obsessed with actual, real, cities. One of them, Jeff Wood of The Overhead Wire, reached out to me, and we had an amazing conversation. Him from the city planner / city advocate world, me from the internet.

You might think that this gimmick would only last for about 20 minutes of conversation, and then we’d run out of things to talk about. That’s reasonable, but it turns out you’re wrong! We just kept talking, and the longer we went, the more interesting it got.

I can’t think of a podcast episode I’ve done that was more fun, or deeper. If you haven’t listened to any yet, this is the one to check out.

https://usa.streetsblog.org/2022/02/10/talking-headways-podcast-treating-social-media-like-a-city/

We talked about fun new things like:

  • To what extent is social media like the mass adoption of the automobile?
  • Are company growth metrics the analogue of “vehicle miles traveled” goals/grants by the Department of Transportation?
  • Is there a coming collapse of rotten social networks due to all the spam and bots? Is that like climate change?
  • I learned a lot about hot new topics in urbanism! Like the four-step model.
  • Induced demand in freeways as an analogue to bad faith accusations of “censorship” when social media companies try to crack down on abuse.
  • Path dependency is a hell of a drug.
  • Corruption, the history of asphalt, and ethics in social media / city governance. Building code corruption and “let’s bend the rules for our large advertisers” corruption.

My quick notes on the conversation:

  • First 14 minutes or so: Intro to me, integrity design, theory of integrity. Mostly stuff you might have heard before elsewhere.
  • Minutes 14 – 23: Do you actually need to bake in integrity design from the beginning? How is growing a social app similar to (or not) growing a city from a village? Online vs in-person social behavior.
  • Minute 19: A lot of the work has shaded into organizational design. What I imagine they teach you in MBA school. How to set up an organization with the right incentives.

The growth of a city is in some sense bounded by the number of homes you can build in a period of time, right? You’re not going to see a club of 15 artists turn into a metropolis of 2 million people in the span of two weeks. It’s just physically impossible to do it. And that gives people some human-scale time to figure out the emerging problems and have some time to experiment with solutions as the city grows. And that’s a sort of growth. That’s a story about the growth of a small platform to a big one, but it’s also the same kind of thing of just how lies are spread, how hate speech is spread — any sort of behavior. (Minute 22)
  • Minute 24: Power users of social media. Power users of automobiles. How are they similar and different?
  • Minute 30: The reason spam is a solved* problem on email is that the email providers have a sort of beneficent cartel. (Before Evelyn Douek corrects me — “solved” in the sense that we’re not having a panic about how Gmail is destroying society, or that Outlook’s spam filter isn’t working.)
  • Minute 35: Jeff Wood brings up a new metaphor. “20 is plenty” (as a speed limit for cars). How well does it work for online?
  • Minute 40: My pet metaphor for integrity work — platforms are often a gravity well that incentivizes bad behavior. Doing the wrong thing feels like walking downhill; doing the right thing takes effort.
  • Minute 41-45: Vehicle Miles Traveled, the 4-step model, departments of transportation. Cars and social media and bad metrics. Bad metrics -> bad choices
  • Minute 46 – 51: If at first you don’t do the right thing, then you try to do the right thing, then people will complain. Whether it’s suburban sprawl or not cracking down on spammers, they’ll act all righteous and go yell in public meetings. But in the end they did something wrong (in the social media case) or were receiving an unjust subsidy that you’re finally removing (in both cases).
  • Minute 53 – 58: We’ve been talking design here. But let’s not forget actual, literal corruption.
  • Minutes 58 onwards: Ending

These notes don’t do it justice. It was just such a delight. Grateful to Jeff Wood for a great conversation.


A right-libertarian take on integrity work

Back in 2020, you might remember that I had yet to commit to integrity work as my big next focus of ideas and identity. What was I focused on instead? Political economy. Specifically, I was in the orbit of the lovely Law and Political Economy project. They’re great, check them out!

You might particularly remember that I went on one of my first ever podcast appearances, with my friend Kevin Wilson, Libertarian. We talked about a right-libertarian case for breaking up Facebook. It was fun!

Well, it’s been over a year since then, and I went back on his show. This time, I talked about Integrity Institute and some of my ideas for libertarian-friendly ways to do integrity work.

The title of the episode is: Can you fix social media by targeting behavior instead of speech? I really liked it. It was fun, nuanced, and far-ranging. We went so far over time that Kevin recorded a full bonus spillover episode going over the “how do you make this beautiful future actually happen?” question.

I’m told that for some of my biggest fans (aka my parents) this is their favorite podcast I’ve been on. Kevin does a great job asking questions that both give me time to sketch out a full answer, but also push me out of my comfort zone. Give it a listen.


Some thoughts on human experience design

There’s an organization, All Tech Is Human. They’re pretty cool! At Integrity Institute, we’re figuring out how to be good organizational friends with them.

They asked me, and a bunch of other people, to answer some questions about technology and society. I like my answers. Here they are! And here’s the link to the full report. (Uploaded to the Internet Archive instead of Scribd — thanks Mek!)

In it, I try to keep the focus on people and power, rather than “tech”. Also, content moderation won’t save us, care must be taken with organizational design, and a cameo by the English Civil War. Plus — never forget Aaron Swartz. Let me know what you think!

Tell us about your current role:

I run the Integrity Institute. We are a think tank powered by a community of integrity professionals: tech workers who have on-platform experience mitigating the harms that can occur on or be caused by the social internet.

We formed the Integrity Institute to advance the theory and practice of protecting the social internet. We believe in a social internet that helps individuals, societies, and democracies thrive.

We know the systemic causes of problems on the social internet and how to build platforms that mitigate or avoid them. We confronted issues such as misinformation, hate speech, election interference, and many more from the inside. We have seen successful and unsuccessful attempted solutions.

Our community supports the public, policymakers, academics, journalists, and technology companies themselves as they try to understand best practices and solutions to the challenges posed by social media.

In your opinion, what does a healthy relationship with technology look like?

Technology is a funny old word. We’ve been living with technology for thousands of years. Technology isn’t new; only its manifestation is. What did a healthy relationship to technology look like 50 years ago? 200 years ago?

Writing is a form of technology. Companies are a form of technology. Government is a form of technology. They’re all inventions we created to help humankind. They are marvelously constructive tools that unleash a lot of power, and a lot of potential to alleviate human suffering. Yet, in the wrong hands, they can do correspondingly more damage.

Technology should help individuals, societies, and democracy thrive. But it is a truism to say that technology should serve us, not the other way around. So let’s get a little bit more specific.

A healthy relationship to technology looks like a healthy relationship with powerful people. People, after all, own or control technology. Are they using it for social welfare? Are they using it democratically? Are they using it responsibly? Are they increasing human freedom, or diminishing it?

We will always have technology. Machines and humankind have always coexisted. The real danger is in other humans using those machines for evil (or neglect). Let’s not forget.

What individuals are doing inspiring work toward improving our tech future?

If we lived in a better world, Aaron Swartz would no doubt be on top of my list. Never forget.

If one person’s free speech is another’s harm and content moderation can never be perfect, what will it take to optimize human and algorithmic content moderation for tech users as well as policymakers? What steps are needed for optimal content moderation?

Well, first off, let’s not assume that content moderation is the best tool here. All communications systems, even ones that have no ranking systems or recommendation algorithms, make implicit or explicit choices about affordances. That is, some behavior is rewarded, and some isn’t. Those choices are embedded in code and design. Things like: “How often can you post before it’s considered spam?” or “Can you direct-message people you haven’t met?” or “Is there a reshare button?”

Default social platforms have those settings tuned to maximize engagement and growth — at the expense of quality. Sadly, it turns out, content that has high engagement tends to be, well, bad. The builders of those platforms chose to reward the wrong behavior, and so the wrong behavior runs rampant.

Fixing this can be done through technical tweaks. Things like feature limits, dampers to virality, and so on. But companies must set up internal systems so that engineers who make those changes are rewarded, not punished. If the companies that run platforms changed their internal incentive structures, then many of these problems would go away — before any content moderation would be needed.
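
To make that concrete, here’s a minimal sketch of what tuning those affordances might look like in code. Every knob, name, and threshold below is hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class AffordanceSettings:
    """Hypothetical knobs a platform could tune. Every value here is
    illustrative, not a recommendation or any real platform's config."""
    max_posts_per_hour: int = 10          # posting faster starts to look like spam
    allow_dm_to_strangers: bool = False   # can you message people you haven't met?
    reshare_button_enabled: bool = True   # is there a reshare button at all?
    max_reshare_depth: int = 3            # virality damper: reshare chains stop here

def can_reshare(settings: AffordanceSettings, chain_depth: int) -> bool:
    """Allow a reshare only if the button exists and the chain is still shallow."""
    return settings.reshare_button_enabled and chain_depth < settings.max_reshare_depth

# Tuning for quality over engagement might mean tightening the defaults:
quality_tuned = AffordanceSettings(max_posts_per_hour=5, max_reshare_depth=2)
print(can_reshare(quality_tuned, chain_depth=1))  # True: organic sharing still works
print(can_reshare(quality_tuned, chain_depth=2))  # False: the damper kicks in
```

The point is that these are product decisions, made in code and design, long before any moderator ever sees a post.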

We’ll always need some content moderators. But they should be a last resort, not a first line of defense.

How can we share information and best practices so that smaller platforms and startups can create ethical and human-centered systems at the design stage?

Thanks for this softball question! I think we’re doing that pretty well over at the Integrity Institute. We are a home for integrity professionals at all companies. Our first, biggest, and forever project has been building the community of people like us. In that community, people can swap tips, share best practices, and learn in a safe environment.

Drawing from that community, we brief startups, platforms, and other stakeholders on the emerging knowledge coming out of that community. We’re defining a new field, and it’s quite exciting.

Going more abstract, however, I think the problem is also one of defaults and larger systems. How easy is it for a startup to choose ethics over particularly egregious profits? How long will that startup survive (and how long will the CEO stay in charge)? The same goes for larger companies, of course.

Imagine a world where doing the right thing gets your company out-competed, or you personally fired. Pretty bleak, huh?

We’re trying to fix that, in part by creating an integrity Hippocratic oath. This would be a professional oath that all integrity workers swear by — to put the public interest first, to tell the truth, and more. But that’s only one small piece of the puzzle.

What makes YOU optimistic that we, as a society, can build a tech future aligned with our human values?

In 1649, the people of England put their king on trial, found him guilty of “unlimited and tyrannical power,” and cut off his head. I imagine this came as quite a shock to him. More interestingly, perhaps, I imagine that it came as a shock to the people themselves.

In extraordinary times, people — human beings — can come together to do things that seemed impossible, unthinkable, even sacrilegious just a few days before.

Within living memory in this country, schoolchildren were drilled to dive under desks due to threats of global nuclear Armageddon. Things must have seemed terrible. Yet, those children grew up, bore children, and made a gamble that the future would indeed be worth passing on to them. I think they were right.

We live in interesting times. That’s not necessarily a great thing: boring, stable, peaceful times have a lot going for them. It doesn’t seem like we have much of a choice, though. In interesting times, conditions can change quickly. Old ideas are shown to be hollow and toothless. Old institutions are exposed as rotten. The new world struggles to be born.

I look around and I see immense possibilities all around me. It could go very badly. We could absolutely come out of this worse than we came in. Anyone — any future — can come out on top. So, why not us? Why not team human?


Governing the city of atomic supermen

Social media is a new city, great and terrible. It’s also a dictatorship where all the residents have super powers. People can teleport, fly, churn out convincing android minions, disguise themselves perfectly, and coordinate telepathically.

How do you deal with this? What’s a fair way to govern a place where it’s hard to tell a robot minion from a real person, and people can assume new identities at will?

Thankfully, MIT Tech Review allowed me to ask and answer that question in a fancy publication!

Here’s the full article: How to save our social media by treating it like a city

Thank you to my Berkman fellow friends for helping me edit and polish it. Thank you also to a bunch of other friends (and family) too. It took months, and was a team effort.

Some quick points if you’re in a hurry:

  • Social media is like a new kind of city. There are good parts and bad parts. Right now, it’s a city of atomic supermen — people have tons of powers that they don’t really have in the physical world.
  • Our rules, norms, and intuitions right now assume that you *can’t*, for example, teleport.
  • Eventually, we’re going to figure out the rules and norms that work really well for that kind of world. For now, we’re mostly stuck with the norms we’ve evolved till today.
  • So let’s change the physics of the city to make the residents a little less superpowered.
  • Make it harder to make fake accounts. Make new accounts have to prove themselves with a “driving test” before they have access to the most abuseable features. Put stringent rate limits on behavior that could be used for evil. (A toy sketch of this follows the list.)
  • Notice that none of this involves looking at *content* — if we design our online cities well, with speed bumps and parks and gardens and better physics, we can lessen the need for content moderation. This is the alternative to “censorship”.
  • Much, possibly most, of the integrity problem on platforms is spam of one sort or another. We know how to fight spam.
  • Now to the next point: corporate behavior. You can create an amazing set of rules for your platform. But they amount to less than a hill of beans if you don’t enforce them. And enforcing unevenly is arguably worse than not enforcing at all.
  • If you try to fix your system, perhaps by fixing a bug that allowed spammy behavior — there will be entities that lose. The ones that were benefitting from the loophole. Don’t let them stop you by loudly complaining — otherwise you can never fix things!
  • And now to the biggest point: listen to integrity workers. My coworkers and I had actual jobs where we tried to fix the problem. We are steeped in this. We know tons of possible solutions. We study details of how to fix it. We don’t always win internal battles, of course.
  • But we exist. Talk to us. Other integrity workers have their own frameworks that are equally or more insightful. They’re wonderful people. Help us — help them — do their jobs and win the arguments inside companies.
  • PS — Join the Integrity Institute.
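
As promised above, here’s a toy sketch of the “driving test” and rate-limit ideas. The feature names, risk scores, and thresholds are all invented for illustration; a real system would be far more nuanced:

```python
from dataclasses import dataclass

# Features ranked roughly by how abuseable they are.
# The names and scores are made up for this sketch.
FEATURE_RISK = {"post": 1, "comment": 1, "mass_dm": 3, "bulk_invite": 3}

@dataclass
class Account:
    age_days: int
    passed_driving_test: bool = False  # e.g., completed some proving-ground period
    actions_this_hour: int = 0

def may_use(account: Account, feature: str, hourly_cap: int = 20) -> bool:
    """Gate the most abuseable features behind the "driving test",
    and rate-limit everyone. All thresholds here are hypothetical."""
    risk = FEATURE_RISK.get(feature, 1)
    if risk >= 3 and not (account.passed_driving_test and account.age_days >= 7):
        return False  # new or unproven accounts can't touch the dangerous stuff
    return account.actions_this_hour < hourly_cap  # stringent rate limit for all

fresh = Account(age_days=0)
print(may_use(fresh, "post"))     # True: ordinary behavior works right away
print(may_use(fresh, "mass_dm"))  # False: prove yourself first
```

Notice, again, that none of this looks at content; it just changes the physics of what a new account can do.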


On the Tech Policy Press podcast

I forgot to mention this a while ago: Jeff and I were on a second fancy podcast when we launched. This time — Tech Policy Press with Justin Hendrix.

It was fun! Justin really understands these issues and asks good questions.

Plus, as a bonus, Aviv was brought on for part two. Worlds collide.


I’ll be on a panel at NYU on Dec 15th

Update: It went great! Here’s the recap link, where you can watch it and get a summary.

Here’s what the recap said about my part:

As a former Facebook employee, Sahar Massachi stressed how the organizational dynamics inside social media companies influence their products. For example, to increase profit, Facebook optimizes for metrics like growth and engagement, which often tend to fuel harmful content. Although platforms have integrity workers to help mitigate these harms, the focus on engagement often undercuts their efforts. Only by changing the incentives, he said, can we change how social media companies approach harm on their platforms. Massachi co-founded the Integrity Institute to build a community of integrity workers to support the public, policymakers, academics, journalists, and social media companies themselves as they try to solve the problems posed by social media.

So, as part of my work with the Integrity Institute, I get to be on a fancy panel.

Wanna come?

Here are the details, copied from the website:

Reducing Harm on Social Media: Research & Design Ideas

Wednesday, December 15, 2021  |  3:00 – 4:15pm ET

When social media platforms first launched nearly two decades ago, they were seen as a force for good – a way to connect with family and friends, learn and explore new ideas, and engage with social and political movements. Yet, as the Facebook Papers and other research have documented, these same platforms have become vectors of misinformation, hate speech, and polarization.

With attention around social media’s impact on society at an all-time high, this event gathers researchers and practitioners from across the academic, policy, and tech communities to discuss various approaches and interventions to make social media a safer and more civil place.

Panelists

  • Jane Lytvynenko, Senior Research Fellow, Technology and Social Change Project, Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy (moderator)
  • Niousha Roshani, Deputy Director, Content Policy & Society Lab, Stanford University’s Program on Democracy and the Internet
  • Rebekah Tromble, Director, Institute for Data, Democracy & Politics, George Washington University
  • Joshua A. Tucker, Co-Director, New York University’s Center for Social Media and Politics
  • Sahar Massachi, Co-Founder and Executive Director, Integrity Institute

I’m on the Lawfare Podcast

As part of the Integrity Institute rollout, Jeff and I were on the Lawfare podcast with Evelyn Douek and Quinta Jurecic. It actually turned out really well!

The editing was polished and lightweight enough that you can’t really tell that it was edited, but also thorough enough that we come across as crisper than we are in real life.

And we talked for an hour! I think it’s a good overview of what we’re thinking right now and how we see the world. Check it out, I’m proud of it.

https://www.lawfareblog.com/lawfare-podcast-what-integrity-social-media


A meta-proposal for Twitter’s bluesky project

My first-ever submission to SSRN was a success! Recently, I’ve gotten an email every day telling me that A meta-proposal for Twitter’s bluesky project is on the top-ten downloads list for a ton of journals.

Officially, I’m a co-author in the top 10 downloads for a bunch of SSRN topics.

Namely: CompSciRN Subject Matter eJournals, CompSciRN: Other Web Technology (Topic), Computer Science Research Network, InfoSciRN Subject Matter eJournals, InfoSciRN: Information Architecture (Topic), InfoSciRN: Web Design & Development (Sub-Topic), Information & Library Science Research Network, Libraries & Information Technology eJournal and Web Technology eJournal.

This is a little less impressive than it sounds. But I’m getting a little ahead of myself. Here’s the story:

How did this all happen?

As a Berkman fellow, the main thing one seems to do is go to recurring meetings for a range of working groups. Jad Esber, one of my esteemed colleagues, got the idea and invitation to give a proposal to Twitter on their Bluesky project. He rounded up a bunch of us, and together we spent 5-6 meetings going over parts of what he called a “meta-proposal” — our guide on how to review the other different proposals coming in.

Jad is a wonderful person, and I learned some project management tips just from being part of this process. Getting a fair-sized collection of people to agree on a document, quickly, is difficult! As far as I remember, he did it like so:

  • The first meeting is to scope out different ideas people have about what they want to say.
  • Jad then writes excellent notes and combines ideas into a manageable number of topics.
  • Each meeting after this includes just the subset of the original crew who feel like they have something to contribute.
  • Jad, who has taken good notes throughout these meetings, polishes them up a bit, then turns it into a paper.

It was easy! It was so nice. And I got to work with people I really enjoy, including but not limited to Crystal Lee and Tom Zick.

What the paper argues

The paper contains a bunch of ideas and warnings for a hypothetical new, decentralized social network. There are three big pillars: discovery & curation, moderation, and business model. It’s quite short, so I recommend you just read all of it — it is barely 5 pages long.

I do care quite a bit about integrity issues (people often call them issues of “moderation”, which is wrong! More on this in a different post later). So I wanted to highlight this a bit.

Sidenote — what is integrity? Shorthand it to “hate speech, harassment, misinformation and other harms”, or “the problems of social media that come from users doing bad things to other users”.

Regarding curation: The most subtle proposal in here is around identifying the “idea neighborhoods” that someone might be hanging out in. (The paper calls them echo chambers.) Why? Because “neighborhoods” are an important building block in identifying and fighting targeted harassment. If you know which neighborhood someone normally spends time in, you can be appropriately skeptical of them in times of stress. You can see a basic version of this in action on Reddit: if a certain post in /r/TwoXChromosomes gets a spike in harassing comments, it’s pretty easy to block people who recently posted or commented in /r/mensrights.

(This is also fleshed out a bit in the moderation section.)
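
Here’s a back-of-the-envelope sketch of that heuristic. The subreddit names come from the example above; the function, the history format, and the 50%/30-day thresholds are invented for illustration:

```python
from datetime import datetime, timedelta

def likely_brigader(history: list[tuple[str, datetime]],
                    hostile_neighborhoods: set[str],
                    window_days: int = 30) -> bool:
    """Flag a commenter whose recent activity mostly happens in a
    "neighborhood" that a harassment spike is coming from.
    `history` is a list of (community, timestamp) pairs."""
    cutoff = datetime.now() - timedelta(days=window_days)
    recent = [community for community, ts in history if ts >= cutoff]
    if not recent:
        return False
    hostile = sum(1 for community in recent if community in hostile_neighborhoods)
    return hostile / len(recent) > 0.5  # a majority of their recent activity

# During a harassment spike, a moderator might treat one subreddit as hostile:
now = datetime.now()
history = [("/r/mensrights", now), ("/r/mensrights", now), ("/r/bicycling", now)]
print(likely_brigader(history, {"/r/mensrights"}))  # True: 2 of 3 recent posts
```

The appeal of this approach is that it keys off behavior (where someone hangs out), not the content of what they said.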

On moderation: I’m tempted to block quote the whole thing. It’s all so clear, important, and succinct. And the key ideas to me are in the “friction” section, which is only 3 paragraphs. Summarizing it would take just as long as quoting. Okay, I can’t help myself. Here’s the section on friction (and a little preamble).

The role of moderation isn’t just restricting bad words or racist content. In designing the protocol and reviewing proposals, the conversation around moderation should center around restricting harassment & harm.

In considering the topic, the conversation should be framed under macro norms which are universal to the protocol; meso norms that are shared across certain clients of the protocol; and micro norms that are specific to a specific client.

Friction

It is well documented that our current systems that rely on the virality of user-generated content end up amplifying harmful content – and there is only so much that moderation efforts we tack on can do to mitigate this. In reviewing BlueSky proposals, we must engage with the question of virality and amplification and whether the protocol design avoids this.

Among the beauties and challenges of free flowing online space is the lack of physical boundaries. Traversing “geographies” by jumping from one conversation to another presents no restrictions. However, from a bad actor perspective, this presents an opportunity to scale harassment efforts and disrupt many events at once. Bluesky is an opportunity to “bring in more physics”, designing in friction on the protocol-level as a proactive way to avoid downstream moderation issues. Without getting into the complex issue of identity, increasing the cost of creating a new account, including introducing a monetary cost to start a new account, might be effective.

Enabling users to see which “neighborhood” other users are coming from could help users identify a provocateur and take action themselves. In addition to helping avoid brigading, ways of visibly ‘tagging’ users could help identify “sock-puppet accounts” and make bots easily identifiable. However, visibly tagging users could present the risk of short-circuiting judgments, and so the system should also present opportunities to identify any cross-cutting cleavages – for example by highlighting shared interests between users.

I’d say I couldn’t put it better myself, but, uh, there’s a reason for that. (That is, I feel a lot of ownership of it).