My on-camera debut

A few months ago, a camera team and a few reporters came to my home. They asked me a lot of questions! It took all day. I started out in a sweatshirt. After a few hours, I started sweating, but I had to keep the sweatshirt on for visual continuity. It was draining.

It was also scary. Was I saying the right things? Would I say something I’d regret? How do I tell the truth as I see it without accidentally being hyperbolic, or inartful, or something else?

There was more than one reason I was sweating bullets throughout the whole thing.

I did it, though, because the reporting team was filming a documentary about social media, and they specifically wanted to talk to me. I felt like the national conversation was pretty simplistic, on the whole, and perhaps I could do my part in making it more sophisticated.

The show, Fault Lines, also airs on Al Jazeera, which I don’t love. (When you watch the show on YouTube, there’s a little disclaimer: “Al Jazeera is funded in whole or in part by the Qatari government.”)

My on-camera time ended up being about a minute long, making pretty standard points. Something like: “Virality is dangerous. You could change social media products to optimize for more than just engagement and growth.” I hope the points I made during the other hours of footage helped nudge the overall project in a better direction.

Not sure how to feel, now that it’s over. I guess if nothing else, it was training for next time. Hopefully then it’ll be less scary.

A right-libertarian take on integrity work

Back in 2020, you might remember that I had yet to commit to integrity work as my next big focus of ideas and identity. What was I focused on instead? Political economy. Specifically, I was in the orbit of the lovely Law and Political Economy project. They’re great; check them out!

You might particularly remember that I went on one of my first-ever podcast appearances, with my friend Kevin Wilson, Libertarian. We talked about a right-libertarian case for breaking up Facebook. It was fun!

Well, it’s been over a year since then, and I went back on his show. This time, I talked about the Integrity Institute and some of my ideas for libertarian-friendly ways to do integrity work.

The title of the episode is: “Can you fix social media by targeting behavior instead of speech?” I really liked it. It was fun, nuanced, and far-ranging. We went so far over time that Kevin recorded a full bonus spillover episode on the “how do you make this beautiful future actually happen” question.

I’m told that for some of my biggest fans (aka my parents), this is their favorite podcast I’ve been on. Kevin does a great job asking questions that both give me time to sketch out a full answer and push me out of my comfort zone. Give it a listen.

Some thoughts on human experience design

There’s an organization, All Tech Is Human. They’re pretty cool! At Integrity Institute, we’re figuring out how to be good organizational friends with them.

They asked me, and a bunch of other people, to answer some questions about technology and society. I like my answers. Here they are! And here’s the link to the full report. (Uploaded to the Internet Archive instead of Scribd — thanks Mek!)

In it, I try to keep the focus on people and power, rather than “tech”. Also: content moderation won’t save us, care must be taken with organizational design, and the English Civil War makes a cameo. Plus: never forget Aaron Swartz. Let me know what you think!

Tell us about your current role:

I run the Integrity Institute. We are a think tank powered by a community of integrity professionals: tech workers who have on-platform experience mitigating the harms that can occur on or be caused by the social internet.

We formed the Integrity Institute to advance the theory and practice of protecting the social internet. We believe in a social internet that helps individuals, societies, and democracies thrive.

We know the systemic causes of problems on the social internet and how to build platforms that mitigate or avoid them. We confronted issues such as misinformation, hate speech, election interference, and many more from the inside. We have seen successful and unsuccessful attempted solutions.

Our community supports the public, policymakers, academics, journalists, and technology companies themselves as they try to understand best practices and solutions to the challenges posed by social media.

In your opinion, what does a healthy relationship with technology look like?

Technology is a funny old word. We’ve been living with technology for thousands of years. Technology isn’t new; only its manifestations are. What did a healthy relationship with technology look like 50 years ago? 200 years ago?

Writing is a form of technology. Companies are a form of technology. Government is a form of technology. They’re all inventions we created to help humankind. They are marvelously constructive tools that unleash a lot of power, and a lot of potential to alleviate human suffering. Yet, in the wrong hands, they can do correspondingly more damage.

Technology should help individuals, societies, and democracy thrive. But it is a truism to say that technology should serve us, not the other way around. So let’s get a little bit more specific.

A healthy relationship to technology looks like a healthy relationship with powerful people. People, after all, own or control technology. Are they using it for social welfare? Are they using it democratically? Are they using it responsibly? Are they increasing human freedom, or diminishing it?

We will always have technology. Machines and humankind have always coexisted. The real danger is in other humans using those machines for evil (or through neglect). Let’s not forget.

What individuals are doing inspiring work toward improving our tech future?

If we lived in a better world, Aaron Swartz would no doubt be on top of my list. Never forget.

If one person’s free speech is another’s harm and content moderation can never be perfect, what will it take to optimize human and algorithmic content moderation for tech users as well as policymakers? What steps are needed for optimal content moderation?

Well, first off, let’s not assume that content moderation is the best tool here. All communications systems, even ones that have no ranking systems or recommendation algorithms, make implicit or explicit choices about affordances. That is, some behavior is rewarded, and some isn’t. Those choices are embedded in code and design. Things like: “How often can you post before it’s considered spam?” or “Can you direct-message people you haven’t met?” or “Is there a reshare button?”
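To make that concrete, here’s a minimal sketch in Python of what those choices look like when they’re written down explicitly as configuration. All the names and values are invented for illustration; no real platform’s settings are implied.

```python
from dataclasses import dataclass

@dataclass
class AffordanceConfig:
    """Hypothetical platform settings. Each field is a product choice
    that rewards some behavior and discourages other behavior."""
    max_posts_per_hour: int = 5              # past this, posting is treated as spam
    allow_dms_from_strangers: bool = False   # can you message people you haven't met?
    reshare_button_enabled: bool = True      # is one-click resharing possible at all?

# Every platform ships some version of these choices, whether or not
# anyone wrote them down this explicitly.
config = AffordanceConfig()
print(config)
```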

Default social platforms have those settings tuned to maximize engagement and growth — at the expense of quality. Sadly, it turns out, content that has high engagement tends to be, well, bad. The builders of those platforms chose to reward the wrong behavior, and so the wrong behavior runs rampant.

Fixing this can be done through technical tweaks: feature limits, dampers to virality, and so on. But companies must set up internal systems so that engineers who make those changes are rewarded, not punished. If the companies that run platforms changed their internal incentive structures, then many of these problems would go away before any content moderation was ever needed.
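As one toy example of such a tweak, a damper to virality could shrink a post’s distribution score with every reshare hop, so chains fade out instead of snowballing. This is a sketch of the general idea, not any platform’s actual code; the function name and the 0.7 factor are made up.

```python
def damped_score(base_score: float, reshare_depth: int, damping: float = 0.7) -> float:
    """Toy virality damper: each reshare hop multiplies the post's
    distribution score by a factor below one, so deep reshare chains
    die out instead of compounding."""
    return base_score * (damping ** reshare_depth)

# An original post keeps its full score; a fifth-hop reshare of the
# same post is weighted at about 17% of it.
print(damped_score(100.0, 0))  # 100.0
print(damped_score(100.0, 5))  # ~16.8
```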

We’ll always need some content moderators. But they should be a last resort, not a first line of defense.

How can we share information and best practices so that smaller platforms and startups can create ethical and human-centered systems at the design stage?

Thanks for this softball question! I think we’re doing that pretty well over at the Integrity Institute. We are a home for integrity professionals at all companies. Our first, biggest, and forever project has been building the community of people like us. In that community, people can swap tips, share best practices, and learn in a safe environment.

Drawing on that community, we brief startups, platforms, and other stakeholders on the knowledge emerging from it. We’re defining a new field, and it’s quite exciting.

Going more abstract, however, I think the problem is also one of defaults and larger systems. How easy is it for a startup to choose ethics over particularly egregious profits? How long will that startup survive (and how long will the CEO stay in charge)? The same goes for larger companies, of course.

Imagine a world where doing the right thing gets your company out-competed, or gets you personally fired. Pretty bleak, huh?

We’re trying to fix that, in part by establishing an integrity Hippocratic oath. This would be a professional oath that all integrity workers swear: to put the public interest first, to tell the truth, and more. But that’s only one small piece of the puzzle.

What makes YOU optimistic that we, as a society, can build a tech future aligned with our human values?

In 1649, the people of England put their king on trial, found him guilty of “unlimited and tyrannical power,” and cut off his head. I imagine this came as quite a shock to him. More interestingly, perhaps, I imagine that it came as a shock to the people themselves.

In extraordinary times, people — human beings — can come together to do things that seemed impossible, unthinkable, even sacrilegious just a few days before.

Within living memory in this country, schoolchildren were drilled to dive under desks due to threats of global nuclear Armageddon. Things must have seemed terrible. Yet, those children grew up, bore children, and made a gamble that the future would indeed be worth passing on to them. I think they were right.

We live in interesting times. That’s not necessarily a great thing: boring, stable, peaceful times have a lot going for them. It doesn’t seem like we have much of a choice, though. In interesting times, conditions can change quickly. Old ideas are shown to be hollow and toothless. Old institutions are exposed as rotten. The new world struggles to be born.

I look around and I see immense possibilities all around me. It could go very badly. We could absolutely come out of this worse than we came in. Anyone — any future — can come out on top. So, why not us? Why not team human?

On the Tech Policy Press podcast

I forgot to mention this a while back: Jeff and I were on a second fancy podcast when we launched. This time it was Tech Policy Press, with Justin Hendrix.

It was fun! Justin really understands these issues and asks good questions.

Plus, as a bonus, Aviv was brought on for part two. Worlds collide.

I’ll be on a panel at NYU on Dec 15th

Update: It went great! Here’s the recap link, where you can watch it and read a summary.

Here’s what the recap said about my part:

As a former Facebook employee, Sahar Massachi stressed how the organizational dynamics inside social media companies influence their products. For example, to increase profit, Facebook optimizes for metrics like growth and engagement, which often tend to fuel harmful content. Although platforms have integrity workers to help mitigate these harms, the focus on engagement often undercuts their efforts. Only by changing the incentives, he said, can we change how social media companies approach harm on their platforms. Massachi co-founded the Integrity Institute to build a community of integrity workers to support the public, policymakers, academics, journalists, and social media companies themselves as they try to solve the problems posed by social media.


So, as part of my work with the Integrity Institute, I get to be on a fancy panel.

Wanna come?

Here are the details, copied from the website:

Reducing Harm on Social Media: Research & Design Ideas

Wednesday, December 15, 2021  |  3:00 – 4:15pm ET

When social media platforms first launched nearly two decades ago, they were seen as a force for good – a way to connect with family and friends, learn and explore new ideas, and engage with social and political movements. Yet, as the Facebook Papers and other research have documented, these same platforms have become vectors of misinformation, hate speech, and polarization.

With attention around social media’s impact on society at an all-time high, this event gathers researchers and practitioners from across the academic, policy, and tech communities to discuss various approaches and interventions to make social media a safer and more civil place.

Panelists

  • Jane Lytvynenko, Senior Research Fellow, Technology and Social Change Project, Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy (moderator)
  • Niousha Roshani, Deputy Director, Content Policy & Society Lab, Stanford University’s Program on Democracy and the Internet
  • Rebekah Tromble, Director, Institute for Data, Democracy & Politics, George Washington University
  • Joshua A. Tucker, Co-Director, New York University’s Center for Social Media and Politics
  • Sahar Massachi, Co-Founder and Executive Director, Integrity Institute

I’m on the Lawfare Podcast

As part of the Integrity Institute rollout, Jeff and I were on the Lawfare podcast with Evelyn Douek and Quinta Jurecic. It actually turned out really well!

The editing was polished and lightweight enough that you can’t really tell that it was edited, but also thorough enough that we come across as crisper than we are in real life.

And we talked for an hour! I think it’s a good overview of what we’re thinking right now and how we see the world. Check it out; I’m proud of it.

https://www.lawfareblog.com/lawfare-podcast-what-integrity-social-media