
Funding the mayors of reddit

If I were an eccentric billionaire, I’d fund “the reddit mayors” of the world.

Not just on reddit, of course. I’m talking about the people doing journalism and high-quality work in the places where people actually are. I’m thinking about the mods of /r/askreddit, or the people making incredibly good longform youtube videos, or the people who consistently write really long, thoughtful comments on whichever platform. I call them the mayors (and reporters) of the internet.

When there’s a world crisis, or a big piece of news, or even just a random day’s viral video, look for the helpers. The people who set up megathreads, or crowdsource amazingly detailed annotated maps, or triage a beautiful wikipedia article. They’re doing valuable, unpaid, and important work in journalism and civil society. They’re the stewards of truly giant communities. Often, very real communities.

It’s beautiful that they are doing it for free (and many have been for well over a decade). But that won’t last forever.

Let’s look at subreddit moderators as an example. They do this work as a labor of love, and that’s amazing. But what happens when they burn out? What happens when they get old and “retire”? Who will replace them?

I worry that the people who will replace the founding “greatest generation online” will be motivated not by the aughts-era patriotism of The Internet, but by ideological and financial motives. Not because the next generation will be composed of worse people. It won’t! But because the value of capturing a subreddit, or of being a star wikipedia editor, is so high that it’ll be very attractive for outside organizations to subsidize their own people to do it. And the motivations of those outside organizations won’t be pure.

As /u/qgyh2 and other mods of /r/worldnews retire, who will they choose as their successors? Presumably the people they’ve found to be helpful, civic-minded, amazingly productive, and a pleasure to work with. An intelligence agency, for example, has the resources and motivation to pay a person (or team of people) to be that helpful star recruit. Normal people wouldn’t be able to compete. And once that agent is in, they have access to a lot of power they can abuse.

Imagine what an intelligence agency would do with control of a chunk of the default news ecosystem of tens of millions of people. Iran pushing articles in /r/worldnews that embarrass Israel or the US. India getting their mods in to push anti-Muslim or anti-China articles. Heck, imagine what a company would do. It doesn’t need to get outlandish — imagine Tesla secretly placing mods in control of /r/technology, or Sony eventually gaining control of top wikipedia editors.

It doesn’t take a lot of money to do this sabotage. Just some labor costs and patience.

That’s why we need an eccentric billionaire to stop this from happening. All they need to do is start paying a basic income to the mayors of reddit (and Wikipedia, and perhaps other platforms). Suddenly, we’re no longer depending on the goodwill of volunteers as our thin blue line. Suddenly, we have inoculated moderators against many of the temptations of corruption. And if that funding is stable and committed, potential future moderators can devote more time to doing good work, because they know there’s a payoff at the end.

There are still pockets of the good-spirited, volunteer internet left. They underpin so much of our society. But remember Heartbleed? It turned out that OpenSSL, a key component of a secure internet used by billions of people and untold software projects, was actually maintained by just two people. That system “worked” — until it didn’t. To disastrous effect. And now open source funding is a little bit better.

I don’t think we will have a dramatic wake-up call for the mayors of the internet like we did with Heartbleed. Instead, things will get worse and worse, gradually and subtly. Until one day we look around and see that the last pockets of the civic-minded web have been corrupted away.


Some thoughts on human experience design

There’s an organization called All Tech Is Human. They’re pretty cool! At the Integrity Institute, we’re figuring out how to be good organizational friends with them.

They asked me, and a bunch of other people, to answer some questions about technology and society. I like my answers. Here they are! And here’s the link to the full report. (Uploaded to the Internet Archive instead of Scribd — thanks Mek!)

In it, I try to keep the focus on people and power, rather than “tech”. Also: content moderation won’t save us, care must be taken with organizational design, and there’s a cameo by the English Civil War. Plus — never forget Aaron Swartz. Let me know what you think!

Tell us about your current role:

I run the Integrity Institute. We are a think tank powered by a community of integrity professionals: tech workers who have on-platform experience mitigating the harms that can occur on or be caused by the social internet.

We formed the Integrity Institute to advance the theory and practice of protecting the social internet. We believe in a social internet that helps individuals, societies, and democracies thrive.

We know the systemic causes of problems on the social internet and how to build platforms that mitigate or avoid them. We confronted issues such as misinformation, hate speech, election interference, and many more from the inside. We have seen successful and unsuccessful attempted solutions.

Our community supports the public, policymakers, academics, journalists, and technology companies themselves as they try to understand best practices and solutions to the challenges posed by social media.

In your opinion, what does a healthy relationship with technology look like?

Technology is a funny old word. We’ve been living with technology for thousands of years. Technology isn’t new; only its manifestation is. What did a healthy relationship to technology look like 50 years ago? 200 years ago?

Writing is a form of technology. Companies are a form of technology. Government is a form of technology. They’re all inventions we created to help humankind. They are marvelously constructive tools that unleash a lot of power, and a lot of potential to alleviate human suffering. Yet, in the wrong hands, they can do correspondingly more damage.

Technology should help individuals, societies, and democracy thrive. But it is a truism to say that technology should serve us, not the other way around. So let’s get a little bit more specific.

A healthy relationship to technology looks like a healthy relationship with powerful people. People, after all, own or control technology. Are they using it for social welfare? Are they using it democratically? Are they using it responsibly? Are they increasing human freedom, or diminishing it?

We will always have technology. Machines and humankind have always coexisted. The real danger is in other humans using those machines for evil (or neglect). Let’s not forget.

What individuals are doing inspiring work toward improving our tech future?

If we lived in a better world, Aaron Swartz would no doubt be on top of my list. Never forget.

If one person’s free speech is another’s harm and content moderation can never be perfect, what will it take to optimize human and algorithmic content moderation for tech users as well as policymakers? What steps are needed for optimal content moderation?

Well, first off, let’s not assume that content moderation is the best tool here. All communications systems, even ones that have no ranking systems or recommendation algorithms, make implicit or explicit choices about affordances. That is, some behavior is rewarded, and some isn’t. Those choices are embedded in code and design. Things like: “How often can you post before it’s considered spam?” or “Can you direct-message people you haven’t met?” or “Is there a reshare button?”

Default social platforms have those settings tuned to maximize engagement and growth — at the expense of quality. Sadly, it turns out, content that has high engagement tends to be, well, bad. The builders of those platforms chose to reward the wrong behavior, and so the wrong behavior runs rampant.

Fixing this can be done through technical tweaks: things like feature limits, dampers to virality, and so on. But companies must set up internal systems so that the engineers who make those changes are rewarded, not punished. If the companies that run platforms changed their internal incentive structures, then many of these problems would go away — before any content moderation would be needed.
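To make those tweaks concrete, here’s a minimal sketch of what affordance settings and a damper to virality might look like in code. It’s purely illustrative: the names and thresholds (`MAX_POSTS_PER_HOUR`, `RESHARE_CHAIN_LIMIT`, and so on) are hypothetical assumptions, not any real platform’s implementation.

```python
from dataclasses import dataclass

# Hypothetical affordance settings, written down as explicit product
# choices rather than buried engagement-maximizing defaults.
MAX_POSTS_PER_HOUR = 5          # "How often can you post before it's spam?"
RESHARE_CHAIN_LIMIT = 2         # reshare depth at which friction kicks in
ALLOW_DMS_TO_STRANGERS = False  # "Can you DM people you haven't met?"

@dataclass
class Post:
    author_id: int
    reshare_chain_length: int  # how many reshares deep this post already is

def reshare_affordance(post: Post) -> str:
    """Pick the reshare affordance for a post: a simple damper to virality.

    Nothing is banned or moderated; resharing just gets more deliberate
    as a chain gets longer.
    """
    if post.reshare_chain_length < RESHARE_CHAIN_LIMIT:
        return "one_tap_reshare"       # frictionless amplification
    return "quote_with_comment_only"   # must add your own words to pass it on

def can_post(posts_in_last_hour: int) -> bool:
    """Rate-limit posting: a basic anti-spam affordance."""
    return posts_in_last_hour < MAX_POSTS_PER_HOUR
```

Notice that nothing in the sketch touches content: the same posts exist and the same people can speak, but the design rewards different behavior. That’s the whole point.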

We’ll always need some content moderators. But they should be a last resort, not a first line of defense.

How can we share information and best practices so that smaller platforms and startups can create ethical and human-centered systems at the design stage?

Thanks for this softball question! I think we’re doing that pretty well over at the Integrity Institute. We are a home for integrity professionals at all companies. Our first, biggest, and forever project has been building the community of people like us. In that community, people can swap tips, share best practices, and learn in a safe environment.

Drawing from that community, we brief startups, platforms, and other stakeholders on the knowledge emerging from it. We’re defining a new field, and it’s quite exciting.

At a more abstract level, though, I think the problem is also one of defaults and larger systems. How easy is it for a startup to choose ethics over particularly egregious profits? How long will that startup survive (and how long will the CEO stay in charge)? The same goes for larger companies, of course.

Imagine a world where doing the right thing gets your company out-competed, or you personally fired. Pretty bleak, huh?

We’re trying to fix that, in part by establishing an integrity Hippocratic oath. This would be a professional oath that all integrity workers swear — to put the public interest first, to tell the truth, and more. But that’s only one small piece of the puzzle.

What makes YOU optimistic that we, as a society, can build a tech future aligned with our human values?

In 1649, the people of England put their king on trial, found him guilty of “unlimited and tyrannical power,” and cut off his head. I imagine this came as quite a shock to him. More interestingly, perhaps, I imagine that it came as a shock to the people themselves.

In extraordinary times, people — human beings — can come together to do things that seemed impossible, unthinkable, even sacrilegious just a few days before.

Within living memory in this country, schoolchildren were drilled to dive under desks due to threats of global nuclear Armageddon. Things must have seemed terrible. Yet, those children grew up, bore children, and made a gamble that the future would indeed be worth passing on to them. I think they were right.

We live in interesting times. That’s not necessarily a great thing: boring, stable, peaceful times have a lot going for them. It doesn’t seem like we have much of a choice, though. In interesting times, conditions can change quickly. Old ideas are shown to be hollow and toothless. Old institutions are exposed as rotten. The new world struggles to be born.

I look around and I see immense possibilities all around me. It could go very badly. We could absolutely come out of this worse than we came in. Anyone — any future — can come out on top. So, why not us? Why not team human?


Social media that helps your friendships blossom

On Facebook, a few days ago, I noticed a weird trend: all of a sudden, I’d been getting a new type of notification. I posted about it, and got a ton of replies:

For years, inside of facebook, I argued that the app could help deepen friendships instead of just cataloging them. What about a “people you used to be close to, but haven’t messaged [or commented on their posts] in a while” feature? How about proactively helping heal cross-cutting cleavages by reminding you that you’re friends with people of identity X?

I have no inside knowledge here, but something weird has been happening on my facebook lately. I keep getting notifications that “person X has posted”, where person X keeps changing. Is someone on the inside finally trying to make it happen?

But this new feature has problems. It’s a good idea, but I’m not sure it’s implemented well. Why are these notifications and not feed units? If you change the behavior of the app, you’d want the initial interactions with the new feature to be of high quality, yet they typically link to low-quality posts. And rather than an invitation to reconnect with a person, they are an invitation to view that person’s posts, with no explanation.

Typically, when fb notifications start pushing something that isn’t directly tied to me (“person X commented on a post,” “Y people liked a post”), I click the ignore button a few times. Then the system learns, and they stop. It’s been over a week, and these notification units keep coming. Either I’m in a half-baked A/B test, or someone really, really is pushing this new feature. If I’m right, I salute the impulse. But the implementation is not ready for prime time.

Is it just me? Am I the only one seeing these? Or are y’all getting this too?

FB post here

Kushaan even tweeted it out.

The whole episode got me thinking. Can I break out of my normal habits and use, say, Facebook, in ways that make me happier? I already cut out all pages and groups, but maybe I could do more.

So I spent ten minutes looking through stories on FB Messenger / IG, and replying enthusiastically to slices of life from old friends. It was … invigorating. It’s easy for me to type up thoughts. But maybe the real key to internet happiness is just cooing over a cute baby.

In that vein, here’s a picture of Sarah and me dressing up for New Year’s, right before we started an epic battle in Gloomhaven. No big idea, just a little glimpse of a life.