There’s an organization called All Tech Is Human. They’re pretty cool! At the Integrity Institute, we’re figuring out how to be good organizational friends with them.
They asked me, and a bunch of other people, to answer some questions about technology and society. I like my answers. Here they are! And here’s the link to the full report. (Uploaded to the Internet Archive instead of Scribd — thanks Mek!)
In it, I try to keep the focus on people and power, rather than “tech”. Also: content moderation won’t save us, organizational design needs care, and the English Civil War makes a cameo. Plus: never forget Aaron Swartz. Let me know what you think!
Tell us about your current role:
I run the Integrity Institute. We are a think tank powered by a community of integrity professionals: tech workers who have on-platform experience mitigating the harms that can occur on or be caused by the social internet.
We formed the Integrity Institute to advance the theory and practice of protecting the social internet. We believe in a social internet that helps individuals, societies, and democracies thrive.
We know the systemic causes of problems on the social internet and how to build platforms that mitigate or avoid them. From the inside, we confronted misinformation, hate speech, election interference, and many other issues. We have seen attempted solutions succeed and fail.
Our community supports the public, policymakers, academics, journalists, and technology companies themselves as they try to understand best practices and solutions to the challenges posed by social media.
In your opinion, what does a healthy relationship with technology look like?
Technology is a funny old word. We’ve been living with technology for thousands of years. Technology isn’t new; only its manifestation is. What did a healthy relationship with technology look like 50 years ago? 200 years ago?
Writing is a form of technology. Companies are a form of technology. Government is a form of technology. They’re all inventions we created to help humankind. They are marvelously constructive tools that unleash a lot of power, and a lot of potential to alleviate human suffering. Yet, in the wrong hands, they can do correspondingly great damage.
Technology should help individuals, societies, and democracy thrive. But it is a truism to say that technology should serve us, not the other way around. So let’s get a little bit more specific.
A healthy relationship with technology looks like a healthy relationship with powerful people. People, after all, own or control technology. Are they using it for social welfare? Are they using it democratically? Are they using it responsibly? Are they increasing human freedom, or diminishing it?
We will always have technology. Machines and humankind have always coexisted. The real danger lies in other humans using those machines for evil, or neglecting the harm they cause. Let’s not forget.
What individuals are doing inspiring work toward improving our tech future?
If we lived in a better world, Aaron Swartz would no doubt be at the top of my list. Never forget.
If one person’s free speech is another’s harm and content moderation can never be perfect, what will it take to optimize human and algorithmic content moderation for tech users as well as policymakers? What steps are needed for optimal content moderation?
Well, first off, let’s not assume that content moderation is the best tool here. All communications systems, even ones that have no ranking systems or recommendation algorithms, make implicit or explicit choices about affordances. That is, some behavior is rewarded, and some isn’t. Those choices are embedded in code and design. Things like: “How often can you post before it’s considered spam?” or “Can you direct-message people you haven’t met?” or “Is there a reshare button?”
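To make that concrete, here’s a minimal sketch in Python of what those affordance choices look like once you write them down as explicit settings. All the names and default values here are invented for illustration; this is not any real platform’s code.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical illustration: the kinds of knobs that quietly shape
# behavior on a social platform. All names and values are invented.
@dataclass
class Affordances:
    max_posts_per_hour: int = 20          # past this, posting looks like spam
    allow_dms_to_strangers: bool = True   # can you message people you haven't met?
    reshare_button_enabled: bool = True   # is one-click amplification available?
    max_reshare_depth: Optional[int] = None  # None means unlimited virality

# The same platform, tuned two different ways:
growth_tuned = Affordances()  # the engagement-maximizing defaults
quality_tuned = Affordances(
    max_posts_per_hour=5,
    allow_dms_to_strangers=False,
    max_reshare_depth=3,
)
```

Every platform has some version of this table, whether or not anyone has ever written it out.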
By default, social platforms have those settings tuned to maximize engagement and growth, at the expense of quality. Sadly, it turns out, content that gets high engagement tends to be, well, bad. The builders of those platforms chose to reward the wrong behavior, and so the wrong behavior runs rampant.
This can be fixed through technical tweaks: feature limits, dampers to virality, and so on. But companies must set up internal systems so that the engineers who make those changes are rewarded, not punished. If the companies that run platforms changed their internal incentive structures, many of these problems would go away before any content moderation was needed.
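As a hedged illustration of one such tweak (again invented, not any platform’s actual ranking code), a damper to virality might down-weight an item’s distribution as its reshare chain gets deeper:

```python
def dampened_score(base_score: float, reshare_depth: int,
                   damping: float = 0.7) -> float:
    """Toy virality damper: every extra hop in a reshare chain
    multiplies the item's distribution score by `damping`, so deep
    cascades spread more slowly. Purely illustrative numbers."""
    return base_score * (damping ** reshare_depth)

# A post five reshare-hops deep keeps about 17% of its original reach:
print(dampened_score(100.0, reshare_depth=5))  # ~16.8
```

The specific formula doesn’t matter. What matters is that virality is a dial platforms already control, and whether anyone inside the company is rewarded for turning it down.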
We’ll always need some content moderators. But they should be a last resort, not a first line of defense.
How can we share information and best practices so that smaller platforms and startups can create ethical and human-centered systems at the design stage?
Thanks for this softball question! I think we’re doing that pretty well over at the Integrity Institute. We are a home for integrity professionals at all companies. Our first, biggest, and forever project has been building the community of people like us. In that community, people can swap tips, share best practices, and learn in a safe environment.
Drawing from that community, we brief startups, platforms, and other stakeholders on the knowledge emerging from it. We’re defining a new field, and it’s quite exciting.
Going more abstract, however, I think the problem is also one of defaults and larger systems. How easy is it for a startup to choose ethics over particularly egregious profits? How long will that startup survive (and how long will the CEO stay in charge)? The same goes for larger companies, of course.
Imagine a world where doing the right thing gets your company out-competed, or you personally fired. Pretty bleak, huh?
We’re trying to fix that, in part by establishing an integrity Hippocratic oath. This would be a professional oath that all integrity workers take: to put the public interest first, to tell the truth, and more. But that’s only one small piece of the puzzle.
What makes YOU optimistic that we, as a society, can build a tech future aligned with our human values?
In 1649, the people of England put their king on trial, found him guilty of “unlimited and tyrannical power,” and cut off his head. I imagine this came as quite a shock to him. More interestingly, perhaps, I imagine that it came as a shock to the people themselves.
In extraordinary times, people — human beings — can come together to do things that seemed impossible, unthinkable, even sacrilegious just a few days before.
Within living memory in this country, schoolchildren were drilled to dive under their desks against the threat of global nuclear Armageddon. Things must have seemed terrible. Yet those children grew up, had children of their own, and made a gamble that the future would indeed be worth passing on to them. I think they were right.
We live in interesting times. That’s not necessarily a great thing: boring, stable, peaceful times have a lot going for them. It doesn’t seem like we have much of a choice, though. In interesting times, conditions can change quickly. Old ideas are shown to be hollow and toothless. Old institutions are exposed as rotten. The new world struggles to be born.
I look around and I see immense possibilities all around me. It could go very badly. We could absolutely come out of this worse than we came in. Anyone — any future — can come out on top. So, why not us? Why not team human?