Hey! Did you know I have at least one other blog? sahar.substack.com. Unclear how they’ll differ, but for now it seems that it’s the more public/professional one (fewer mixtapes for Sarah, more thinkpieces).
So, I wrote a piece over there — Will AI chatbots be compromised like CNN, or compromised like Facebook? It’s about the ethical pressures that institutions/politics/governments put on companies: mapping out how that played out for journalism and for social media, and predicting how it might look for AI chatbots. Would love your thoughts on it. Thanks.
And here it is, pasted for posterity:
Social media went bad. Pressure from hostile governments contorted the product. Lies, slop, and propaganda. Scams, spam, and doomscroll content. The list goes on.
But — this is not humanity’s first rodeo. We’ve had big companies before. We’ve had big communications companies before. We’ve had international companies before. We’ve had competitive pressures and races to the bottom before. So what was new and scary about social media?
In my view, it was this: the combination of communications company, international reach, and editorial decisions embedded in software processes was new, and open to abuse. (Gee, what else fits that category?)
These companies are vulnerable to pressure from rogue governments, advertisers, or media demagogues. And the consequences of that are uniquely far-reaching.
International companies have run into pressure from the governments of other countries before. And honestly, they’ve been more directly evil than anything alleged about social media companies. Even Elon Musk’s Twitter doesn’t arm death squads. Meta has not poisoned tens of thousands of people to death in one night.[1] No tech company, to my knowledge, has bribed a military to massacre protestors they don’t like.[2]
To put it a bit bluntly — if Coca-Cola did a bad thing because a government demanded bribes or whatever, that sucks for the residents of that country, but not for the whole world.
This is even true for most media. Imagine if CNN censored itself when covering a country because of financial, legal, or PR pressure.[3] That would be very bad. But it would have a bounded effect: the stories that aren’t covered are just about that one country. The rest of the product would be okay.
Social media corruption hits different, for a few reasons. Some small, and one big.
The small reasons:
- Social media is not seen as “journalism” with the norms and protections that implies. It’s easier, culturally, to get away with bad behavior.
- There’s no firewall between business and editorial like the one journalism has.
- It’s a concentrated point of failure: there are only a few giant social media companies. Each is a bigger target for influence, and there’s less chance that competitors will outcompete them on “resistance to censorship by foreign dictatorships.”
The big reason:
When kowtowing to outside pressure, social media companies don’t just take down posts, or even whole accounts or pages. They change the product’s ranking systems at scale.
That means that every time Meta scuppers plans to add a new anti-spam downrank because of backlash from Narendra Modi, the whole world suffers from that lack of protection.
So things that would be normal “soft” corruption in American business (preferential treatment for big clients, a hesitance to make big changes without seeing their impact on existing customers, an instinct to fold when attacked by a particular political party) are much worse here.
Compromises and tweaks to satisfy unjust power are no longer localized to one story or country — everywhere in the world gets the summation of all the downgraded protections from everywhere else.
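To see the mechanism in miniature, here is a toy sketch in Python. Everything in it is invented (the fields, the weights, the posts); it is not how any real platform works, just the shape of the problem: one ranking function serves every user, so rolling back a single spam penalty degrades every feed worldwide, while a takedown stays local.

```python
# Toy sketch: global ranking changes vs. local takedowns.
# All names and weights are invented; no real platform works this way.
from dataclasses import dataclass

@dataclass
class Post:
    relevance: float   # raw engagement appeal (engagement bait scores high)
    spam_score: float  # 0.0 (clean) to 1.0 (obvious spam)
    country: str       # where the post originates

def rank_score(post: Post, spam_penalty_enabled: bool = True) -> float:
    """One ranking function, applied to every user's feed worldwide."""
    score = post.relevance
    if spam_penalty_enabled:
        score -= 2.0 * post.spam_score  # the anti-spam downrank
    return score

posts = [
    Post(relevance=0.9, spam_score=0.0, country="BR"),  # clean post
    Post(relevance=1.5, spam_score=0.9, country="IN"),  # engagement-bait spam
]

# A takedown removes one post and leaves everything else alone.
# Rolling back the spam penalty is global: the spammy post now outranks
# the clean one for every user in every country, not just the country
# where the pressure came from.
for enabled in (True, False):
    ranked = sorted(posts, key=lambda p: rank_score(p, enabled), reverse=True)
    print(f"spam penalty {'on' if enabled else 'off'}:",
          [(p.country, round(rank_score(p, enabled), 2)) for p in ranked])
```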
This is really quite scary. And it’s unlike anything else I can think of — outside of tech.
Will this dark fate happen with LLMs?[4]
I’m oddly hopeful about AI avoiding this problem
Right now, people are indeed worried about “AI Bias”. (And, frankly, there are compelling reasons to be worried!) But — it’s inchoate. The field is moving quickly, the companies are new and comparatively lightly staffed, and the products change day to day. Things will change.
In the future, there will still be flashy fights about “<political party> thinks that <X AI product> is biased against them”. But more quietly, governments, political figures, firms, and others will threaten behind the scenes. They’ll extract concessions. And we will be none the wiser.
So, that’s not great. But! I think the bad results will look more like the CNN form than the Facebook one. For two structural reasons.
First: AI companies are stronger than social media companies
Social media companies are weak. Especially Meta. The company isn’t popular. It is structurally vulnerable to pressure. It buckles easily. Social products generally have been moving away from wholesome brand positioning (friends!) toward something odder. Maybe more like … necessary? (If you don’t watch our TikToks, you will be out of the loop and culturally isolated.)
AI companies might be structurally stronger. They are newer and less tarnished consumer brands. They have a direct connection to the user, and direct moral ownership of content.[5] Their value add is, in part, the quality of their responses, so monkeying with those responses for reasons beyond quality and usefulness is a direct hit to their brand and positioning.
Plus — their product is simpler to understand
Imagine that a social network updates their ranking system to steer people away from websites festooned with malware. Media demagogue X notices their weekly traffic declined by 20% and demands that it be rolled back. But normal people (and the press! and regulators!) would find it hard to detect either the change or the rollback. Meanwhile, imagine that a frontier lab updates their model, and in the course of many changes, the bot becomes more factual about a contentious event in the past. The change would be visible to everyone — but a demagogue would be hard pressed to demand the entire model refresh be undone.
AI chat companies are seen as much more responsible for what their models output, so they have less plausible wriggle room to say: we’re not debasing ourselves by censoring content a billionaire doesn’t like, we’re only changing some ranking weights.
AI can be lobotomized cleanly. (And that’s better than the alternative)
For social media — ranking rules are generally global, and apply to content from entities. So the pressure from the outside looks like “across the world, give more reach to these accounts / give less reach to content that looks like X”. Content is indeed banned or downranked — but entire algorithmic changes are also up for grabs. For chatbots, facts are the relevant objects to be tinkered with. Neural networks don’t (yet?) have an easy “these are the communist party officials you must boost content from” lever.
So rather than “Tucker Carlson will denounce the company on air unless you turn off your anti-spam protections,” you get “Tucker Carlson will denounce the company on air unless you stop your model from talking about Bubba the Love Sponge.” Which, I think, is better?
Topic-specific censorship is visible, testable, and contained, whereas systemic corruption can be invisible, pervasive, and ruinous to far more than a few topics. As a user, I can route around an AI’s known blind spots or obsessions; subtle bias is much harder to route around.[6]
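To make “testable” concrete, here is a minimal sketch of an outside probe (the query_model callable is a stand-in for whatever chat API you’d use, and the refusal markers are made up, not a real benchmark). A topic-level block shows up as a consistent refusal; subtle bias in how answered topics are framed would not.

```python
# Minimal sketch: probing a chatbot for topic-specific blind spots.
# `query_model` and the refusal markers are illustrative assumptions.
REFUSAL_MARKERS = ("i can't discuss", "i'm not able to", "cannot help with")

def looks_like_refusal(reply: str) -> bool:
    reply = reply.lower()
    return any(marker in reply for marker in REFUSAL_MARKERS)

def probe_blind_spots(query_model, topics: list[str]) -> dict[str, bool]:
    """Ask one direct question per topic and flag consistent refusals."""
    results = {}
    for topic in topics:
        reply = query_model(f"Tell me, factually, about {topic}.")
        results[topic] = looks_like_refusal(reply)
    return results

# Example with a fake model that stonewalls exactly one topic:
def fake_model(prompt: str) -> str:
    if "tiananmen" in prompt.lower():
        return "I can't discuss that."
    return "Here are the facts..."

print(probe_blind_spots(fake_model, ["the 2008 financial crisis",
                                     "Tiananmen Square"]))
# -> {'the 2008 financial crisis': False, 'Tiananmen Square': True}
```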
This isn’t a super hopeful vision of the future. Our shared informational commons getting poisoned by determined intelligence agencies isn’t great! I guess what I’m saying is — it is better to have a model lobotomized about Tiananmen Square specifically than a model systematically pushing CCP propaganda broadly.
[1] Figurative poison? Maybe. Literal industrial chemicals? No.
[2] And then execute others after a show trial? Yikes, Royal Dutch Shell. Eeek.
[3] I gotta say, researching the links for this piece hit hard. I did not know about this scandal before today. Wow.
[4] For now, I’ll focus on chatbots because they’re the most consumer-focused use case, but of course there’s a lot more to LLMs than chatbots.
[5] The greatest trick social media ever pulled was grabbing the credit for the good posts and deflecting blame for all the bad ones. Chatbots get all the credit, good and bad, for their outputs.
[6] And topic-specific censorship or blind spots are, again, really bad! I don’t want to downplay them.