Categories
Tech

Will AI chatbots be compromised like CNN, or compromised like Facebook?

Hey! Did you know I have at least one other blog? sahar.substack.com. It’s unclear how they’ll differ, but for now it seems to be the more public/professional one (fewer mixtapes for Sarah, more thinkpieces).

So, I wrote a piece over there — Will AI chatbots be compromised like CNN, or compromised like Facebook? It’s about the ethical pressures that institutions, politics, and governments put on companies. It maps out how that played out for journalism and for social media, and predicts how it might look for AI chatbots. I’d love your thoughts on it. Thanks.


And here it is, pasted for posterity:

Social media went bad. Pressure from hostile governments contorted the product. Lies, slop, and propaganda. Scams, spam, and doomscroll content. The list goes on.

But — this is not humanity’s first rodeo. We’ve had big companies before. We’ve had big communications companies before. We’ve had international companies before. We’ve had competitive pressures and races to the bottom before. So what was new and scary about social media?

In my view, it was this combination: a communications company, with international reach, whose editorial decisions are embedded in software processes. That was new, and open to abuse. (Gee, what else fits that category?)

These companies are vulnerable to pressure from rogue governments, advertisers, or media demagogues. And the consequences of that are uniquely far-reaching.

International companies have run into pressure from the governments of other countries before. And honestly, they’ve been more directly evil than anything alleged about social media companies. Even Elon Musk’s Twitter doesn’t arm death squads. Meta has not poisoned tens of thousands of people to death in one night1. No tech company, to my knowledge, has bribed a military to massacre protestors they don’t like.2

To put it a bit bluntly — if Coca Cola did a bad thing because a government demanded bribes or whatever, that sucks for the residents of that country, but not the whole world.

This is even true for most media. Imagine if CNN censored themselves when covering a country because of financial, legal, PR pressure, etc.3 That would be very bad. But it would have a bounded effect: the stories that aren’t covered are just about that one country. The rest of the product would be okay.

Social media corruption hits different, for a few reasons. Some small, and one big.

The small reasons:

  • Social media is not seen as “journalism” with the norms and protections that implies. It’s easier, culturally, to get away with bad behavior.
  • There’s no firewall between business and editorial like the one journalism has.
  • It’s a concentrated point of failure: there are only a few giant social media companies. Each is a bigger target for influence, and there’s less chance that competitors will outcompete them on “resistance to censorship by foreign dictatorships”.

The big reason:

When kowtowing to outside pressure, social media companies don’t just take down posts or even accounts or pages. They change the ranking systems of the product at scale to satisfy outside pressure.

That means that every time Meta scuppers plans to add a new anti-spam downrank because of backlash from Narendra Modi, the whole world suffers from that lack of protection.

So things that would be normal “soft” corruption in American business (preferential treatment for big clients, a hesitance to make big changes without seeing their impact on existing customers, an instinct to fold when attacked by a particular political party) are much worse here.

Compromises and tweaks to satisfy unjust power are no longer localized to one story or country — everywhere in the world gets the summation of all the downgraded protections from everywhere else.

This is really quite scary. And it’s unlike anything else I can think of — outside of tech.

Will this dark fate happen with LLMs?4

I’m oddly hopeful about AI avoiding this problem

Right now, people are indeed worried about “AI Bias”. (And, frankly, there are compelling reasons to be worried!) But — it’s inchoate. Firms are moving quickly, companies are new and comparatively lightly staffed, and the products change day to day. Things will change.

In the future, there will still be flashy fights about “<political party> thinks that <X AI product> is biased against them”. But more quietly, governments, political figures, firms, and others will make threats behind the scenes. They’ll extract concessions. And we will be none the wiser.

So, that’s not great. But! I think the bad results will look more like the CNN form than the Facebook one, for two structural reasons.

First: AI companies are stronger than social media companies

Social media companies are weak. Especially Meta. The company isn’t popular. It is structurally vulnerable to pressure. It buckles easily. Social products generally have been moving away from the wholesome (friends!) in their brand positioning toward something odder. Maybe more like … necessary? (If you don’t watch our TikToks, you will be out of the loop and culturally isolated.)

AI companies might be structurally stronger. They are newer and less tarnished consumer brands. They have a direct connection to the user, and direct moral ownership of content.5 Their value add is, in part, the quality of their responses, so monkeying with those responses for reasons beyond quality and usefulness is a direct hit to their brand and positioning.

Plus — their product is simpler to understand

Imagine that a social network updates their ranking system to steer people away from websites festooned with malware. Media demagogue X notices their weekly traffic declined by 20% and demands that it be rolled back. But normal people (and the press! and regulators!) would find it hard to detect either the change or the rollback. Meanwhile, imagine that a frontier lab updates their model, and in the course of many changes, the bot becomes more factual about a contentious event in the past. The change would be visible to everyone — but a demagogue would be hard pressed to demand the entire model refresh be undone.

AI chat companies are seen as much more responsible for what their models output, so they have less plausible wriggle room to say: we’re not debasing ourselves by censoring content a billionaire doesn’t like, we’re only changing some ranking weights.

AI can be lobotomized cleanly. (And that’s better than the alternative)

For social media — ranking rules are generally global, and apply to content from entities. So the pressure from the outside looks like “across the world, give more reach to these accounts / give less reach to content that looks like X”. Content is indeed banned or downranked — but entire algorithmic changes are also up for grabs. For chatbots, facts are the relevant objects to be tinkered with. Neural networks don’t (yet?) have an easy “these are the communist party officials you must boost content from” lever.

So rather than “Tucker Carlson will denounce the company on air unless you turn off your anti-spam protections”, you get “Tucker Carlson will denounce the company on air unless you stop your model from talking about Bubba the Love Sponge”. Which, I think, is better?

Topic-specific censorship is visible, testable, and contained. Systemic corruption, by contrast, can be invisible, pervasive, and ruinous to much more than a few topics. As a user, I can route around an AI’s known blind spots or obsessions, but subtle bias is much harder to route around.6

This isn’t a super hopeful vision of the future. Our shared informational commons getting poisoned by determined intelligence agencies isn’t great! I guess what I’m saying is — it is better to have a model lobotomized about Tiananmen Square specifically than a model systematically pushing CCP propaganda broadly.

  1. Figurative poison? Maybe. Literal industrial chemicals? No. ↩︎
  2. And then execute others after a show trial? Yikes, Royal Dutch Shell. Eeek. ↩︎
  3. I gotta say, researching the links for this piece hit hard. I did not know about this scandal before today. Wow. ↩︎
  4. For now, I’ll focus on chatbots because they’re the most consumer-focused use case, but of course there’s a lot more to LLMs than chatbots. ↩︎
  5. The greatest trick social media ever pulled was grabbing the credit for the good posts and deflecting blame for all the bad ones. Chatbots get all the credit, good and bad, for their outputs. ↩︎
  6. And topic-specific censorship or blind spots are, again, really bad! I don’t want to downplay them. ↩︎
Categories
Personal

Oh hey I got married

So, in case you missed it, I got married in late July / early August of 2023. I haven’t actually written too much about it publicly, just the bit I wrote here in Yenta.

I haven’t written about the honeymoon at all. It was delightful. Here are the topline ideas about the honeymoon:

  • We chose something easy and quiet to balance out the social and crowded week-long wedding festivity.
  • We stayed exclusively in old-fashioned bed-and-breakfasts.
  • First, we went to the village of Gananoque, in Canada. It’s right by the Thousand Islands.
  • This has symbolic resonance because we had both been there on a road trip the day before we kissed for the first time.
  • We went kayaking, walked around town, and played a ton of Frosthaven.
  • Then we went to Stratford. It’s the home of the Stratford Shakespeare Festival, and the subject of a loving parody in Slings and Arrows. Years before we were dating, Sarah suggested I watch the show (it’s fantastic, an office comedy about people who work in a theatre, with the drama to match). It was my secret. “I have a crush on Sarah, let me remind myself by watching this niche TV show only she seems to know about”.
  • The Shakespeare at Stratford was amazing. We even realized, by accident, that Paul Gross, the frontman of Slings and Arrows, was performing as King Lear. Wow!
  • Plus our BnB hostess was fantastic.
  • Plus lots of Frosthaven.
  • And lots of listening to Shakespeare as we drove a car for hours at a time to get to all these places.
  • It was delightful. Now you know!

I also wrote a longish retrospective, framed as a set of tips for wedding planning: My wedding (and how to plan a great one).

It’s all on my long-dormant substack.

There’s a lot there, but here are just the topline tips:

  1. Food trucks! They solve so many problems.
  2. Understand this: the point of a wedding is to bring your people together and get them to understand why you should be married.
  3. Your wedding can be a week-long party where you show off your home.
  4. Community housing can be a key part of the experience.
  5. We got married outside, at a nature center.
  6. We invested in great music.
  7. Swords! (Invest in people getting to know each other, part 1.)
  8. Secret missions! (Investing in introductions, part 2.)
  9. The point of getting married is to help the world understand the relationship that you already have.
  10. Emailed (or texted) invitations are fine.
  11. Have a simple, relaxing honeymoon.
  12. Dress amazing, not formal.
  13. Wedding rings don’t need to be stressful, boring, expensive, and useless.
  14. Redirect parent energy.
  15. Get married in the early afternoon.
  16. Replace vows with stories.
  17. Children are great! Extra friends are great!
  18. Paradoxically: treat +1s with care.
  19. Speeches are actually good — but space them out.
  20. Have a special moment with everyone with this one weird trick.
  21. Don’t sweat the details. Many times, we told people, “if someone asks us what color napkins we want, then we are doing something horribly wrong”.
(Bonus: listen to tradition. Have your wedding on a Sunday.)

And what we learned:

  1. Plan earlier, and there’s no need to get overwhelmed.
  2. Use a CRM. Avoid WithJoy.
  3. You need a day-of captain.
  4. You need an escape route.
  5. Remember to schedule time and energy for thank-you notes.

Read the whole thing here (with photos!)

Lastly — I’ve been thinking about it, and I’d like to go to more weddings. Please invite me! I am a great guest. Fun dancer, gregarious, make friends with your friends. You won’t regret it.

Categories
Misc

The McDonald’s gambit

A long while ago I read about a concept the author called “the McDonald’s gambit”. Some web sleuthing couldn’t turn it up again, but I use this idea all the time. So this is me recording it for future citation.

Imagine you’re with a group of people. Maybe friendly acquaintances, or coworkers. Maybe just you and your girlfriend. You’re trying to figure out where to go eat.

No one suggests an idea. There’s just silence. Probably because no one wants to impose their views on others. But in deferring to everyone’s preferences, they aren’t respecting people’s time, or their ability to speak up for themselves.

You need to break this logjam. You don’t need to propose an actually good idea; in fact, a good suggestion is kind of unhelpful here. The point isn’t to impose your favorite restaurant on the group. The point is to start discussion. The tactic: provoke them by making an outlandishly bad suggestion.

Say this — “how about we get McDonald’s?”

“McDonald’s!?” someone will cry out. “I mean we might as well go to Thai Thanic, that’s better than McDonald’s. Eww”.

Then someone will say “Oh, I don’t like Thai Thanic, let’s try Kumquat Kitchen”.

And presto! You’ve successfully jolted people into expressing their real preferences. All through the power of the McDonald’s Gambit.

(And of course, pro-tip, it doesn’t have to be just about food.)

Cross-posting to my substack