
Questions about “Web3” and “Content Moderation”

I moderated a panel at Unfinished Live a couple weeks ago. The panel was not recorded. The day’s topic was “Web3”, and the panel topic was chosen for me: Content Moderation.

Now, I really don’t like the framing of “content moderation” (or “Trust and Safety”). Oh well.

Here are the questions I led the session description with:

How might the traditional process of moderating content and behavior online look in a truly decentralized Web3 world? A shared protocol might be harder to edit than a central codebase; distributed data might be harder to change. However, new capabilities (like smart contracts) might improve aspects of moderation work. How might the theory and practice of integrity in Web3 compare to the systems we are accustomed to today?

And here are the (hopefully challenging) advanced questions I tried to ask:

  • One argument is that content moderation is really one manifestation of larger questions: how should platforms or protocols be designed? What are the terms of service, and how are they enforced? In short, these are questions of governance. Do you agree? Do you think narrow questions of enforcing terms of service can be separated from these larger questions?
  • As I see it, when it comes to writing and enforcing terms of service, there are two proposed alternatives to platform *dictatorship*: democratization and decentralization. On the surface, decentralization and democratization seem opposed: a world where “the users vote to ban nazi content” conflicts with a world where “you can choose to see or not see nazi content as you like”. Must they be opposed? How might they be complements rather than two opposing visions?
  • One thing I keep coming back to in this work is a chart that Mark Zuckerberg (or his ghostwriter), of all people, put out back in 2018. It’s a pretty simple chart, and an abstract one: as content gets closer to “policy violating”, engagement goes up. That is, people have a natural tendency to gravitate towards bad things — where “bad” could mean hateful content, misinformation, calls to violence, what have you. Colloquially, think back to the web1 era of forums: flame wars would get a ton of engagement, almost by definition. The corollary to this insight is that the _design_ of the experience matters a ton. You want care put into creating a system where good behavior is incentivized and bad behavior is not. If we’re focused on a model of either decentralized or democratized content moderation, aren’t we distracted from the real power: the design of the protocol or platform?
  • In thinking through governance, it seems like there’s a question of where legitimacy and values might be “anchored”, as it were. On one hand, we generally want to respect the laws and judgment of democratic countries. On the other, we want to design platforms that are resistant to surveillance, censorship, and control by unfriendly authoritarian countries. It seems like an impossible design question: make something resilient to bad governments, but accountable to good ones. Is this in fact impossible? Is the answer to somehow categorize laws or countries that are “to be respected” vs. those “to be resisted”? To only operate in a few countries? To err more fully on the side of “cyber independence by design” or on the side of “we follow all laws in every country”?

In the end, it was a pretty fun panel. I think we drifted away from “content moderation” straight towards governance (which was supposedly a different panel), governance here being “who decides community standards?” I think that’s because we all agreed that any work enforcing community standards is downstream of the rules as written, and of the resourcing to actually do the job. So that was nice.

Made some friends (I hope!) too.


“Integrity as city planning” meets actual city planners

This one is fun. This one is really fun.

You may remember that a while ago I published my big piece on Governing the city of atomic supermen in MIT Tech Review. I really liked it, the world seemed to like it, it was a big deal! The central conceit of the piece is that social media is like a new kind of city, and that integrity work is a type of new city planning.

So! There’s a community of people who are obsessed with actual, real, cities. One of them, Jeff Wood of The Overhead Wire, reached out to me, and we had an amazing conversation. Him from the city planner / city advocate world, me from the internet.

You might think that this gimmick would only last for about 20 minutes of conversation, and then we’d run out of things to talk about. That’s reasonable, but it turns out you’re wrong! We just kept talking, and the longer we went, the more interesting it got.

I can’t think of a podcast episode I’ve done that was more fun or went deeper. If you haven’t listened to any yet, this is the one to check out.

Link: https://usa.streetsblog.org/2022/02/10/talking-headways-podcast-treating-social-media-like-a-city/

We talked about fun new things like:

  • To what extent is social media like the mass adoption of the automobile?
  • Are company growth metrics the analogue of “vehicle miles traveled” goals/grants by the Department of Transportation?
  • Is there a coming collapse of rotten social networks due to all the spam and bots? Is that like climate change?
  • I learned a lot about hot new topics in urbanism! Like the four-step model.
  • Induced demand in freeways as an analogue to bad faith accusations of “censorship” when social media companies try to crack down on abuse.
  • Path dependency is a hell of a drug.
  • Corruption, the history of asphalt, and ethics in social media / city governance. Building code corruption and “let’s bend the rules for our large advertisers” corruption.

My quick notes on the conversation:

  • First 14 minutes or so: Intro to me, integrity design, theory of integrity. Mostly stuff you might have heard before elsewhere.
  • Minutes 14 – 23: Do you actually need to bake in integrity design from the beginning? How is growing a social app similar to (or not) growing a city from a village? Online vs in-person social behavior.
  • Minute 19: A lot of the work has shaded into organizational design. What I imagine they teach you in MBA school. How to set up an organization with the right incentives.

  • Minute 22, quoting myself: “The growth of a city is in some sense bounded by the number of homes you can build in a period of time, right? You’re not going to see a club of 15 artists turn into a metropolis of 2 million people in the span of two weeks. It’s just physically impossible to do it. And that gives people some human-scale time to figure out the emerging problems and have some time to experiment with solutions as the city grows. And that’s a sort of growth. That’s a story about the growth of a small platform to a big one, but it’s also the same kind of thing of just how lies are spread, how hate speech is spread — any sort of behavior.”
  • Minute 24: Power users of social media. Power users of automobiles. How are they similar and different?
  • Minute 30: The reason spam is a solved* problem on email is that the email providers have a sort of beneficent cartel. (Before Evelyn Douek corrects me — “solved” in the sense that we’re not having a panic about how Gmail is destroying society, or about Outlook’s spam filter not working.)
  • Minute 35: Jeff Wood brings up a new metaphor. “20 is plenty” (as a speed limit for cars). How well does it work for online?
  • Minute 40: My pet metaphor for integrity work — platforms are often a gravity well that incentivizes bad behavior. Doing the wrong thing feels like walking downhill; doing the right thing takes effort.
  • Minute 41-45: Vehicle Miles Traveled, the 4-step model, departments of transportation. Cars and social media and bad metrics. Bad metrics -> bad choices
  • Minute 46 – 51: If at first you don’t do the right thing, and then you try to do the right thing, people will complain. Whether it’s suburban sprawl or finally cracking down on spammers, they’ll act all righteous and go yell in public meetings. But in the end they did something wrong (in the social media case) or were receiving an unjust subsidy that you’re finally removing (in both cases).
  • Minute 53 – 58: We’ve been talking design here. But let’s not forget actual, literal corruption.
  • Minutes 58 onwards: Ending

These notes don’t do it justice. It was just such a delight. Grateful to Jeff Wood for a great conversation.


A right-libertarian take on integrity work

Back in 2020, you might remember that I had yet to commit to integrity work as my big next focus of ideas and identity. What was I focused on instead? Political economy. Specifically, I was in the orbit of the lovely Law and Political Economy project. They’re great, check them out!

You might particularly remember that I went on one of my first ever podcast appearances, with my friend Kevin Wilson, Libertarian. We talked about a right-libertarian case for breaking up Facebook. It was fun!

Well, it’s been over a year since then, and I went back on his show. This time, I talked about Integrity Institute and some of my ideas for libertarian-friendly ways to do integrity work.

The title of the episode is: Can you fix social media by targeting behavior instead of speech? I really liked it. It was fun, nuanced, and far-ranging. We went so far over time that Kevin recorded a full bonus spillover episode going over “how do you make this beautiful future actually happen.”

I’m told that for some of my biggest fans (aka my parents) this is their favorite podcast I’ve been on. Kevin does a great job asking questions that both give me time to sketch out a full answer and push me out of my comfort zone. Give it a listen.


Some thoughts on human experience design

There’s an organization, All Tech Is Human. They’re pretty cool! At Integrity Institute, we’re figuring out how to be good organizational friends with them.

They asked me, and a bunch of other people, to answer some questions about technology and society. I like my answers. Here they are! And here’s the link to the full report. (Uploaded to the Internet Archive instead of Scribd — thanks Mek!)

In it, I try to keep the focus on people and power, rather than “tech”. Also: content moderation won’t save us, care must be taken with organizational design, and there’s a cameo by the English Civil War. Plus — never forget Aaron Swartz. Let me know what you think!

Tell us about your current role:

I run the Integrity Institute. We are a think tank powered by a community of integrity professionals: tech workers who have on-platform experience mitigating the harms that can occur on or be caused by the social internet.

We formed the Integrity Institute to advance the theory and practice of protecting the social internet. We believe in a social internet that helps individuals, societies, and democracies thrive.

We know the systemic causes of problems on the social internet and how to build platforms that mitigate or avoid them. We confronted issues such as misinformation, hate speech, election interference, and many more from the inside. We have seen successful and unsuccessful attempted solutions.

Our community supports the public, policymakers, academics, journalists, and technology companies themselves as they try to understand best practices and solutions to the challenges posed by social media.

In your opinion, what does a healthy relationship with technology look like?

Technology is a funny old word. We’ve been living with technology for thousands of years. Technology isn’t new; only its manifestation is. What did a healthy relationship to technology look like 50 years ago? 200 years ago?

Writing is a form of technology. Companies are a form of technology. Government is a form of technology. They’re all inventions we created to help humankind. They are marvelously constructive tools that unleash a lot of power, and a lot of potential to alleviate human suffering. Yet, in the wrong hands, they can do correspondingly more damage.

Technology should help individuals, societies, and democracy thrive. But it is a truism to say that technology should serve us, not the other way around. So let’s get a little bit more specific.

A healthy relationship to technology looks like a healthy relationship with powerful people. People, after all, own or control technology. Are they using it for social welfare? Are they using it democratically? Are they using it responsibly? Are they increasing human freedom, or diminishing it?

We will always have technology. Machines and humankind have always coexisted. The real danger is in other humans using those machines for evil (or neglect). Let’s not forget.

What individuals are doing inspiring work toward improving our tech future?

If we lived in a better world, Aaron Swartz would no doubt be on top of my list. Never forget.

If one person’s free speech is another’s harm and content moderation can never be perfect, what will it take to optimize human and algorithmic content moderation for tech users as well as policymakers? What steps are needed for optimal content moderation?

Well, first off, let’s not assume that content moderation is the best tool here. All communications systems, even ones that have no ranking systems or recommendation algorithms, make implicit or explicit choices about affordances. That is, some behavior is rewarded, and some isn’t. Those choices are embedded in code and design. Things like: “How often can you post before it’s considered spam?” or “Can you direct-message people you haven’t met?” or “Is there a reshare button?”

Default social platforms have those settings tuned to maximize engagement and growth — at the expense of quality. Sadly, it turns out, content that has high engagement tends to be, well, bad. The builders of those platforms chose to reward the wrong behavior, and so the wrong behavior runs rampant.

Fixing this can be done through technical tweaks: things like feature limits, dampers to virality, and so on. But companies must set up internal systems so that the engineers who make those changes are rewarded, not punished. If the companies that run platforms changed their internal incentive structures, then many of these problems would go away — before any content moderation would be needed.
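
To make this concrete, here’s a rough sketch of what those affordance knobs could look like in code. Every name and number here is invented for illustration; it’s not any real platform’s settings:

```python
# Hypothetical affordance "knobs", sketched to illustrate the point above.
# All fields and values are made up; real platforms have far more of these.
from dataclasses import dataclass

@dataclass
class Affordances:
    max_posts_per_hour: int        # past this, posting starts to look like spam
    allow_dm_to_strangers: bool    # can you message people you haven't met?
    reshare_button: bool           # is one-click amplification available at all?
    max_reshare_depth: int         # a damper on virality: how far a chain can go

# Tuned purely for engagement and growth.
engagement_first = Affordances(
    max_posts_per_hour=100,
    allow_dm_to_strangers=True,
    reshare_button=True,
    max_reshare_depth=10_000,
)

# The same product with friction added, before any content moderation happens.
integrity_first = Affordances(
    max_posts_per_hour=10,
    allow_dm_to_strangers=False,
    reshare_button=True,
    max_reshare_depth=2,
)
```

The point isn’t the specific numbers; it’s that these choices exist, someone makes them, and by default they get tuned toward growth.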

We’ll always need some content moderators. But they should be a last resort, not a first line of defense.

How can we share information and best practices so that smaller platforms and startups can create ethical and human-centered systems at the design stage?

Thanks for this softball question! I think we’re doing that pretty well over at the Integrity Institute. We are a home for integrity professionals at all companies. Our first, biggest, and forever project has been building the community of people like us. In that community, people can swap tips, help each other learn best practices, and learn in a safe environment.

Drawing on that community, we brief startups, platforms, and other stakeholders on the knowledge emerging from it. We’re defining a new field, and it’s quite exciting.

Going more abstract, however, I think the problem is also one of defaults and larger systems. How easy is it for a startup to choose ethics over particularly egregious profits? How long will that startup survive (and how long will the CEO stay in charge)? The same goes for larger companies, of course.

Imagine a world where doing the right thing gets your company out-competed, or you personally fired. Pretty bleak, huh?

We’re trying to fix that, in part by enforcing an integrity Hippocratic oath. This would be a professional oath that all integrity workers swear by — to put the public interest first, to tell the truth, and more. But that’s only one small piece of the puzzle.

What makes YOU optimistic that we, as a society, can build a tech future aligned with our human values?

In 1649, the people of England put their king on trial, found him guilty of “unlimited and tyrannical power,” and cut off his head. I imagine this came as quite a shock to him. More interestingly, perhaps, I imagine that it came as a shock to the people themselves.

In extraordinary times, people — human beings — can come together to do things that seemed impossible, unthinkable, even sacrilegious just a few days before.

Within living memory in this country, schoolchildren were drilled to dive under desks due to threats of global nuclear Armageddon. Things must have seemed terrible. Yet, those children grew up, bore children, and made a gamble that the future would indeed be worth passing on to them. I think they were right.

We live in interesting times. That’s not necessarily a great thing: boring, stable, peaceful times have a lot going for them. It doesn’t seem like we have much of a choice, though. In interesting times, conditions can change quickly. Old ideas are shown to be hollow and toothless. Old institutions are exposed as rotten. The new world struggles to be born.

I look around and I see immense possibilities all around me. It could go very badly. We could absolutely come out of this worse than we came in. Anyone — any future — can come out on top. So, why not us? Why not team human?


Integrity work is hard because of core company metrics

This is a quick and dirty little post — I tried to explain a theory of integrity to a friend via a series of texts. Wonder what you think of it:

Everyone is asking “how do I understand feeds and algorithms?”. Well, luckily we don’t have to start from scratch. How do the companies themselves understand these systems that they created?

They do it through metrics. Every time a change is teed up, it’s tested in a randomized controlled trial. By comparing metrics between the control group and the group that got the new feature, they get a sense of how well the feature does.

Those changed metrics are the skeleton key to understanding these companies. Each team has its own particular metrics, but the entire company shares a set of top metrics — every experiment in every team is evaluated against those company metrics. Those core metrics matter. The top metrics generally measure two things — growth and engagement. Let’s simplify and shorthand it to “growth” for now.

We can think of the news feed (or Twitter feed, or whatever) as being shaped by a *search*. To simplify just a bit: engineers turn the knobs of various settings slightly, then check the output — did growth go up? It’s hill-climbing. It’s a slower version of what machine learning does — finding local optima in n-dimensional space. We can think of the entire platform as being shaped by that same search — not just the ranking algorithms, but the design choices of the features themselves!
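
Here’s a toy sketch of that search loop, if it helps. The run_ab_test function is a made-up stand-in for a real experimentation system, and the knobs and numbers are invented:

```python
import random

# Toy version of the search described above: tweak one knob slightly, run an
# A/B test, keep the change only if the growth metric goes up. run_ab_test is
# a fake, noisy measurement standing in for a real experimentation platform.

def run_ab_test(settings: dict) -> float:
    """Pretend to measure 'growth' for a given configuration."""
    return sum(settings.values()) + random.gauss(0, 0.1)

def hill_climb(settings: dict, steps: int = 100) -> dict:
    best_growth = run_ab_test(settings)
    for _ in range(steps):
        candidate = dict(settings)
        knob = random.choice(list(candidate))
        candidate[knob] += random.choice([-0.1, 0.1])   # turn one knob slightly
        growth = run_ab_test(candidate)
        if growth > best_growth:                        # ship it only if growth rose
            settings, best_growth = candidate, growth
    return settings   # settles near a local optimum for growth, and only growth

print(hill_climb({"notification_volume": 1.0, "feed_virality": 1.0}))
```

Run that loop long enough and every knob drifts toward whatever maximizes the one metric being measured, which is exactly the dynamic integrity teams run into.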

The job of an integrity team is to *not* optimize on that metric. On a heavily optimized platform, that means that to do their work well, they’ll almost always have to erode growth somewhat. (Again, it’s not *necessarily* true, but in a heavily optimized world, every setting is already tuned for growth and growth alone.) Imagine that they’re able to successfully fight the internal battles to make the change that moves the company off the top of that hill. Now, every other team is heavily incentivized to roll back those changes and move back up that hill.

It doesn’t have to be conscious — often it isn’t! It’s just that there’s a juicy ability to get lots of growth impact by moving the settings back. They won’t necessarily even know they’re doing it — but it’ll probably happen.

This is why integrity work is so hard — and why organizational design needs to be part of the discussion.


Governing the city of atomic supermen

Social media is a new city, great and terrible. It’s also a dictatorship where all the residents have super powers. People can teleport, fly, churn out convincing android minions, disguise themselves perfectly, and coordinate telepathically.

How do you deal with this? What’s a fair way to govern a place where it’s hard to tell a robot minion from a real person, and people can assume new identities at will?

Thankfully, MIT Tech Review allowed me to ask and answer that question in a fancy publication!

Here’s the full article: How to save our social media by treating it like a city

Thank you to my Berkman fellow friends for helping me edit and polish it. Thank you also to a bunch of other friends (and family) too. It took months, and was a team effort.


Some quick points if you’re in a hurry:

  • Social media is like a new kind of city. There are good parts and bad parts. Right now, it’s a city of atomic supermen — people have tons of powers that they don’t really have in the physical world.
  • Our rules, norms, and intuitions right now assume that you *can’t*, for example, teleport.
  • Eventually, we’re going to figure out the rules and norms that work really well for that kind of world. For now, we’re mostly stuck with the norms we’ve evolved till today.
  • So let’s change the physics of the city to make the residents a little less superpowered.
  • Make it harder to make fake accounts. Make new accounts prove themselves with a “driving test” before they get access to the most abusable features. Put stringent rate limits on behavior that could be used for evil. (See the sketch after this list.)
  • Notice that none of this involves looking at *content* — if we design our online cities well, with speed bumps and parks and gardens and better physics, we can lessen the need for content moderation. This is the alternative to “censorship”.
  • Much, possibly most, of the integrity problem on platforms is spam of one sort or another. We know how to fight spam.
  • Now to the next point: corporate behavior. You can create an amazing set of rules for your platform. But they amount to less than a hill of beans if you don’t enforce them. And enforcing unevenly is arguably worse than not enforcing at all.
  • If you try to fix your system, perhaps by fixing a bug that allowed spammy behavior — there will be entities that lose. The ones that were benefitting from the loophole. Don’t let them stop you by loudly complaining — otherwise you can never fix things!
  • And now to the biggest point: listen to integrity workers. My coworkers and I had actual jobs where we tried to fix the problem. We are steeped in this. We know tons of possible solutions. We study details of how to fix it. We don’t always win internal battles, of course.
  • But we exist. Talk to us. Other integrity workers have their own frameworks that are equally or more insightful. They’re wonderful people. Help us — help them — do their jobs and win the arguments inside companies.
  • PS — Join the Integrity Institute.
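
Since the “driving test” bullet describes a concrete mechanism, here’s a rough sketch of it. The fields and thresholds are invented for illustration, not any platform’s actual rules:

```python
# Hypothetical "driving test" gating for new accounts. All field names and
# thresholds here are made up to illustrate the idea, not real policy.
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int
    verified_phone: bool
    strikes: int          # confirmed policy violations

def passed_driving_test(acct: Account) -> bool:
    """New accounts earn access to the most abusable features over time."""
    return acct.age_days >= 14 and acct.verified_phone and acct.strikes == 0

def max_posts_per_hour(acct: Account) -> int:
    # Stringent rate limits by default; looser ones once trust is earned.
    return 30 if passed_driving_test(acct) else 3

def can_mass_dm(acct: Account) -> bool:
    return passed_driving_test(acct)
```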


On the Tech Policy Press podcast

I forgot to mention this a while ago: Jeff and I were on a second fancy podcast when we launched. This time — Tech Policy Press with Justin Hendrix.

It was fun! Justin really understands these issues and asks good questions.

Plus, as a bonus, Aviv was brought on for part two. Worlds collide.


I’ll be on a panel at NYU on Dec 15th

Update: It went great! Here’s the recap link to watch it and get a summary.

Here’s what the recap said about my part:

As a former Facebook employee, Sahar Massachi stressed how the organizational dynamics inside social media companies influence their products. For example, to increase profit, Facebook optimizes for metrics like growth and engagement, which often tend to fuel harmful content. Although platforms have integrity workers to help mitigate these harms, the focus on engagement often undercuts their efforts. Only by changing the incentives, he said, can we change how social media companies approach harm on their platforms. Massachi co-founded the Integrity Institute to build a community of integrity workers to support the public, policymakers, academics, journalists, and social media companies themselves as they try to solve the problems posed by social media.


So, as part of my work with the Integrity Institute, I get to be on a fancy panel.

Wanna come?

Here are the details, copied from the website:

Reducing Harm on Social Media: Research & Design Ideas

Wednesday, December 15, 2021  |  3:00 – 4:15pm ET

When social media platforms first launched nearly two decades ago, they were seen as a force for good – a way to connect with family and friends, learn and explore new ideas, and engage with social and political movements. Yet, as the Facebook Papers and other research have documented, these same platforms have become vectors of misinformation, hate speech, and polarization.

With attention around social media’s impact on society at an all-time high, this event gathers researchers and practitioners from across the academic, policy, and tech communities to discuss various approaches and interventions to make social media a safer and more civil place.

Panelists

  • Jane Lytvynenko, Senior Research Fellow, Technology and Social Change Project, Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy (moderator)
  • Niousha Roshani, Deputy Director, Content Policy & Society Lab, Stanford University’s Program on Democracy and the Internet
  • Rebekah Tromble, Director, Institute for Data, Democracy & Politics, George Washington University
  • Joshua A. Tucker, Co-Director, New York University’s Center for Social Media and Politics
  • Sahar Massachi, Co-Founder and Executive Director, Integrity Institute

How to fix social media without resorting to widespread censorship

A little while ago, I made a big presentation at Berkman: Governing the Social Media City. In conjunction with the commanding Kathy Pham, I laid out some ideas for how I think about “fixing social media” by way of the metaphor of a city. Importantly, this means putting less weight on content moderation, and thinking a lot more about design.

It’s somewhat a guide to a few of my specific ideas, and also a primer on some of the ways that people in Integrity think about these problems.

Here’s the link. I’d love to know what you think.


A meta-proposal for Twitter’s bluesky project

My first-ever submission to SSRN was a success! Recently, I’ve been getting an email every day telling me that A meta-proposal for Twitter’s bluesky project is in the top ten downloads for a ton of journals.

Officially, I’m a co-author of a paper in the top 10 downloads for a bunch of SSRN topics.

Namely: CompSciRN Subject Matter eJournals, CompSciRN: Other Web Technology (Topic), Computer Science Research Network, InfoSciRN Subject Matter eJournals, InfoSciRN: Information Architecture (Topic), InfoSciRN: Web Design & Development (Sub-Topic), Information & Library Science Research Network, Libraries & Information Technology eJournal and Web Technology eJournal.

This is a little less impressive than it sounds. But I’m getting a little ahead of myself. Here’s the story:

How did this all happen?

As a Berkman fellow, the main thing one seems to do is go to recurring meetings for a range of working groups. Jad Esber, one of my esteemed colleagues, got the idea and invitation to give a proposal to Twitter on their Bluesky project. He rounded up a bunch of us, and together we spent 5-6 meetings going over parts of what he called a “meta-proposal” — our guide on how to review the other different proposals coming in.

Jad is a wonderful person, and I learned some project management tips just from being part of this process. Getting a fair-sized collection of people to agree on a document, quickly, is difficult! As far as I remember, he did it like so:

  • The first meeting is to scope out different ideas people have about what they want to say.
  • Jad then writes excellent notes and combines ideas into a manageable number of topics.
  • Each meeting after this includes just the subset of the original crew who feel like they have something to contribute.
  • Jad, who has taken good notes throughout these meetings, polishes them up a bit, then turns them into a paper.

It was easy! It was so nice. And I got to work with people I really enjoy, including but not limited to Crystal Lee and Tom Zick.

What the paper argues

The paper contains a bunch of ideas and warnings for a hypothetical new, decentralized social network. There are three big pillars: discovery & curation, moderation, and business model. It’s quite short, so I recommend you just read all of it — it is barely 5 pages long.

I do care quite a bit about integrity issues (people often call them issues of “moderation”, which is wrong! More on this in a different post later). So I wanted to highlight this a bit.

Sidenote — what is integrity? Shorthand it to “hate speech, harassment, misinformation and other harms”, or “the problems of social media that come from users doing bad things to other users”.

Regarding curation: The most subtle proposal in here is around identifying the “idea neighborhoods” that someone might be hanging out in. (The paper calls them echo chambers.) Why? Because “neighborhoods” are an important building block in identifying and fighting targeted harassment. If you know which neighborhood someone normally spends time in, you can be appropriately skeptical of them in times of stress. You can see a basic version of this in action on Reddit: if a certain post in /r/TwoXChromosomes gets a spike in harassing comments, it’s pretty easy to block people who recently posted or commented in /r/mensrights.

(This is fleshed out a bit in the moderation section as well.)
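
For the curious, here’s a rough sketch of that Reddit-style heuristic. The recent_activity helper and its data are made-up stand-ins for whatever a real platform would track about where an account usually spends its time:

```python
# Toy sketch of the "neighborhood" heuristic described above. The history data
# is fake; a real system would look at actual recent posts and comments.

def recent_activity(user: str) -> set[str]:
    """Communities this user recently posted or commented in (made-up data)."""
    fake_history = {
        "alice": {"r/TwoXChromosomes", "r/AskWomen"},
        "troll42": {"r/mensrights"},
    }
    return fake_history.get(user, set())

def likely_brigader(commenter: str, hostile_neighborhoods: set[str]) -> bool:
    """Flag commenters arriving from neighborhoods tied to a harassment spike."""
    return bool(recent_activity(commenter) & hostile_neighborhoods)

# During a spike of harassing comments on a r/TwoXChromosomes thread:
suspects = {"r/mensrights"}
for user in ["alice", "troll42"]:
    print(user, likely_brigader(user, suspects))
```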

On moderation: I’m tempted to block quote the whole thing. It’s all so clear, important, and succinct. And the key ideas to me are in the “friction” section, which is only 3 paragraphs. Summarizing it would take just as long as quoting. Okay, I can’t help myself. Here’s the section on friction (and a little preamble).

The role of moderation isn’t just restricting bad words or racist content. In designing the protocol and reviewing proposals, the conversation around moderation should center around restricting harassment & harm.

In considering the topic, the conversation should be framed under macro norms which are universal to the protocol; meso norms that are shared across certain clients of the protocol; and micro norms that are specific to a specific client.

Friction

It is well documented that our current systems that rely on the virality of user-generated content end up amplifying harmful content – and there is only so much that moderation efforts we tack on can do to mitigate this. In reviewing BlueSky proposals, we must engage with the question of virality and amplification and whether the protocol design avoids this.

Among the beauties and challenges of free flowing online space is the lack of physical boundaries. Traversing “geographies” by jumping from one conversation to another presents no restrictions. However, from a bad actor perspective, this presents an opportunity to scale harassment efforts and disrupt many events at once. Bluesky is an opportunity to “bring in more physics”, designing in friction on the protocol-level as a proactive way to avoid downstream moderation issues. Without getting into the complex issue of identity, increasing the cost of creating a new account, including introducing a monetary cost to start a new account, might be effective.

Enabling users to see which “neighborhood” other users are coming from could help users identify a provocateur and take action themselves. In addition to helping avoid brigading, ways of visibly ‘tagging’ users could help identify “sock-puppet accounts” and make bots easily identifiable. However, visibly tagging users could present the risk of short-circuiting judgments, and so the system should also present opportunities to identify any cross-cutting cleavages – for example by highlighting shared interests between users.

I’d say I couldn’t put it better myself, but, uh, there’s a reason for that. (That is, I feel a lot of ownership of it).