This Is So Much Bigger Than Facebook

Data misuse is a feature, not a bug—and it’s plaguing our entire culture.

Mark Zuckerberg on a stage, being photographed and lit by spotlights (Manu Fernandez / AP)

After five days of silence, Mark Zuckerberg finally acknowledged the massive data compromise that allowed Cambridge Analytica to obtain extensive psychographic information about 50 million Facebook users. His statement, which conceded that Facebook had made mistakes in responding to the situation, wasn’t much of an apology; Zuckerberg and Facebook have repeatedly shown that they have a hard time saying they’re sorry.

For me, Zuckerberg’s statement fell short in a very specific way: He’s treating the Cambridge Analytica breach as a bad-actor problem when it’s actually a known bug.

In the 17-month-long conversation Americans have been having about social media’s effects on democracy, two distinct sets of problems have emerged. The ones getting the most attention are bad-actor problems, in which someone breaks the rules and manipulates a social-media system for their own nefarious ends. Macedonian teenagers create sensational and false content to profit from online ad sales. Disinformation experts plan rallies and counterrallies, calling Americans into the streets to scream at each other. Botnets amplify posts and hashtags, building the appearance of momentum behind online campaigns like #releasethememo. Such problems are the charismatic megafauna of social-media dysfunction. They’re fascinating to watch and fun to study: Who wouldn’t be intrigued by the team of Russians in St. Petersburg who pretended to be Black Lives Matter activists and anti-Clinton fanatics in order to add chaos to the presidential election in the United States? But charismatic megafauna attract all the attention, while smaller organisms, some invisible to the naked eye, can dramatically shift the health of an entire ecosystem.

Known bugs are the set of problems with social media that aren’t the result of Russian agents, enterprising Macedonians, or even Steve Bannon, but seem to simply come with the territory of building a social network. People are mean online, and bullying, harassment, and mob behavior make online spaces unusable for many people. People tend to get stuck in cocoons of unchallenging, ideologically compatible information online, whether these are “filter bubbles” created by algorithms, or simply echo chambers built through homophily and people’s friendships with “birds of a feather.” Conspiracy theories thrive online, and searching for information can quickly lead to extreme and disturbing content.

The Cambridge Analytica breach is a known bug in two senses. Aleksandr Kogan, the Cambridge University researcher who built a quiz to collect data on tens of millions of people, didn’t break into Facebook’s servers and steal data. He used the Facebook Graph API, which until April 2015 allowed people to build apps that harvested data both from people who chose to use the app, and from their Facebook friends. As the media scholar Jonathan Albright put it, “The ability to obtain unusually rich info about users’ friends—is due to the design and functionality of Facebook’s Graph API. Importantly, the vast majority of problems that have arisen as a result of this integration were meant to be ‘features, not bugs.’”
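
To make the mechanism concrete, here is a minimal sketch of the kind of request flow the pre-2015 Graph API allowed. It is not Kogan’s actual code: the token value, the quiz logic, and the output are illustrative assumptions, though the friend-list and friend-data access it relies on was part of the API’s publicly documented design before April 2015.

```typescript
// Hypothetical sketch of a v1.0-era Graph API harvest. Not Kogan's code: the
// token value and field choices are illustrative assumptions, and the
// friend-data permissions this depends on were removed in April 2015.
const ACCESS_TOKEN = "TOKEN_GRANTED_BY_ONE_QUIZ_TAKER";

async function graphGet(path: string): Promise<any> {
  const res = await fetch(
    `https://graph.facebook.com/${path}?access_token=${ACCESS_TOKEN}`
  );
  return res.json();
}

async function harvest(): Promise<void> {
  // One person consents by installing the quiz app...
  const me = await graphGet("me");
  // ...but the old API also exposed that person's entire friend list...
  const friends = await graphGet("me/friends");
  // ...and, with friends_* permissions, data about each friend, none of whom
  // ever installed the app or agreed to anything.
  for (const friend of friends.data ?? []) {
    const likes = await graphGet(`${friend.id}/likes`);
    console.log(`${friend.name}: ${(likes.data ?? []).length} likes collected`);
  }
  console.log(`All of it harvested via one consenting user: ${me.name}`);
}

harvest();
```

Each consenting quiz taker’s friend list multiplied the app’s reach by hundreds of people who never consented to anything, which is how a single quiz could scale into data on tens of millions of users.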

In his non-apology, Zuckerberg claimed Facebook had already taken the most “important steps a few years ago in 2014 to prevent bad actors from accessing people’s information.” But changing the API Kogan used to collect this data is only a small part of a much bigger story.

To be clear, I believe Kogan acted unethically in allegedly collecting this data in the first place, and that giving this data to Cambridge Analytica was an unforgivable breach of research ethics. But Kogan was able to do this because Facebook made it possible, not just for him, but for anyone building apps using the Graph API. When Kogan claims he’s being made a scapegoat by both Cambridge Analytica and Facebook, he has a strong case: Selling data to Cambridge Analytica is wrong, sure, but Facebook knew that people like Kogan could access the data of millions of users. That’s precisely the functionality Facebook advertised to app developers.

Speaking with Laurie Segall on CNN this week, Zuckerberg emphasized that Facebook would investigate other app makers to see whether anyone else was selling psychographic data they had collected through the Graph API. But he didn’t mention that Facebook’s business model is based on collecting exactly this kind of demographic and psychographic information and selling advertisers the ability to target ads to people using that data.

This is a known bug not just for Facebook and other social networks, but for the vast majority of the contemporary web. Like Facebook, Google develops profiles of its users, with information from people’s private searches and tools like Gmail and office applications, to help advertisers target messages to them. As you read this article on The Atlantic, roughly three dozen ad trackers are watching you, adding your interest in this story to profiles they maintain on your online behavior. (If you want to know more about who’s watching you, download Ghostery, a browser extension that identifies and can block these “third-party” trackers.) The Atlantic is not unusual. Most ad-supported websites track their users, as part of agreements that seek to make their ad inventory more valuable.
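
What that tracking looks like under the hood is mundane. The sketch below is a simplified, hypothetical illustration of the kind of snippet a third-party tracker asks a publisher to embed; the tracker domain and parameters are invented, but the pattern, a tiny request to the tracker’s own domain reporting the page you are reading, is how these profiles get built.

```typescript
// Hypothetical illustration of a third-party tracking pixel; the domain and
// query parameters are invented. Because the request goes to the tracker's
// own domain, the tracker's cookie travels with it on every site that embeds
// this snippet, tying all of those pageviews to a single profile of you.
const pixel = document.createElement("img");
pixel.width = 1;
pixel.height = 1;
pixel.style.display = "none";
pixel.src =
  "https://tracker.example.com/pixel?" +
  new URLSearchParams({
    page: location.href,          // the article you are reading right now
    referrer: document.referrer,  // the page that sent you here
  }).toString();
document.body.appendChild(pixel);
```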

I’ve referred to this bargain, in which people get content and services for free in exchange for having persuasive messages psychographically targeted to them, as the “original sin” of the internet. It’s a dangerous and socially corrosive business model that puts internet users under constant surveillance and continually pulls our attention away from the tasks we want to do online and toward the people paying to hijack it. It’s a terrible model that survives only because we haven’t found another way to reliably support most internet content and services, including getting individuals to pay for the things they claim to value.

We become aware of how uncomfortable this model is when Steve Bannon and Cambridge Analytica develop personality profiles of us so they can tailor persuasive messages to our specific personal quirks, but that’s exactly what any competent advertiser is doing, every day, on nearly every site online. If that makes you feel uncomfortable: Good, it should. But the problem is way bigger than Facebook. This is a known bug not just with social networks, but with the contemporary, ad-supported web as a whole.

It’s relatively easy for social networks to address bad-actor problems. For the most part, social networks would be better off without bots artificially promoting posts or fabricated news capturing clicks. Platform interests are aligned with society’s interests as a whole in fighting bad actors.

It’s different with known bugs.

When platforms address them, they run the risk of breaking their business models. It’s hard for YouTube to fix a recommendation engine that leads us toward conspiracy-theory videos without breaking a system that encourages us to surf from one popular video to the next, racking up ad views in the process. It’s hard, though not impossible, to fix harassment and bullying on Twitter when many of Twitter’s most engaged users are opinionated and sharp-tongued and their arguments are some of the most compelling content on the service. And it’s impossible for Facebook to protect us from manipulative advertising targeted to our psychographic profiles when its business model is built on selling this particular form of persuasion.

Platforms won’t fix known bugs on their own, not without external pressure. In his interview with Segall, Zuckerberg acknowledged that it might be time for regulation of Facebook, suggesting standards that force transparency around who’s paying for ads. We should push Facebook, and the web as a whole, to do much more to address this known bug, and others.

Users of the internet have been forced into a bargain they had no hand in negotiating: You get the services you want, and platforms get the data they need. We need the right to opt out of this bargain, paying for services like Facebook or YouTube in exchange for verifiable assurances that our usage isn’t being tracked and that our behavioral data is not being sold. We need an ecosystem that encourages competitors to existing social-media platforms, which means ensuring a right to export our data from existing social networks and building new software that lets us experiment with new services while maintaining our contacts on the ones we already use. We need to treat personally identifiable information less like a resource to be exploited and more like toxic waste, which must be carefully managed, as Maciej Ceglowski has proposed. This may require a digital EPA, as Franklin Foer, Paul Ford, and others have argued; that prospect would be more appealing if the actual EPA weren’t currently being gutted.

Critically, we need the scholars, philanthropists, and policy makers who’ve woken up to the problems of bad actors on the internet to pay attention to the known bugs as well. Addressing the effects of echo chambers, polarization, and psychographic persuasion may not be as sexy as unmasking Russian botnets, but the botnets are merely a symptom of a much larger problem.

Tribalism, manipulation, and misinformation are well-established forces in American politics, all predating the web. But something fundamental has changed. Never before have we had the technological infrastructure to support the weaponization of emotion on a global scale. The people who built this infrastructure have a moral obligation to own up to what they’ve done. On some level, in some way, Zuckerberg must know that. “We have a responsibility to protect your data,” he said in his statement, “and if we can’t then we don’t deserve to serve you.” Nor does he deserve our support if he and his peers don’t address the known bugs that are corroding democracy.

Ethan Zuckerman is a professor at the University of Massachusetts at Amherst, where he directs the Initiative for Digital Public Infrastructure.