EDITORIAL

Saving US elections from Facebook

Asking Facebook to moderate content is a fine idea. But saving democracy from disinformation requires more fundamental change.


With the 2020 election just eight months away, a big problem remains unresolved: what to do about falsehoods on Facebook.

Lawmakers and the public have good reason to scrutinize the social media company for its outsized role in spreading political disinformation from foreign and domestic sources. And if the company can’t quickly take stronger measures than it has to date, Congress should constrain the data-targeting practices that make Facebook a particularly useful tool for electoral manipulation.

Even Facebook executives admit that disinformation sprees on its platforms have poisoned election campaigns in the United States, the Philippines, and Brazil, among other countries, and fueled the Myanmar government’s massacres of Rohingya Muslims. In response, the company has cracked down on “bots” and other fake accounts and instituted new methods for verifying the legitimacy of advertisers. It has funded fact-checks on dubious content. It’s also setting up an oversight board that will weigh in on difficult questions about how to moderate content. This week it wisely removed ads in which the Trump campaign misled people about the census.

Even so, the site remains open to disruptive manipulation. For example, Facebook says it will take down videos that use artificial intelligence software to mislead people “into thinking that a subject of the video said words that they did not actually say.” But it will still allow lower-tech methods of fakery. Right before Election Day, your Facebook feed might show you a candidate appearing to be ill or in the company of shady characters.

Given the prospect of hoaxes like that, Senator Michael Bennet of Colorado has given Facebook CEO Mark Zuckerberg until April 1 to explain whether the company will “adopt stronger policies to limit abuses of its platforms.” Bennet recently called the company’s efforts to reduce disinformation worldwide “inadequate.”

Renee DiResta, research manager at the Stanford Internet Observatory, says three attributes make an Internet platform especially vulnerable to disinformation: (1) a very large audience; (2) personalized targeting functions, which help propaganda and other scams be seen by the people likeliest to be swayed; and (3) algorithms that can be gamed to help bad-faith content go viral, which tends to strengthen the belief that it’s true.

YouTube and Twitter also have these qualities, but Facebook is in a different league. It offers advertisers the most sophisticated ways of slicing and dicing the audience, “showing each of us a different version of the truth,” in the words of Facebook’s former head of global elections integrity.

This makes it especially dangerous that Facebook lets political candidates run ads with obvious falsehoods. Broadcast TV networks, unlike newspapers and cable TV channels, must allow such ads under federal law governing the free airwaves. But at least that happens in the open, allowing an opponent to respond. Facebook makes it possible to target ads to as few as 100 people. A candidate maligned in that format might never notice or have a chance to respond.

Simply demanding that Facebook remove more misleading material will go only so far, in part because the company’s leadership is terrified of upsetting partisans with the power to regulate it. And Zuckerberg is right to point out that there isn’t a satisfying way to clearly define the “political” content worth taking down entirely.

A cleaner yet more comprehensive change would be to limit the targeting of political ads, making it harder to sneak deceptive material into the information ecosystem. Google has taken that step. Twitter, which no longer allows paid ads by political candidates, still allows ads on political “causes” such as climate change, but only if they aren’t targeted at a granular level: they can be aimed at people based on their state but not on their ZIP code.

Even Facebook employees — at least 250 of them — advocated that change in a letter to Zuckerberg in October. Micro-targeting of political ads “allows politicians to weaponize our platform,” they wrote. Just as Facebook restricts the targeting of ads for housing and financial services because of the history of discrimination in such venues, “we should extend similar restrictions to political advertising,” they said.

Keep in mind that Facebook didn’t unwittingly find itself playing an enormous role in the political life of entire nations. It set out to use its algorithms for exactly that purpose. In 2008, the social network put up a clickable button that let members signify whether they had voted. On Election Day 2010, Facebook used this button in an experiment on 61 million user accounts and found that people became likelier to vote if they saw that their friends had. That single test, researchers determined, increased nationwide voter turnout by at least 340,000 people.

In 2012, Facebook used its tremendous power to customize the experience of individual users to sway the electorate in another way. Over a three-month span, it put more news stories atop the feeds of 2 million users. This apparently made them likelier to vote as well.

In other words, long before the site energized disinformation, hoaxes, and other political scams in 2016, its leaders knew just how dramatically Facebook can affect elections. That’s why it is long overdue for the company, which has more than 2 billion users worldwide, to take more responsibility.

“In general,” a Facebook data scientist said in 2012, “we are committed to being part of the democratic process.” If that’s still true, Facebook should put a moratorium on micro-targeting political ads now. Congress should also look more closely at the ways that all Internet companies can exploit consumer data, and throw sand in the gears of their algorithmic machines as necessary.


Editorials represent the views of the Boston Globe Editorial Board.