Opinion

DANTE RAMOS

Facebook needs more human eyeballs

AP Photo/Eric Risberg
Mark Zuckerberg gives the keynote address at the F8 Facebook Developer Conference in San Francisco.

On their own, computers don’t know that it’s bad to treat racists or anti-Semites as just another niche marketing demographic. But Facebook and Google aren’t deploying enough human eyeballs to prevent the misuse of their ad systems. Instead, they’ve been making money off it.

Earlier this month, ProPublica’s Julia Angwin and two colleagues reported that Facebook was letting advertisers single out people interested in the Nazi Party and in subjects like “Jew hater,” “how to burn Jews,” and “history of ‘why Jews ruin the world.’” Using Facebook’s online ad-buying system, ProPublica paid the social-networking goliath $30 to target an ad to people in those categories; the nonprofit news organization’s (innocuous) spot was approved in 15 minutes.

The next day, Alex Kantrowitz at BuzzFeed reported that Google allows advertisers to zero in on people who type search terms such as “blacks destroy everything.” And not only that: the system suggested similar phrases — “blacks ruin everything,” “black people ruin neighborhoods” — that its clients might also want to buy ads against.

Upon learning about the problem, Facebook and Google limited ad sales based on the phrases in question. Facebook similarly vowed to mend its ways last year, when ProPublica found that would-be landlords could keep housing ads from being shown to black and Hispanic users. After disclosing this month that, during the 2016 campaign, “inauthentic” Russian accounts had spent $100,000 or more on Facebook ads — reaching as many as 70 million people — chief security officer Alex Stamos insisted the company was “constantly updating” its “efforts in this area.”

Thanks?

We’ve too readily accepted the notion that abuses like these are like the weather: There’s not much anybody can do about them beforehand. We act as if tech companies are powerless to stop circulating dodgy information and running dubious ads — and to stop taking the money associated with them — until somebody points out there’s a problem.

Traditional media outlets don’t get such a pass. Employees of newspapers that carry apartment ads have to parse them for violations of fair-housing laws. Managers at television stations are supposed to verify that political ads don’t contain false or defamatory claims. Federal election laws require the people who produce and air TV campaign ads to disclose who paid for them.

And because these ads are visible to everyone, violations of legal and informal norms reliably draw repercussions from readers, advocacy groups, and other advertisers. Human judgment is highly useful. Under old-media norms, Russian trollbots would never approach TV station managers with shadowy ads promoting racial strife — or casually accusing a major-party presidential nominee of murder. (“Killary”? As others have pointed out, the continued existence of Anthony Weiner proves that Hillary Clinton doesn’t have people dispatched for political reasons.)

Facebook and Google, in contrast, have consciously decided to offer ads, via automated sales platforms, to algorithmically identified interest groups. Because of narrow targeting, people who might otherwise spot a misleading ad, and raise a ruckus about it, may never see it in the first place.

And even if someone did, tech firms would face few consequences beyond bad PR. With the exception of child pornography, material connected with other federal crimes, and stolen intellectual property, tech companies that act as intermediaries aren’t liable for the content they convey. “If they’re just passing along content from someone else, they can’t be held liable,” says Harvard Law School professor Rebecca Tushnet.

In the early days of the Internet, Congress and successive presidential administrations didn’t want to burden young tech firms with matters of social responsibility or regulatory compliance. But today, there’s no danger of smothering Facebook or Google in their infancy. In the second quarter of this year alone, Facebook made $451,205 in revenue per employee and a staggering $188,498 in profit per employee, the tech news website Recode recently reported. By comparison, Walmart — that supposed embodiment of corporate rapacity — netted a mere $1,370 per employee.

Tech companies enjoy a halo effect, because they provide Internet users with free, ingenious, and hugely useful tools. But their real business is monetizing the data we give them, and selling ads against content that other people produce. They’ve been able to do so while fobbing off responsibility for whether the material circulating on their platforms is false, libelous, or deeply corrosive to society.

ProPublica is trying to fill the gap. This month, it launched a software tool that will collect political ads from Facebook, giving interested voters a chance to see ads that Facebook’s own algorithms may never show them. It’s telling, though, that the task of monitoring a zillion-dollar tech platform has fallen to a scrappy news nonprofit.

Fortunately, the Federal Election Commission is starting to ask questions. Mark Warner, a Democratic senator from Virginia, may propose legislation stiffening disclosure requirements for ads on social media. Tushnet, the law professor, proposes a gauzier solution: better citizenship. We should teach Internet users not just to read, but to read well.

To these suggestions I’d add one more thing: Americans shouldn’t just accept inanimate algorithms as the scapegoat for an epidemic of bad information. Ultimately, the people at Facebook and Google need to accept their share of responsibility for how they’re shaping our public discourse — and deploy enough human judgment to keep it from spiraling further downward.

Dante Ramos can be reached at dante.ramos@globe.com. Follow him on Facebook: facebook.com/danteramos or on Twitter: @danteramos.