OPINION | SUSAN HOLCOMB

When tech execs would rather just not know

Buried in a New York Times report on Facebook last month was a single line that provides an important glimpse into how some corporate managers think.

When Facebook chief operating officer Sheryl Sandberg admonished security chief Alex Stamos for looking into Russian activity on the social-media platform, she was specifically worried that Stamos’s investigation would make the company vulnerable. As the Times story put it: “Looking into the Russian activity without approval, she said, had left the company exposed legally.”

This perspective reflects a certain type of executive thinking. Most large companies have someone in the boardroom who takes on the work of assessing and limiting the company’s legal exposure. If someone at the company opens a Pandora’s box, the company may be held responsible for the consequent evils released upon the world. Risk-averse executives will tend to recommend that employees keep such boxes closed.

Still, blissful ignorance can only go so far for executives at, say, a food-processing firm, an automaker, or a toymaker. If such a company actively avoids even knowing about contamination sources, faulty parts, or choking hazards for months or years at a time, human life could be at risk.

Yet for tech firms whose data-driven products exist mainly in cyberspace, the potential dangers of a see-no-evil strategy are far less tangible, and the pressure to confront a problem such as Russian meddling is therefore less immediate. (Full disclosure: I have worked in the tech industry, including as head of data science for a smartwatch company.)

The irony is that, contrary to Sandberg’s fear of legal exposure, federal law specifically protects Facebook and other platforms from liability for user-generated content — for instance, for messages and postings by Russian-linked accounts. Under Section 230 of the 1996 Communications Decency Act, companies such as Facebook are treated merely as hosts for what their users post. As a result, tech platforms do not have the editorial responsibility that traditional publishers do. You can sue a newspaper, radio station, or magazine for defamation, for example, but you can’t (in most cases) sue Facebook.

While this legislation appears to relieve Facebook of the responsibility of managing what outside users do on the company’s site, law professor Jeff Kosseff argues that Section 230 was designed to produce the opposite effect. Kosseff teaches cybersecurity law at the US Naval Academy and is the author of the forthcoming book “The Twenty-Six Words That Created the Internet,” an analysis of Section 230 and its legacy. By freeing tech platforms from liability, Kosseff notes, the provision allows them to investigate controversial content without fear of legal repercussions.

“Without Section 230,” Kosseff explains, “if you’re a content distributor, you are liable for what you knew or should have known.” Such liability, he argues, “would provide companies with a disincentive for moderating content.” To counter that disincentive, the law is supposed to give companies like Facebook legal cover: They can look into Russian activity without fear of legal repercussions.

The federal law, it appears, wasn’t enough to allay Sandberg’s fears. Before a company’s representatives look into suspicious postings, after all, they don’t know what they’re going to find. Who knows what other problems might be lurking? If users are willing to tolerate a certain amount of harassment, disturbing content, or even propaganda in their social-media ecosystem — and clearly they are — risk-averse company managers might prefer not to ask too many questions.

Plausible deniability is a common defense in corporate America against legal exposure, and it colors how technology companies operate. For example, when Uber acquired Otto, the self-driving car company founded by Anthony Levandowski, general counsel Salle Yoo ordered an outside report on Otto as part of the due diligence process. This report revealed that Levandowski had possessed thousands of files from competitor Waymo — a red flag that could expose Uber to allegations of intellectual-property infringement. However, Business Insider reported earlier this year that neither Yoo nor Uber CEO Travis Kalanick ever read the report, and Uber acquired Otto in spite of the report’s findings.

Not reading a report can be a strategic choice. By intentionally walling themselves off from the report, top Uber executives kept themselves “from discovering confidential technology from Levandowski and using that information to make business decisions,” as Business Insider put it. Of course, this approach backfired, and the unread report became a pivotal piece of documentation in a lawsuit filed by Waymo — a suit that Uber settled for $245 million earlier this year. Nevertheless, the incident serves as another example of how executives try to gain advantage through claims of ignorance.

Meanwhile, mathematician Cathy O’Neil points out that fears of legal exposure encourage many companies to treat the algorithms that run their businesses as black boxes. O’Neil is the author of the book “Weapons of Math Destruction” and founder of ORCAA, a company that audits algorithms for fairness. In October, O’Neil wrote in Bloomberg that once a company’s general counsel is in the room or on the phone, the conversation shifts from improving an algorithm’s fairness to mitigating the company’s risk. Typical questions, O’Neil says, include: “What if you find a problem with our algorithm that we cannot fix? And what if we someday get sued for that problem and in discovery they figure out that we already knew about it?” Once these questions are on the table, O’Neil wrote, “I never get to the third [phone] call.” This desire for plausible deniability, O’Neil argued, prevents companies from shining a light on how their own algorithms are working.

This problem, unfortunately, is nothing new. Five years ago, when former National Security Agency contractor Edward Snowden revealed that a system called Prism had obtained user data from a variety of tech companies, officials at Microsoft, Yahoo, Facebook, Google, Apple, and Dropbox all released statements denying their involvement with the program. The Guardian reported that several senior executives “insisted that they had no knowledge of Prism or of any similar scheme.” Of Prism, one executive said, “If they are doing this, they are doing it without our knowledge.”

At the time, experts reasoned that these companies had likely intentionally kept these executives in the dark. Scott Cleland, a consultant who has testified before Congress on Internet policy issues, told USA Today that the companies in question “would have broadly delegated authority for their company’s NSA compliance to a very small number of individuals.”

As for the rest, they were likely told nothing. “The leadership wants and needs to have reasonable and plausible deniability for times exactly like this,” Cleland explained.

It’s striking that, at a moment when tech giants are collecting vast quantities of data on users, executives manage to know so little about how their own companies are operating.

What would make them shift their approach? Public pressure has a history of forcing change within technology companies. The public’s response to Uber’s series of scandals helped drive the ouster of Kalanick as CEO last year. As for Facebook, the company’s stock price dropped by nearly 10 percent in the days after the Times story broke. The pressure to create value for shareholders may indeed provide a powerful counterweight to executives’ willful ignorance.

Still, relying on public pressure to force tech platforms to change creates a dynamic in which these companies are necessarily reactive. Executives may also need better legal inducements, even beyond Section 230, to investigate and fix problems instead of avoiding any knowledge of them. We can’t expect company managers to act in the public interest out of the goodness of their hearts, and companies should not be incentivized to let problems grow and grow in secret — until they finally explode.


Susan Holcomb is a technology writer.