The Boston Globe
Opinion | Nick Obradovich, William Powers, Manuel Cebrian, and Iyad Rahwan

Beware corporate ‘machinewashing’ of AI

It was revealed in March of last year that Cambridge Analytica, a political data-mining firm, swept up the personal information of millions of Facebook users for the purpose of manipulating national elections. Mark Lennihan/AP Photo

Back in the late 1960s and early ’70s, when the fossil fuel industry and other corporate polluters came under fire for harming the environment, the polluters launched massive ad campaigns portraying themselves as friends of the earth. This cynical practice was later dubbed “greenwashing.”

Today we may be witnessing a new kind of greenwashing in the technology sector. Addressing widespread concerns about the pernicious downsides of artificial intelligence (AI) — robots taking jobs, fatal autonomous-vehicle crashes, racial bias in criminal sentencing, the ugly polarization of the 2016 election — tech giants are working hard to assure us of their good intentions surrounding AI. But some of their public relations campaigns are creating the surface illusion of positive change without the verifiable reality. Call it “machinewashing.”


Last year, Google posted a list of seven AI principles, beginning with “Be Socially Beneficial.” Microsoft published “The Future Computed,” a book calling for “a human-centered approach to AI that reflects timeless values,” and launched a program to support developers working to meet humanitarian needs. Germany-based SAP, one of the world’s largest software companies, now has an AI ethics advisory panel that includes a theologian, a political scientist, and a bioethicist.

On seeing these initiatives, it’s natural to applaud. If the most powerful tech companies are on the case, surely these problems will soon be solved.

Or will they? Facebook’s response to the intense public scrutiny it has received since the election has been to treat the problem as a public-relations challenge. After a sell-off of its stock in the wake of the Cambridge Analytica scandal in early 2018, Facebook spent $1.7 million on an ad campaign in subway stations and trains in the Boston area. Its slogan was “The best part of Facebook isn’t on Facebook,” and the accompanying images showed people engaged in healthy, fun offline activities such as hiking and dancing. The message: Facebook is all about making our world a better, more harmonious place. Yet, as The New York Times recently reported, the company had also hired lobbyists and opposition-research firms “to combat Facebook’s critics, shift public anger toward rival companies, and ward off damaging regulation.”


As experts on the societal effects and ethics of AI — a term that broadly refers to all technologies that use decision-making algorithms — we are keenly aware of how much work remains to be done in understanding how this new form of intelligence works once it’s released in the real world.

The tech industry has a long history of humanistic intentions and pronouncements — and in fact is responsible for all kinds of progress. Yet somehow we’ve gotten into the most serious AI crisis since the dawn of these technologies. As with climate change and environmental degradation, if we leave oversight of intelligent machines solely to the companies that build and sell the technologies, we’ll see many more crises in the coming decades.

Why? In a word, capitalism. The tech economy is driven by massive complex enterprises that exist to maximize short-term profits. High-minded rhetoric notwithstanding, serving the best interests of society is not the industry’s primary objective.

To compound the problem, the baleful effects of AI are often rooted in the very algorithms that drive many tech companies’ profit streams. Economists call such societal costs “negative externalities”: the price of buying or selling the product doesn’t reflect the costs the transaction imposes on others in society.


For instance, if people are clicking like crazy on ideologically divisive content served up by personalized algorithms designed to manipulate emotions, it may make both the social media company and the individual user happy in the moment. But it’s inarguably a bad thing for the world. However, those clicks equal money, and when you’re answering to impatient shareholders, greed has an edge over principle.

Modern industry is highly skilled at concealing its true agenda with happy-talk. Greenwashing began when long-simmering concerns about pollution and other environmental threats finally gained prominence following the publication of “Silent Spring,” the landmark book by Rachel Carson. It’s too soon to say whether most of the humanistic rhetoric and initiatives flowing from the tech industry really deserve to be called machinewashing. But as long as the profits keep flowing, the tech giants have little incentive to change the values and practices that drove their success.

Environmentalists learned long ago that if you want to know where big corporations are headed, don’t follow the rhetoric — follow the money. Public grandstanding is much cheaper for corporations than implementing costly but socially beneficial solutions. Executives who make idealistic public pronouncements often act very differently when they’re behind closed doors choosing between profits and the public good.

Imagine if the titans of 1980s Wall Street had brought in ethicists to advise them on humane investment practices. Imparting moral wisdom to the Gordon Gekko generation might have sparked some great after-hours conversations over martinis. But do we seriously believe it would have prevented the systemic risk that Wall Street actually created and the price we all wound up paying for it?

AI, which is still emerging, is simply too powerful a force to entrust entirely to self-interested businesses. So beware of machinewashing. The only way to ensure that today’s technologies evolve in a healthy direction is through thoughtful, truly independent oversight. Some government regulation seems inevitable, as Apple’s Tim Cook and others now concede. But since aggressive regulation tends to stifle innovation, there should also be a role for nongovernmental oversight, perhaps through independent, transparent standards created for this purpose.


There will inevitably be some who view such oversight as a threat to the world’s most vibrant industry. But the environmental oversight that emerged in the last half-century — with both governmental and third-party elements — didn’t bankrupt the affected industries. It changed how they did business (though regrettably, not enough), helped clean up our air and water, and gave birth to a slew of new, genuinely green industries.

AI is the new framework of our lives. We need to ensure it’s a safe, human-positive framework, from top to bottom. If we leave it solely to the corporations, we’ll never get there.

The authors work in the Scalable Cooperation research group at the MIT Media Lab, where Nick Obradovich is a research scientist, William Powers is head of strategic partnerships, Manuel Cebrian is a research scientist, and Iyad Rahwan is an associate professor.