The Boston Globe

Real ethics for artificial intelligence

SAN FRANCISCO — For years, science-fiction moviemakers have been making us fear the bad things that artificially intelligent machines might do to their human creators. But for the next decade or two, our biggest concern is more likely to be that robots will take away our jobs or bump into us on the highway.

Now five of the world’s largest tech companies are trying to create a standard of ethics around the creation of artificial intelligence. While science fiction has focused on the existential threat of AI to humans, researchers at Google’s parent company, Alphabet, and those from Amazon, Facebook, IBM, and Microsoft have been meeting to discuss more tangible issues, such as the impact of AI on jobs, transportation, and even warfare.

Tech companies have long overpromised what artificially intelligent machines can do. In recent years, however, the AI field has made rapid advances in a range of areas, from self-driving cars and machines that understand speech, like Amazon’s Echo device, to a new generation of weapons systems that threaten to automate combat.

The specifics of what the industry group will do or say — even its name — have yet to be hashed out. But the basic intention is clear: to ensure that AI research is focused on benefiting people, not hurting them, according to four people involved in the creation of the industry partnership who are not authorized to speak about it publicly.

The importance of the industry effort is underscored in a report issued Thursday by a Stanford University group funded by Eric Horvitz, a Microsoft researcher who is one of the executives in the industry discussions. The Stanford project, called the One Hundred Year Study on Artificial Intelligence, lays out a plan to produce a detailed report on the impact of AI on society every five years for the next century.

One main concern for people in the tech industry is that regulators could step in and impose rules on their AI work. So they are trying to create a framework for a self-policing organization, though it is not yet clear how that would function.

“We’re not saying that there should be no regulation,” said Peter Stone, a computer scientist at the University of Texas at Austin and one of the authors of the Stanford report. “We’re saying that there is a right way and a wrong way.”

While the tech industry is known for being competitive, there have been instances when companies have worked together when it was in their best interests. In the 1990s, for example, tech companies agreed on a standard method for encrypting e-commerce transactions, laying the groundwork for two decades of growth in Internet business.

The authors of the Stanford report, which is titled “Artificial Intelligence and Life in 2030,” argue that regulating AI as a whole would be unworkable. “The study panel’s consensus is that attempts to regulate AI in general would be misguided, since there is no clear definition of AI (it isn’t any one thing), and the risks and considerations are very different in different domains,” the report says.

One recommendation in the report is to increase awareness of, and expertise in, artificial intelligence at all levels of government, Stone said. It also calls for increased public and private spending on AI.

“There is a role for government, and we respect that,” said David Kenny, general manager for IBM’s Watson artificial intelligence division. The challenge, he said, is “a lot of times policies lag the technologies.”

A memorandum is being circulated among the five companies with a tentative plan to announce the new organization in the middle of September. One of the unresolved issues is that Google DeepMind, an Alphabet subsidiary, has asked to participate separately, according to a person involved in the negotiations.

The AI industry group is modeled on a similar human rights effort known as the Global Network Initiative, in which corporations and nongovernmental organizations are focused on freedom of expression and privacy rights, according to someone briefed by the industry organizers but not authorized to speak about it publicly.

Separately, Reid Hoffman, a founder of LinkedIn who has a background in artificial intelligence, is in discussions with the Massachusetts Institute of Technology Media Lab to fund a project exploring the social and economic effects of artificial intelligence.

Both the MIT effort and the industry partnership are trying to link technology advances more closely to social and economic policy issues. The MIT group has been discussing the idea of designing new AI and robotic systems with “society in the loop.”