Intelligence is not ‘artificial’: humans are in charge
Google CEO Sundar Pichai gave a surprising interview recently. Asked by CNN’s Poppy Harlow about a Brookings report predicting that 80 million American jobs would be lost by 2030 because of artificial intelligence, Pichai said, “It’s a bit tough to predict how all of this will play out.” Pichai seemed to say that because the future is uncertain, there is no sense in solving problems that may never occur. He added that Google could deal with any disruption caused by its technology by “slowing down the pace” of development, as if Google could manage disruption merely by pacing itself, and as if no disruption were imminent.
The term “artificial intelligence” often prompts this kind of hand-waving. It has built-in deniability: it has no definite meaning, and its impact always seems to lie in an uncertain future. It also implies that the intelligence belongs to machines, as if humans had no control. It distances Google and others from responsibility.
Artificial intelligence is here now — it is the software and hardware that surrounds everyone on earth. Humans are the architects, refining solutions of all kinds to make them perform intelligently. Designing systems that serve useful purposes is the intelligence; the rest is rote. It will be a long time before machines can identify new purposes and adapt solutions to them. For now, and for some time, machines will have considerable human help.
The term “advancing intelligence” might keep technology CEOs more accountable. Replacing “artificial” with “advancing” signals that the intelligence is human, not machine, and that it is guided by the people working at technology companies. It widens the scope to the thousands of technologies, collectively intelligent, on which people already depend, and it makes clear that the future is a function of technology companies’ roadmaps, by which their employees are (intelligently) building products that serve people in a multitude of ways. It also emphasizes advancement: social utility and its impact, not only the apparent aptitude of the machine.
Google’s CEO, then, could have acknowledged that Google is already a world leader in AI (advancing intelligence). AI is not only robots and autonomous vehicles, but information services that extend human intelligence — search, voice-recognition, mapping, news, and video services (YouTube), sharable documents, cloud storage, mobile access (Android), to name a few. With a mission statement “to organize the world’s information and make it universally accessible and useful,” Google has made a large fraction of the world’s information available to people with search, handling 6 billion requests every day.
Pichai could also have boldly declared that disruption is already happening. A crisis of public mistrust in Internet technology providers, including Google, is in full swing, especially around the lifeblood of the advancing future: data and information.
The public’s tolerance for data collection, or electronic surveillance, is reaching a limit — triggered, notably, by the failure of technology companies to protect them. People are calling for protection of their privacy; safety from misuse of their data and the services built upon that data; and a guarantee of cybersecurity. Without trust in technology companies to protect them, people should fear the future.
Pichai should act on his word and take charge of a disruptive crisis that has already arrived. The public needs a plan for addressing their mistrust of the Internet as a data and communication platform — the result of advancing intelligence of which Google is at the center.
Google, Facebook, Microsoft, and others must take leadership now in innovating ways to assure the protection, safety, and security of the Internet’s most precious cargo: information and data. In doing so, they would build trust that will become capital in the future, by investing, for example, in tools for data and algorithmic transparency, in credentialing of data purchasers, in ways to query the source of any piece of information, and in steady public education.
All major technology companies should see a duty, both moral and fiduciary, to act now. Their share value rests on data, on information, and on the safety and well-being of the billions of people who have grown dependent upon them. And given the stakes for citizens, the public will need regulation to enforce that duty, with or without voluntary corporate action.
By investing in technologies of trust today, major technology companies can demonstrate they accept responsibility for advancing intelligence safely, attentive to public concern. Then as other disruptions materialize — or, better yet, before they do — the public may trust them with their well-being.
Steve Johnson was an early Internet pioneer, holding an image compression patent that enabled the first online streaming media deployed by America Online, in 1994. He is now a senior fellow at the Mossavar-Rahmani Center for Business and Government, Harvard Kennedy School.