I’m often asked about the controversy over Google’s role in projects for the US Department of Defense. Concerned over an artificial intelligence effort called Project Maven, 5,000 of the firm’s employees declared in a letter last year to CEO Sundar Pichai that “Google should not be in the business of war.” Two months later, Google decided not to renew the contract. I did not agree with the protest letter, nor with management’s initial withdrawal. But I was pleased to hear Pichai tell Congress a few months later: “I am proud to say we do work, and we will continue to work, with the [US] government to keep our country safe and secure.”
At a time when other tech companies are facing internal pressure over their work for the Defense Department — and when young engineers and computer scientists are sincerely grappling with these tough questions — I want to share with them my perspective as both a scientist and former secretary of defense.
As a scientist by background, I share your commitment to ensuring that technology is used for moral ends. You’re very much in the tradition of the Manhattan Project scientists who created that iconic “disruptive” technology: atomic weapons.
These physicists and public servants were proud of their invention because it saved lives by bringing a swift end to World War II. It then deterred another, even more destructive war between superpowers. But they also assumed responsibility for the terrible dangers of nuclear war. So they worked to reduce this risk by developing safety locks on bombs, effective command-and-control systems, arms control and nonproliferation regimes, and systems for missile defense and civil defense.
Several of these scientists became my mentors. Their example informed my earliest work in the Pentagon as a physicist helping to shape nuclear programs, and later leading the Nunn-Lugar effort that safely denuclearized the former Soviet states.
Unfortunately, the tech world and national defense often seem at odds today. This mistrust is understandable, but it’s not sustainable — and it’s not good for America. Let me share why I think your engagement with the DOD is important.
First, while national defense may not be the main work you do, it’s an inescapable necessity. Shouldn’t people like you, who combine expertise with commitment to moral values, shape this tough arena? AI is an increasingly important military tool. Will it be crude, indiscriminate, and needlessly destructive? Or will it be controlled, precise, and designed to follow US laws and international norms? Refusing to help design these systems means ceding the assignment to others who may not share your skill or moral center.
In 2012, I issued the Pentagon’s first policy governing the use of AI in weapon systems, which established rules for its ethical deployment. My regulations, which are still in force today, require human involvement in any decision to use lethal force.
You will appreciate the technical complexity this involves. The policy does not mandate a “person in the loop” in a literal sense, since this is infeasible even with today’s simple computer-aided weapons. A guided missile homes in on its target with rapid recalculations from flight data; a commander can’t check these calculations in real time. Instead, avoidance of error is designed into the weapon and verified during rigorous testing, and launch decisions are then made by trained personnel. Everyone is held responsible and accountable in a transparent investigation of any error.
The AI-aided weapons of the future will intensify this challenge, making it more complex to determine how and why a firing decision was made. A reasonable level of traceability — strong enough to satisfy the vital standard of human responsibility — must be designed into those algorithms. The integrity of the huge underlying data sets must also be checked. This is complex work, and it takes specialists like you to ensure it’s done right.
Second, remember that Google, like other global companies, works in and for adversarial nations such as China. It’s ironic that, shortly after the Project Maven decision, a leaked report revealed that Google had been secretly working on the Dragonfly project, a search engine compliant with China’s censorship. The company announced plans to end its involvement, but let’s not kid ourselves about the deeper issue: Working in and for China effectively means cooperating with the People’s Liberation Army. Are you really more morally concerned about working with DOD than with the PLA? A glance at China’s human rights record should make clear that strengthening Beijing’s hand is not a formula for making a better world.
Third, and perhaps most important, Google itself and most of your colleagues are American. Your way of life, the democratic institutions that empower you, the laws that enable the operations and profits of the corporation, and even your very survival rely on the protection of the United States. Surely you have a responsibility to contribute as best you can to the shared project of defending the country that has given Google so much.
I applaud you and your colleagues for taking seriously the moral aspect of your work. Now I urge you to think more broadly about it, and to get fully engaged in the work it takes to make our world safer, freer, and more peaceful. America’s military brings our values as well as our power to the battlefield. All of us must work to ensure that this will always be the case.
Former secretary of defense Ash Carter is the director of Harvard Kennedy School’s Belfer Center for Science and International Affairs and the author of “Inside the Five-Sided Box: Lessons from a Lifetime of Leadership in the Pentagon.”