UNITED NATIONS — It seems like something out of science fiction: swarms of killer robots that hunt down targets on their own and are capable of flying in for the kill without any human signing off.
But it is approaching reality as the United States, China, and a handful of other nations make rapid progress in developing and deploying new technology that has the potential to reshape the nature of warfare by turning life-and-death decisions over to autonomous drones equipped with artificial intelligence programs.
That prospect is so worrying to many other governments that they are trying to impose legally binding rules through the United Nations on the use of what militaries call lethal autonomous weapons.
“This is really one of the most significant inflection points for humanity,” Alexander Kmentt, Austria’s chief negotiator on the issue, said in an interview. “What’s the role of human beings in the use of force — it’s an absolutely fundamental security issue, a legal issue, and an ethical issue.”
But while the UN is providing a platform for governments to express their concerns, the process seems unlikely to yield substantive legally binding restrictions. The United States, Russia, Australia, Israel, and others have all argued that no new international law is needed for now, while China wants to define any legal limit so narrowly that it would have little practical effect, arms control advocates say.
The result has been to tie the debate up in a procedural knot with little chance of progress on a legally binding mandate anytime soon.
“We do not see that it is really the right time,” Konstantin Vorontsov, the deputy head of the Russian delegation to the UN, told diplomats who were packed into a basement conference room recently at the UN headquarters in New York.
The debate over the risks of AI has drawn attention in recent days with the battle over control of OpenAI, perhaps the world’s leading AI company, whose leaders appeared split over whether the firm is taking sufficient account of the dangers of the technology. And last week, officials from China and the United States discussed a related issue: potential limits on the use of AI in decisions about deploying nuclear weapons.
Against that backdrop, the question of what limits should be placed on the use of lethal autonomous weapons has taken on new urgency, and for now has come down to whether it is enough for the UN simply to adopt nonbinding guidelines, the position supported by the United States.
“The word ‘must’ will be very difficult for our delegation to accept,” Joshua Dorosin, the chief international agreements officer at the State Department, told other negotiators during a debate in May over the language of proposed restrictions.
Dorosin and members of the US delegation, which includes a representative from the Pentagon, have argued that instead of a new international law, the UN should clarify that existing international human rights laws already prohibit nations from using weapons that target civilians or cause a disproportionate amount of harm to them.
Pentagon officials have made it clear they are preparing to expansively deploy autonomous weapons.
Deputy Defense Secretary Kathleen Hicks said this summer that the military will “field attritable, autonomous systems at scale of multiple thousands” in the coming two years, arguing that the push to compete with China’s own investment in advanced weapons necessitates that the United States “leverage platforms that are small, smart, cheap, and many.”
The concept of an autonomous weapon is not entirely new. Land mines — which detonate automatically — have been used since the Civil War. The United States has missile systems that rely on radar sensors to autonomously lock on to and hit targets.
What is changing is the introduction of AI that could give weapons systems the capability to make decisions themselves after taking in and processing information.
The United States has already adopted voluntary policies that set limits on how AI and lethal autonomous weapons will be used, including a Pentagon policy revised this year called “Autonomy in Weapons Systems” and a related State Department “Political Declaration on Responsible Use of Artificial Intelligence and Autonomy,” which it has urged other nations to embrace.
The US policy statements “will enable nations to harness the potential benefits of AI systems in the military domain while encouraging steps that avoid irresponsible, destabilizing, and reckless behavior,” said Bonnie Denise Jenkins, a State Department undersecretary.
The Pentagon policy prohibits the use, or even the development, of any new autonomous weapon unless it has been approved by top Defense Department officials. Such weapons must be operated in a defined geographic area for limited periods. And if the weapons are controlled by AI, military personnel must retain “the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.”
At least initially, human approval will be needed before lethal action is taken, Air Force generals said in interviews.
Thomas X. Hammes, a retired Marine officer who is now a research fellow at the Pentagon’s National Defense University, said in an interview and a recent essay published by the Atlantic Council that it is a “moral imperative that the United States and other democratic nations” build and use autonomous weapons.
He argued that “failing to do so in a major conventional conflict will result in many deaths, both military and civilian, and potentially the loss of the conflict.”
Some arms control advocates and diplomats disagree, arguing that AI-controlled lethal weapons that do not have humans authorizing individual strikes will transform the nature of warfighting by eliminating the direct moral role that humans play in decisions about taking a life.
These AI weapons will sometimes act in unpredictable ways, and they are likely to make mistakes in identifying targets, like driverless cars that have accidents, these critics say.
The new weapons may also make the use of lethal force more likely during wartime, since the military launching them would not be immediately putting its own soldiers at risk, or they could lead to faster escalation, the opponents have argued.
Arms control groups such as the International Committee of the Red Cross and Stop Killer Robots, along with national delegations including Austria, Argentina, New Zealand, Switzerland, and Costa Rica, have proposed a variety of limits.
Some would seek a global ban on lethal autonomous weapons that explicitly target humans. Others would require that these weapons remain under “meaningful human control” and be used only in limited areas for specific amounts of time.
Kmentt, the Austrian diplomat, conceded in an interview that the UN has had trouble enforcing existing treaties that set limits on how wars can be waged. But there is still a need to create a legally binding standard, he said.
“Just because someone will always commit murder, that doesn’t mean that you don’t need legislation to prohibit it,” he said. “What we have at the moment is this whole field is completely unregulated.”
This article originally appeared in The New York Times.