opinion | Stuart Russell

Ban lethal autonomous weapons

Pep Montserrat for the Boston Globe

LETHAL AUTONOMOUS weapons — robots that can select, attack, and destroy targets without human intervention — have been called the third revolution in warfare, after gunpowder and nuclear arms. While some ridicule the notion of killer robots as “science fiction,” more knowledgeable sources such as the British Ministry of Defence say they are “probably feasible now.” We are not talking about cruise missiles or remotely piloted drones, but about, for example, flying robots that search for human beings in a city and eliminate those who appear to meet specified criteria.

For decades, scientists in artificial intelligence have pursued fundamental research on the nature of intelligence, with many benefits for humanity. At the same time, the potential for military applications has been obvious. Two programs of the US Department of Defense — named FLA and CODE — provide hints of what the major powers have in mind. The FLA project will program tiny quadcopters to explore and maneuver unaided at high speed in urban areas and inside buildings. CODE aims to develop teams of autonomous aerial vehicles carrying out “all steps of a strike mission — find, fix, track, target, engage, assess” in situations in which enemy signal-jamming makes communication with a human commander impossible. The manager of the CODE program described the goal as building systems that will behave “as wolves hunt[ing] in coordinated packs.”

The United Nations has held a series of meetings on autonomous weapons, and within a few years there could be an international treaty limiting or banning autonomous weapons, as happened with blinding laser weapons in 1995.

Up to now, the primary technical issue has been whether autonomous weapons can meet the requirements of international humanitarian law, which governs attacks on humans in times of war. The 1949 Geneva Convention requires any attack to satisfy three criteria: military necessity; discrimination between combatants and noncombatants; and proportionality between the value of the military objective and the potential for collateral damage. In addition, the Martens Clause, added in 1977, bans weapons that violate the “principles of humanity and the dictates of public conscience.” In April, Germany said that it “will not accept that the decision over life and death is taken solely by an autonomous system,” while Japan stated that it “has no plan to develop robots with humans out of the loop, which may be capable of committing murder.” Meanwhile, the United States, despite an official policy prohibiting fully autonomous weapons, argues that a treaty is unnecessary.

On the question of whether machines can judge military necessity, combatant status, and proportionality, the current answer is certainly no: artificial intelligence systems are incapable of exercising the required judgment. Many say that as the technology improves, it will eventually reach a point where the superior effectiveness and selectivity of autonomous weapons can actually save civilian lives compared with the use of human soldiers.

This argument is based on the assumption that, after the advent of autonomous weapons, the specific killing opportunities — numbers, times, places, circumstances, victims — will be exactly those that would have occurred with human soldiers. This is rather like assuming that cruise missiles will only be used in exactly those settings where spears would have been used in the past. Obviously, the assumption is false.

Autonomous weapons are completely different from human soldiers and would be used in completely different ways — as weapons of mass destruction, for example. Moreover, even if ethically adequate robots were to become available, there is no guarantee they would be used ethically.

Another line of reasoning used by proponents of autonomous weapons appeals to the importance of retaining “our” freedom of action — where “our” usually refers to the United States. Of course, the consequence of the United States retaining its freedom to develop autonomous weapons is that all other nations will develop those weapons too. Insisting on unfettered freedom of action in international relations is like insisting on the freedom to drive on both sides of the road: If everyone insists that they should have such freedom, the roads will be useless to everyone. When, in 1969, the United States took the unprecedented unilateral decision to renounce biological weapons — a decision that was pivotal in bringing about the biological weapons treaty — the motivation was self-defense. A report commissioned by then-President Nixon had argued persuasively that an arms race in biological weapons would lead many other nations to develop capabilities that would eventually threaten US security. Similar arguments apply to autonomous weapons; a treaty is the only known mechanism to prevent an arms race and the emergence of large-scale weapons manufacturing capabilities.

Of course, treaties are not foolproof. Violations may occur, and some argue that a treaty preventing “us” (usually the United States) from developing a full-scale lethal autonomous weapons capability will expose “us” to the risk of defeat by those who violate it. All countries need to protect their national security, but this concern is in fact an argument for a treaty, not against one. Yes, there will be nonstate actors who modify pizza-delivery drones to drop bombs. The concern that a military superpower such as the United States could be defeated by small numbers of homemade, weaponized civilian drones is absurd. But some advanced future military technology, produced in huge numbers, might present a threat; preventing such developments is the purpose of a treaty.

In late July, more than 2,800 scientists and engineers from the artificial intelligence and robotics community signed an open letter calling for a ban on lethal autonomous weapons. Without it, we fear there will be an arms race in autonomous weaponry whose outcome can only be catastrophic. Where, exactly, will this arms race lead us?

Current and future developments in robotics and artificial intelligence will be more than adequate to support superior tactical and strategic capabilities for autonomous weapons. They will be constrained only by the laws of physics. For instance, as flying robots become smaller, they become cheaper, more maneuverable, and much harder to shoot down, but their range and endurance also decrease, and they cannot carry heavy missiles.

How can a tiny flying robot, perhaps the size of an insect, kill or incapacitate a human being? Human ingenuity will play a role. The two most obvious solutions — injecting a neurotoxin and blinding with a laser beam — are banned under existing treaties. It is legal, however, to deliver a one-gram shaped charge that suffices to puncture the human cranium and project a hypersonic stream of molten metal through the brain. Alternatively, the robot can easily shoot tiny projectiles through the eyeballs of a human from 30 meters. Larger vehicles can deliver microrobots to the combat zone by the million, providing lethality comparable to that of nuclear weapons. Terrorists will be able to inflict catastrophic damage on civilian populations, while dictators can maintain a constant and visible threat of immediate death. In short, humans will be utterly defenseless. This is not a desirable future.

Stuart Russell is professor of computer science at the University of California, Berkeley, and co-author of the Open Letter on Autonomous Weapons.
