OPINION

The question of autonomous systems is not why but how

A combined team of human soldiers and autonomous systems has the potential to independently and objectively monitor ethical behavior on the battlefield by all parties.


The wholesale slaughter of civilians in warfare is utterly unacceptable. Humanity has waged war since time immemorial despite the disastrous consequences to noncombatants, and civilian deaths remain extraordinarily high even today. The Geneva Conventions and their Additional Protocols have helped codify what is and isn’t acceptable in the ways we kill each other on and off the battlefield. But that body of law has not kept pace with new technology.

Technology, including robotics, can and must play a role in reducing, if not preventing, this horrific and barbaric side effect of war. The question is not why but how. Lethal autonomous weapon systems (LAWS) — “killer robots,” in the popular press — may counterintuitively protect civilians better than soldiers do, reducing collateral damage while still achieving the goals of the mission. They should work in conjunction with flesh-and-blood soldiers, not replace them, to ensure we don’t forget how truly horrible war is.

Why would the military want LAWS? Robots allow a single warfighter to do the job of many. They can potentially fight over larger areas for longer periods, 24/7, with no sleep and no bathroom breaks; they can let a soldier see farther and reach farther into the battlefield by serving as a proxy; and, perhaps most important, they can reduce friendly and noncombatant casualties, assuming all this is done correctly.

The underlying question is, can robots ultimately achieve better legal and ethical compliance with international humanitarian law than human beings do in military situations? While I do not believe a lethal autonomous system will ever be perfectly ethical on the battlefield, I am convinced that such systems will eventually be able to perform more ethically than human soldiers.

What is the basis for this belief? Unfortunately, on average, warfighter compliance with humanitarian law leaves much to be desired. There are clear individual exceptions, and robots will not be heroes. But as the surgeon general’s final report on Operation Iraqi Freedom 05-07 (2006) documents, respect for noncombatant life is lagging. As a result, civilian casualties occur at staggering rates due to carelessness, atrocities, anger, and frustration, which unfortunately are part of the human condition.

Why may LAWS be able to perform better than humans under battlefield conditions? Their ability to act conservatively. For humans, the stakes are far higher: they could die in combat. LAWS can assume far more risk because, for them, there are no such consequences. They have no inherent right of self-defense, and a commander can sacrifice them if needed to better protect both noncombatants and friendly forces. Robots will probably possess a range of sensors better suited to battlefield observation than human senses are, cutting through the fog of war. They can be designed without the emotions that cloud judgment or erupt into anger and frustration at ongoing battlefield events. They can integrate more information from more sources, far faster than a human possibly could in real time, before responding with lethal force. Finally, when working in a combined team of human soldiers and autonomous systems, they have the potential to independently and objectively monitor ethical behavior on the battlefield by all parties and to report any human rights infractions they observe.

I have long argued that the threshold for acceptance of this technology should be its ability to outperform soldiers with respect to noncombatant casualties. Meeting that threshold would make LAWS the next generation of precision-guided munitions: guided now in the sense of avoiding civilian casualties, not simply of acquiring targets.

What to do then? Three things:

▪ Continue research into embedding ethical, legal, and moral reasoning in robots and artificial intelligence.

▪ Ensure they are not released into warfare until they have demonstrated humanitarian benefits.

▪ And try to come to grips, through rational discussion, with the way forward, both domestically and internationally.

Fifteen years ago, few were talking about these issues; now there is a cacophony of voices, leading to confusion, continuing arguments over definitions, and politics that block progress. Yet even widely disparate positions can yield ways forward, as evidenced by a recent article written by a group of researchers, myself included, who hold radically different points of view.

Fearmongering and allusions to science fiction serve little useful purpose in moving the discussion forward. No one wants the Terminator. The desiderata include providing the best technology for the young men and women we consistently put in harm’s way; reducing civilian harm while maintaining mission effectiveness; and doing all this without losing our core humanitarian principles, many of which are already enshrined in international humanitarian law. A moratorium on LAWS is appropriate until they can be shown to be safe and effective, rather than a shoot-first, ask-questions-later approach. If collateral damage cannot be reduced using this technology, then it should indeed be banned. But we will not have that answer until further research is completed.

As I stated in my book “Governing Lethal Behavior in Autonomous Robots,” my hope is that LAWS will never be needed, now or in the future. But humankind’s tendency toward war seems overwhelming and inevitable. At the least, if we can reduce civilian casualties in keeping with what the Geneva Conventions promote and the just war tradition subscribes to, the result will have been a humanitarian effort, even while staring directly into the face of war.

Ronald C. Arkin is Regents’ Professor and director of the Mobile Robot Laboratory in the College of Computing at Georgia Tech.