
Autonomous weapon systems and proportionality

15.04.2015

A response to Sebastian Wuschka

An autonomous weapon system is “a weapon system that, based on conclusions derived from gathered information and preprogrammed constraints, is capable of independently selecting and engaging targets.” In his recent post, Sebastian Wuschka argues that the use of such weaponry will necessarily violate the law of armed conflict—specifically, the proportionality requirement. Wuschka and I agree that, because artificial intelligence is not now capable of human-like reasoning, we cannot delegate the proportionality analysis to autonomous weapon systems at present. However, Wuschka then concludes that because “[a]utonomous systems . . . cannot be entrusted with the performance of proportionality assessments under IHL,” their use will be a per se legal violation. This is where we part ways: in this post, I discuss how Wuschka’s incorrect conclusion rests on three inaccurate assumptions.

The proportionality requirement

Among other places, the customary in bello proportionality requirement is codified in Article 51(5)(b) of the First Additional Protocol to the 1949 Geneva Conventions. It prohibits as indiscriminate "[a]n attack which may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated."

The military commander who authorizes the attack—not the individual soldier who carries it out—is responsible for conducting the proportionality analysis. Given the difficulty of weighing anticipated incidental harm to civilians and civilian objects against a probable military advantage, commanders enjoy a great deal of discretion in their decisions. Should it be necessary to determine whether a commander is liable for a non-proportional attack, subsequent evaluators will consider the information available at the time and ask whether a reasonable person, making reasonable use of that information, would have expected the attack to cause excessive harm to civilians. This is known as the "reasonable commander" standard.

Autonomous weapon systems are here

Wuschka, like many writing on the subject, assumes that autonomous weapon systems are weapons of the future. However, as I detail in a recent paper, the killer robots are already here. At present, over thirty states employ or are developing weapon systems capable of independently selecting and engaging targets (although, for practical reasons, the vast majority of these systems are used under human supervision and in limited contexts).

Autonomous weapon systems are weapons, not commanders

Wuschka's second inaccurate assumption is that autonomous weapon systems are more akin to commanders than to soldiers or, even more aptly, to other weapons. He suggests that the proportionality of autonomous weapon systems' conduct should be evaluated under the "reasonable commander" standard, implying that he believes autonomous weapon systems are most appropriately analogized to commanders. But, at least at present, autonomous weapon systems are weapons. The "reasonable commander" standard is relevant, but only insofar as it is necessary to evaluate a human commander's decision to deploy or authorize the potential deployment of an autonomous weapon system.

Autonomous weapon systems are currently being used in compliance with the proportionality requirement

Wuschka's third inaccurate assumption is that autonomous weapon systems could only be used in a proportional manner if it were possible to pre-program the appropriate response for "all possible factual scenarios of war." Obviously, one cannot anticipate, let alone address, every possible factual scenario. But the law of armed conflict does not compel this. Instead, all that is required is that a weapon be used lawfully given what is known at the time it is deployed. Not only are autonomous weapon systems capable of being used in proportional engagements, they already have been so used.

First, an otherwise autonomous weapon system might be operated in a semi-autonomous or human-supervised mode. The South Korean SGR-A1, a stationary armed robot that monitors the demilitarized zone, allegedly has an autonomous mode in which it can select and engage targets with no human oversight; it is thus an autonomous weapon system. However, it is reportedly operated in a semi-autonomous mode, such that it only uses force after consultation with human supervisors (and, presumably, in compliance with the proportionality requirement).

Alternatively, a commander might lawfully authorize the use of an autonomous weapon system if he or she first determines that any of its possible actions within a defined battlespace and a limited temporal span would comply with the proportionality requirement. For example, the Israeli Harpy is an autonomous weapon system designed to detect, attack, and destroy enemy radar emitters. Although it independently selects and engages targets, and thus the individual launching it does not know which specific radars will be attacked, the commander authorizing its use knows that it will only engage radars matching the programmed parameters within a limited amount of time and a certain space, permitting an ex ante proportionality evaluation.

Just like any other weapon, autonomous weapon systems can be used in non-proportional ways. This does not mean their use is per se unlawful; instead, it underscores the importance of military commanders having adequate training in what any given weapon can and cannot do (as U.S. policy requires). Given the current state of the technology, a reasonable commander should not authorize the deployment of an autonomous weapon system without training in and knowledge of its capabilities and likely actions. And, as a general rule, their use should not be authorized for long-range unsupervised missions, in unknown territory or environments, or in any other situation where the weapon might respond unpredictably and thereby violate the proportionality requirement.

Conclusion

Not only can autonomous weapon systems be used in compliance with the proportionality requirement, they are already being lawfully used today. The crucial question is therefore not whether the use of autonomous weapon systems should be prohibited as per se unlawful, but rather how best to regulate their use.

A response to this text by Rieke Arendt can be found here. A response by Felix Boor and Karsten Nowrot can be found here.

Rebecca Crootof, PhD Candidate in Law, Yale Graduate School of Arts and Sciences; Resident Fellow, Yale Law School Information Society Project

Cite as: Rebecca Crootof, “Autonomous Weapon Systems and Proportionality”, Völkerrechtsblog, 15 April 2015, doi: 10.17176/20170403-220952.
