
Autonomous weapon systems and proportionality

27.05.2015

A response to Sebastian Wuschka and Rebecca Crootof

Recently, two statements on autonomous weapon systems have been published on this blog. In his post, Sebastian Wuschka argues that, because they are not human, autonomous weapon systems “can never be entrusted with the performance of proportionality assessments under IHL”. In her response, Rebecca Crootof states that this is not even necessary, given that it is incumbent on the human commander alone to carry out the proportionality assessment before deploying or authorizing the deployment of an autonomous system. In what follows, I would like to outline why, with truly autonomous systems, the responsibility for the proportionality assessment cannot lie with the human commander. However, as will be shown below, the inability to perform a proportionality analysis does not necessarily render an autonomous system illegal under IHL. As a side note, finally, I will also show that, if Rebecca’s definition of autonomy is applied correctly, autonomous systems are not, in fact, “here to stay”.

Defogging the terminology

In her post, Rebecca defines autonomous weapon systems as “systems that based on conclusions derived from gathered information and preprogrammed constraints are capable of independently selecting and engaging targets”. I agree with her definition insofar as it captures the distinction between autonomous systems and systems which are merely automated.

In her article, Rebecca further comments on how a philosopher, a political scientist and an engineer each approach the concept of autonomy differently, shaped by the focus of their respective fields. In my opinion, the relevant criterion from the legal point of view can only be whether the selection of targets relies entirely on a decision (pre)made by humans, or whether a certain margin of appreciation rests with the machine. This is because, in addition to setting down criteria on lawful targets (e.g. combatants/civilians; military objectives/civilian objects), the law of targeting also determines the standards of the lawful targeting process (e.g. precautionary measures). It is therefore imperative to know to whom (human or machine) those standards apply. If a system is to be considered automated under the legal definition, precautionary measures have to be taken solely by the human commander sending the system into battle. If, however, a system is autonomous and, therefore, chooses its own targets, the duty to take precautionary measures falls – at least partly – upon the system itself.

Therefore, automated systems are systems with quantified targeting input parameters which leave no margin of appreciation (input a → output b), whereas autonomous systems are systems with quantified targeting input parameters that leave a certain margin of appreciation (input a → output b, c or d).
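
To make this distinction concrete, here is a minimal sketch in Python. All function names, target labels and priority values are purely hypothetical illustrations; they do not describe any real system:

```python
from typing import List, Optional

def automated_engage(detection: str) -> str:
    """Automated system: a fixed input-output mapping with no margin
    of appreciation (input a -> output b)."""
    return "fire" if detection == "preset_target_signature" else "hold"

def assess_priority(target: str) -> int:
    """Illustrative stand-in for whatever internal assessment an
    autonomous system would apply when choosing among targets."""
    priorities = {"command_post": 3, "radar_station": 2, "supply_truck": 1}
    return priorities.get(target, 0)

def autonomous_engage(candidates: List[str]) -> Optional[str]:
    """Autonomous system: the machine itself selects among several
    possible outputs (input a -> output b, c or d)."""
    if not candidates:
        return None  # nothing to engage
    return max(candidates, key=assess_priority)  # margin of appreciation
```

The legally decisive difference sits in the last line: the autonomous system, not the human, resolves which of several admissible outputs to produce.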

Autonomous systems are not (yet) here – automated systems are

If we apply this definition to existing weapon systems, I cannot agree with Rebecca that “autonomous systems are here to stay”. Indeed, taking her two examples of the Samsung SGR-A1 and the IAI Harpy, I conclude that neither of the two systems qualifies as autonomous.

According to the available information, the SGR-A1 gathers information via cameras before processing it with software that distinguishes moving, human-like objects from non-moving objects. Whenever a human-like object is identified, the SGR-A1 fires its weapons (in practice, this is preceded by either a verbal identification process or an authorization by a human operator). The input “human-like object” therefore necessarily entails the output “fire” (occasionally interrupted by the additional loops just mentioned), with no margin of appreciation for the system.

The same holds true for the IAI Harpy, a system designed to detect and destroy enemy radar stations. To my knowledge, once the parameters of a radar station are met, the system will fire and destroy the identified target with no margin of appreciation – for example, to choose the second or the third radar station it encounters. Put differently, the input “radar station” leads to the output “fire”.
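
On this reading, both systems reduce to the same deterministic pattern. A minimal sketch, assuming a Harpy-like detection loop (the contact representation and the parameter check are hypothetical placeholders, not the actual system logic):

```python
from typing import List, Optional, Tuple

def harpy_style_loop(contacts: List[str]) -> Tuple[str, Optional[str]]:
    """Fires at the first contact matching the preset parameters.
    Nowhere in the loop could the system prefer the second or third
    radar station encountered: input "radar station" -> output "fire"."""
    for contact in contacts:
        if contact == "radar_station":  # preset, quantified parameter
            return "fire", contact      # deterministic output
    return "hold", None
```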

A means of warfare that must meet the standard of a reasonable commander

If, following this definition, we hold that the determining criterion of autonomy in the legal sense is whether or not the selection of targets relies on a margin of appreciation exercised by the system itself, Sebastian Wuschka is right in his assumption that an autonomous weapon system has to be equated with a military commander. This does not mean that the system itself qualifies as a combatant; rather, it defines the targeting standard which an autonomous system has to meet. As the proportionality of an attack which is likely to cause “incidental loss of civilian life, injury to civilians or damage to civilian objects” needs to be evaluated by “those who plan or decide upon an attack” (Art. 57(2)(a) AP I), the final proportionality assessment falls to the autonomous system.

A proportionality assessment is made in two steps. In the first, the military advantage of the attack and the expected loss of civilian life, injury to civilians or damage to civilian objects have to be evaluated. In my opinion, this first step poses no inherent problems, because computer programs simulating the effects of an attack are already in use today, assisting military commanders in making proportionality assessments. The same programs could therefore easily be built into autonomous systems. In the second step, the military advantage has to be weighed against the expected loss and damage caused by the attack. I agree with Rebecca that autonomous systems will probably be programmed for a mission close at hand. As the operational area is known, and the expected targets will therefore be determinable, it would be possible to pre-assign a certain value to specific targets. Thus, high-priority targets like military bases could be assigned a high weight in the military advantage calculation, whereas vulnerable civilian objects like schools could be assigned a high weight in the collateral damage calculation. This could also be done for human targets, as facial recognition software is already fairly advanced.
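
A minimal sketch of how such pre-assigned values might feed into the two steps. All weights and object names are hypothetical, and the comparison in step two is a naive placeholder, precisely because – as noted directly below – no accepted method for quantifying the weighing exists:

```python
from typing import List, Tuple

# Hypothetical pre-assigned weights for a known operational area.
MILITARY_ADVANTAGE = {"military_base": 9.0, "radar_station": 6.0}
COLLATERAL_WEIGHT = {"school": 9.5, "hospital": 10.0, "residence": 7.0}

def step_one(target: str, nearby_objects: List[str]) -> Tuple[float, float]:
    """Step 1: evaluate the military advantage and the expected
    incidental harm (in practice via attack-simulation programs)."""
    advantage = MILITARY_ADVANTAGE.get(target, 0.0)
    expected_harm = sum(COLLATERAL_WEIGHT.get(obj, 0.0)
                        for obj in nearby_objects)
    return advantage, expected_harm

def step_two(advantage: float, expected_harm: float) -> bool:
    """Step 2: the weighing itself. A bare comparison stands in here
    purely for illustration; it is not a legally accepted metric."""
    return advantage > expected_harm
```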

Nonetheless, as Sebastian has correctly pointed out, a method to quantify the ultimate weighing process has yet to be found.

No inherent violations of IHL

This does not, however, affect the legality of the use of autonomous weapon systems under IHL. Without further commenting on a potential violation of Art. 6 ICCPR (which I doubt to be applicable) and the Martens Clause (which I doubt to be binding law for targeting decisions), I think there is no absolute need for autonomous systems to be able to comply with the principle of proportionality. If no means of quantifying the second step of the proportionality assessment can be found, then autonomous weapon systems are simply not allowed to perform the proportionality assessment. Taken to its full consequence, this means that they can only legally conduct attacks which are not likely to cause incidental damage to civilians or civilian objects, that is, attacks that are likely to affect solely combatants or military objectives. Whether or not this is the case for the individual attack at hand is then to be assessed by the system itself, using the above-mentioned simulation programs.
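
The constraint just described could be expressed as a simple gate on the output of such a simulation; the function name and the zero threshold are illustrative assumptions only:

```python
def may_attack_autonomously(expected_civilian_harm: float) -> bool:
    """Absent a quantifiable weighing method, the system may only
    proceed where the simulated attack is expected to affect solely
    combatants or military objectives."""
    return expected_civilian_harm == 0.0
```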

It can therefore be concluded that a general prohibition of the use of autonomous weapon systems cannot, at least, be based upon the principle of proportionality.

 

Rieke Arendt, LL.M. (Cantab.), PhD Candidate in Law, University of Potsdam.

 

Cite as: Rieke Arendt, “Autonomous Weapon Systems and Proportionality”, Völkerrechtsblog, 27 May 2015, doi: 10.17176/20170421-172502.

