
Proportionality assessments under IHL – A human thing?

13.04.2015

The employment of drones for targeted killings has triggered a debate on the use of lethal force without a direct human presence on the battlefield. As far as the legal framework for today’s remotely piloted drone systems is concerned, this debate must be considered settled: the legal evaluation of their conduct depends on the execution of each specific strike. Generally, their employment will only be lawful under the law of armed conflict (IHL), and only if IHL is complied with.

Other questions posed by the current trend of de-humanisation in modern warfare, however, remain. One of these concerns the treatment of autonomy on the battlefield. The 2013 U.S. “Unmanned Systems Integrated Roadmap”, for instance, which is not limited to drones, relies heavily on the further development of autonomous systems. Although such systems will not materialise in the near future, the question arises whether fully autonomous systems could comply with the laws applicable to their conduct. This blog entry will focus on the narrower question of whether the requirement under IHL that military conduct be proportionate calls for an assessment by human conscience. Intrinsically, this question is linked to whether autonomy on the battlefield is a legal or rather an ethical issue.

The legal framework under IHL

States are not free in their choice of methods or means of warfare. Their choice is, first, limited by what the International Court of Justice called the “cardinal principles” of IHL in its Nuclear Weapons Opinion: the principle of distinction and the prohibition of “weapons, projectiles and material and methods of warfare of a nature to cause superfluous injury or unnecessary suffering.” These principles bind all states as “intransgressible” customary law in conflicts of any character, international as well as non-international. Furthermore, autonomous lethal conduct will have to comply with other precautionary requirements, including a proportionality assessment. This, at first glance, seems to be a task for a human being rather than for artificial intelligence.

The requirement of proportionality

Under IHL, proportionality of conduct means that the civilian casualties caused by an attack must not be excessive in relation to the concrete and direct military advantage anticipated. This test requires the military commander to weigh the expected harm to civilians against that advantage.

In relation to this weighing exercise, the first question concerns the applicable standard, i.e. the perspective from which the assessment is to be made. As a military commander always needs to take into account the specifics of the concrete scenario, one approach takes solely the subjective perspective of the commander as the basis for legal evaluation. On the other hand, proportionality is by nature an objective standard. On that view, the evaluation would need to be made in line with best practices and involve a complete assessment of the circumstances.

State practice and the jurisprudence of international courts and tribunals are inconsistent in this regard. As far as air warfare is concerned, however, a strictly objective approach has consistently been rejected: it would place too heavy a burden on pilots, increasing their risk of becoming the object of an attack themselves. Such a risk does not exist for remotely piloted or even autonomous systems. Moreover, as all rules of IHL are based on the principle of humanity, a robot should not be entitled to the same privileges as a human. Consequently, the conduct of autonomous weapon systems should at least be assessed under the standard of the reasonable military commander, a combination of the strictly objective and the subjective standard.

Autonomous Systems and a Margin of Appreciation?

This standard, however, leaves a commander with a certain margin of appreciation, which would usually be limited by the commander’s rules of engagement. Transferred to the situation of an autonomous system, such rules of engagement could, of course, be pre-programmed. Still, the question remains whether a robot could take a decision within this margin of appreciation.

As artificial intelligence cannot be expected to develop, in the near future, to the point where it could fill this gap, other proposals have been made. The first is to pre-programme autonomous systems in a way that encompasses all possible factual scenarios of war. Yet it is entirely unfeasible to anticipate everything that could happen on a battlefield. Such all-embracing pre-programming could not guide autonomous systems in their decisions.

In addition, the question arises whether all proportionality requirements could be pre-programmed at an abstract level. Certainly, corresponding values could be pre-programmed for certain targets, together with the respective civilian casualties that would be deemed not excessive. However, such a quantitative approach contradicts the very rationale of the proportionality assessment. The killing of a high-ranking military leader could, for instance, be assigned a different relative value than that of an ordinary soldier. Moreover, the ‘big picture’ of the entire conflict also affects these valuations. An approach based on abstractly quantifying the military advantage of an operation is, consequently, not compatible with the proportionality standard under IHL.
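To make the objection concrete, consider a minimal sketch of such a quantitative approach. The target categories, scores and ratio below are purely hypothetical, invented for illustration, and do not reflect any real targeting doctrine:

```python
# Hypothetical sketch of the quantitative approach criticised above.
# All categories, scores and thresholds are invented for illustration.

# Pre-programmed "military advantage" scores per target category.
TARGET_VALUE = {
    "high_ranking_commander": 100,
    "ordinary_soldier": 10,
    "supply_truck": 5,
}

# Pre-programmed ceiling: civilian casualties deemed "not excessive"
# per unit of military advantage.
CASUALTIES_PER_VALUE_UNIT = 0.05


def strike_permitted(target_category: str, expected_civilian_casualties: int) -> bool:
    """Naive, fixed-ratio proportionality check.

    Note what this cannot capture: the commander's situational
    assessment, the 'big picture' of the conflict, or any change in a
    target's relative value over time.
    """
    advantage = TARGET_VALUE[target_category]
    return expected_civilian_casualties <= advantage * CASUALTIES_PER_VALUE_UNIT


if __name__ == "__main__":
    # The same casualty figure yields opposite answers for different
    # target categories, with no regard for context.
    print(strike_permitted("high_ranking_commander", 4))  # True
    print(strike_permitted("ordinary_soldier", 4))        # False
```

Precisely because the scores and the ratio are fixed in advance, such a scheme cannot reflect a target’s changing relative value or the wider course of the conflict, which is the incompatibility identified above.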

Another approach would be to programme autonomous systems to choose randomly among the options that exist within their rules of engagement. The rationale behind this argument is that different human military commanders would also decide diversely, even arbitrarily, based on their individual backgrounds. This approach, however, cannot be accepted in light of Article 6 ICCPR: a deliberately random decision on the use of lethal force would amount to an arbitrary deprivation of life, which that provision prohibits. Additionally, human military commanders fill their margin of appreciation with an assessment founded in their human conscience.
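The core of this proposal can likewise be sketched in a few lines. The option names below are invented; a real set of rules of engagement would of course be far more complex:

```python
import random

# Hypothetical sketch of the 'random choice' proposal discussed above.
# Option names are invented for illustration.


def decide(options_within_roe: list[str]) -> str:
    """Pick uniformly at random among courses of action that the
    pre-programmed rules of engagement already permit.

    This models the proposal's core move: where a human commander would
    fill the margin of appreciation with conscience-based judgement,
    the system substitutes chance.
    """
    return random.choice(options_within_roe)


print(decide(["engage", "hold_fire", "request_human_review"]))
```

The sketch makes the legal objection visible: the outcome over life and death is, by design, arbitrary.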

The reliance on human conscience appears even more appropriate in light of the Martens Clause. According to the fundamental and customary principles embodied in this maxim of IHL, unwritten benchmarks for the evaluation of any targeting decision are always set by established usages, the laws of humanity and the requirements of the public conscience. Autonomous weapon systems lack the capability to have recourse to what the public conscience would dictate; robots need guidance by way of pre-programming. The Martens Clause in particular thus illustrates that considerations which cannot be pre-programmed must always be taken into account. For these reasons, and as a matter of law, autonomous systems cannot be entrusted with the performance of proportionality assessments under IHL.

A response to this text by Rebecca Crootof can be found here. A response to this text by Rieke Arendt can be found here. A response to this text by Felix Boor and Karsten Nowrot can be found here.

 

Sebastian Wuschka, LL.M. (Geneva MIDS), Research Associate, Luther Rechtsanwaltsgesellschaft, Hamburg & Visiting Lecturer, Ruhr University Bochum, Faculty of Law; The views expressed in this blog entry are solely my own. A longer version of the argument will be published in: B. Baade, S. Ehricht, M. Fink, R. Frau, M. Möldner, I. Risini & T. Stirner (eds.), Verhältnismäßigkeit im Völkerrecht (2016).

Cite as: Sebastian Wuschka, “Proportionality Assessments under IHL – A Human Thing?”, Völkerrechtsblog, 13 April 2015, doi: 10.17176/20170403-220547.

Author
Sebastian Wuschka

Sebastian Wuschka is a research fellow at the University of Lausanne, an associated member at Ruhr University Bochum’s Institute for International Law of Peace and Armed Conflict (IFHV) and of counsel with the Complex Disputes team of Luther Rechtsanwaltsgesellschaft mbH in Hamburg.
