{"id":5878,"date":"2020-11-30T00:00:00","date_gmt":"2020-11-30T14:11:52","guid":{"rendered":"https:\/\/voelkerrechtsblog.org\/?post_type=articles&#038;p=5878"},"modified":"2021-01-08T09:48:49","modified_gmt":"2021-01-08T08:48:49","slug":"the-new-era-of-disinformation-wars","status":"publish","type":"post","link":"https:\/\/voelkerrechtsblog.org\/de\/the-new-era-of-disinformation-wars\/","title":{"rendered":"The new era of disinformation wars"},"content":{"rendered":"<p>While the manipulation of photographs has <a href=\"https:\/\/www.dailymail.co.uk\/news\/article-4984364\/How-Hitler-Mussolini-Lenin-used-photo-editing.html\">traditionally been deemed a State intelligence privilege<\/a>, today\u2019s technological evolution allows anyone to effortlessly modify digital material \u2013 deepfakes being the newest, and arguably most dangerous, trend of such practices. Deepfake algorithms use so-called \u2018deep learning\u2019 artificial intelligence (AI) to create new audio and video by replacing or merging one\u2019s voice and\/or face with manipulated and artificial data, which automatically fits the output dimensions and conditions. <a href=\"https:\/\/www.cpomagazine.com\/cyber-security\/the-cutting-edge-of-ai-cyber-attacks-deepfake-audio-used-to-impersonate-senior-executives\/\">A short recording of one\u2019s voice suffices<\/a> for an AI to create a \u201cvoice skin\u201d that can be processed to say virtually anything. Although deepfakes are currently <a href=\"https:\/\/www.theguardian.com\/technology\/2018\/nov\/12\/deep-fakes-fake-news-truth\">mainly used for humorous purposes<\/a> (see examples <a href=\"https:\/\/www.youtube.com\/watch?v=gLoI9hAX9dw\">here<\/a>), their use for <a href=\"https:\/\/peakrevenuelearning.com\/news-tips\/2019\/9\/16\/forget-email-scammers-use-ceo-voice-deepfakes-to-con-workers-into-wiring-cash\">malicious<\/a> and military purposes seems inevitable. 
Indeed, deepfakes offer the potential to deceive and misinform adversaries and gain significant military advantages, while debunking and attributing the misinformation remains highly difficult. Against this background, and with social media spreading information to a massive community within seconds, feigning an alternative reality may set off an uncontrollable chain of events with detrimental consequences for the civilian population in conflict-ridden areas.<\/p>\n<p>Aiming to determine whether international humanitarian law (IHL) sufficiently regulates the use of deepfakes, this Bofax examines if and how existing IHL norms apply to deepfakes, particularly against the backdrop of the <a href=\"https:\/\/www.cambridge.org\/core\/books\/tallinn-manual-20-on-the-international-law-applicable-to-cyber-operations\/E4FFD83EA790D7C4C3C28FC9CA2FB6C9\">2017 Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations<\/a> (Tallinn Manual 2.0).<\/p>\n<p><a href=\"https:\/\/ihl-databases.icrc.org\/applic\/ihl\/ihl.nsf\/Article.xsp?action=openDocument&amp;documentId=FEB84E9C01DDC926C12563CD0051DAF7\">Article 36 Additional Protocol I<\/a> to the Geneva Conventions (API) facilitates the application of IHL to contemporary developments by demanding the compliance of \u2018new\u2019 means and methods of warfare with established IHL principles. However, as is the case with all cyber operations, the applicability of IHL rules to deepfakes proves to be no clear-cut matter. The most comprehensive, yet non-binding, international guideline on cyber warfare is the Tallinn Manual 2.0, which unfortunately only sparsely touches on the implications of different forms of disinformation in armed conflict and makes no mention of deepfakes. According to Rule 80 Tallinn Manual 2.0, the <em>existence<\/em> of an armed conflict is a prerequisite for the applicability of IHL to cyber operations. 
Thus, deepfakes employed during an ongoing armed conflict are governed by the same IHL rules as the \u2018traditional\u2019 means and methods of warfare employed in that conflict \u2013 notwithstanding that these rules might not be sufficient or appropriate in the context of information warfare.<\/p>\n<p><strong>Perfidy and ruses of war<\/strong><\/p>\n<p>The deception of an adversary in armed conflicts by dissemination of false information is a <a href=\"https:\/\/perma.cc\/35HX-N7LN\">contemporary method of warfare<\/a>. In principle, deepfakes are nothing but a more sophisticated, hyper-realistic continuation of this practice, as the following examples show:<\/p>\n<p>Example 1: State A produces a deepfake of a representative of the International Committee of the Red Cross inviting both adversaries to, e.g., peace talks. State A sends the deepfake to the military commander of State B with the intent to attack State B\u2019s representatives on their way to the faked meeting.<\/p>\n<p>Example 2: State A produces a deepfake in which State B\u2019s military commander orders the armed forces of State B to retreat from strategically important cities. State A then spreads the deepfake video to the armed forces of State B via social media, with the intent of gaining a military advantage.<\/p>\n<p>IHL offers black letter law to determine which acts of deception are permitted. According to <a href=\"https:\/\/ihl-databases.icrc.org\/applic\/ihl\/ihl.nsf\/Article.xsp?action=openDocument&amp;documentId=3EA868BE16BCBB86C12563CD0051DB0B\">Article\u00a037(1) API<\/a>, perfidious acts \u2013 those which invite the confidence of the adversary with the intent to betray that confidence \u2013 are prohibited. 
Example 1 clearly falls within the scope of Article 37(1) API and is therefore prohibited.<\/p>\n<p>In contrast, under <a href=\"https:\/\/ihl-databases.icrc.org\/applic\/ihl\/ihl.nsf\/Article.xsp?action=openDocument&amp;documentId=3EA868BE16BCBB86C12563CD0051DB0B\">Article 37(2) API<\/a> \u201cacts which are intended to mislead an adversary or to induce him to act recklessly but which infringe no rule of international law applicable in armed conflict and which are not perfidious because they do not invite the confidence of an adversary\u201d constitute permitted ruses of war. Article 37(2) API names \u201cmisinformation\u201d as an example of a permissible ruse. Rule 123 Tallinn Manual 2.0 cites <em>inter alia<\/em> the spreading of disinformation causing an adversary to form a false impression of what is actually happening, and \u201cbogus orders purporting to have been issued by the enemy commander\u201d as prime examples of permissible ruses. Accordingly, example 2 will most likely be considered a permissible ruse, provided that no other rule of IHL is violated.<\/p>\n<p><strong>Disinformation, civilians, and the notion of \u201cattack\u201d<\/strong><\/p>\n<p>In the assessment of the legality of deepfakes, their impact on the civilian population is pivotal.<\/p>\n<p>The duty of constant care (<a href=\"https:\/\/ihl-databases.icrc.org\/applic\/ihl\/ihl.nsf\/Article.xsp?action=openDocument&amp;documentId=50FB5579FB098FAAC12563CD0051DD7C\">Article 57(1) API<\/a>), the principles of distinction (<a href=\"https:\/\/ihl-databases.icrc.org\/applic\/ihl\/ihl.nsf\/Article.xsp?action=openDocument&amp;documentId=8A9E7E14C63C7F30C12563CD0051DC5C\">Article 48 API<\/a>) and proportionality (<a href=\"https:\/\/ihl-databases.icrc.org\/applic\/ihl\/ihl.nsf\/Article.xsp?action=openDocument&amp;documentId=4BEBD9920AE0AEAEC12563CD0051DC9E\">Article 51(5)(b) API<\/a>), and the prohibition of acts whose primary aim is to spread terror among the civilian population (<a 
href=\"https:\/\/ihl-databases.icrc.org\/applic\/ihl\/ihl.nsf\/Article.xsp?action=openDocument&amp;documentId=4BEBD9920AE0AEAEC12563CD0051DC9E\">Article 51(2) API<\/a>, <a href=\"https:\/\/ihl-databases.icrc.org\/applic\/ihl\/ihl.nsf\/Article.xsp?action=openDocument&amp;documentId=A366465E238B1934C12563CD0051E8A0\">Article 13(2) Additional Protocol II<\/a> to the Geneva Conventions), may limit the lawful use of deepfakes in armed conflicts.<\/p>\n<p>Example 3: State A produces a deepfake depicting a conversation between State B\u2019s president and military commander about an imminent nuclear attack on the capital city of State C. State A disseminates the deepfake through social media platforms primarily to cause panic across the civilian populations of States B and C.<\/p>\n<p>While the duty of constant care and the principle of distinction apply in all military operations, the principle of proportionality and the prohibition of acts whose primary aim is to spread terror only govern \u201cattacks\u201d, acts of violence, or threats thereof. This is where specific difficulties regarding the classification of deepfakes and the applicability of these rules arise. There is virtually no case law defining what constitutes an \u2018attack\u2019 in the context of cyber conflicts. 
\u201cAttack\u201d, as per <a href=\"https:\/\/ihl-databases.icrc.org\/applic\/ihl\/ihl.nsf\/Article.xsp?action=openDocument&amp;documentId=17E741D8E459DE2FC12563CD0051DC6C\">Article 49(1) API<\/a>, \u201cmeans acts of violence against the adversary [\u2026].\u201d Rule 92 Tallinn Manual 2.0 states that violence \u201cmust be considered in the sense of violent consequences and is not limited to violent acts.\u201d However, according to Rule 98, a Twitter message \u201csent out in order to cause panic, falsely indicating that a highly contagious and deadly disease is spreading rapidly throughout the population [\u2026] is neither an attack [\u2026] nor a threat thereof [\u2026]\u201d and consequently does not violate Article 51(2) API. But can fake news under no circumstances be an attack, an act of violence, or a threat thereof?<\/p>\n<p>While the exemplary tweet disseminates \u2018news\u2019 without claiming state involvement, the deepfake illustrated in example 3 is of an entirely different quality. As a deepfake can only be recognized as such with great difficulty and will therefore \u2013 at least initially \u2013 be perceived as authentic, the effects on State C and its civilian population will certainly not be any less grave than those of a real announcement of a nuclear attack. However, Rule 92 Tallinn Manual 2.0 deems lawful those operations which cause mere \u201cinconvenience or irritation\u201d without foreseeably resulting in injury to individuals or damage to physical objects. 
Thus, under the current framework, due to a lack of foreseeable injury, example 3 would be considered a lawful non-attack, although the resulting panic and terror could be tremendous.<\/p>\n<p>Similarly, the distribution of the deepfake under example 2 via social media raises questions regarding its conformity with the principle of distinction and the prohibition of indiscriminate attacks (<a href=\"https:\/\/ihl-databases.icrc.org\/applic\/ihl\/ihl.nsf\/Article.xsp?action=openDocument&amp;documentId=4BEBD9920AE0AEAEC12563CD0051DC9E\">Article 51(4) API<\/a>). The interconnectedness of cyberspace makes it practically impossible to strictly <a href=\"https:\/\/www.armyupress.army.mil\/Journals\/NCO-Journal\/Archives\/2018\/June\/Soldiers-and-Social-Media\/\">distinguish between civilian and military uses of (social) media<\/a> and renders it reasonably foreseeable that any deepfake may (accidentally) fall into the hands of civilians. Regardless of whether the objectives of the deepfake are civilian or military, the notion of \u2018attack\u2019 is decisive. Rule 93 Tallinn Manual 2.0 (Distinction) stipulates that operations directed at civilians are only prohibited when they amount to an \u2018attack\u2019; operations directed at military objectives must comply with the principle of proportionality \u2013 which only applies to attacks. According to Rule 105 Tallinn Manual 2.0, cyber weapons creating a chain of events beyond the control of the attacker are indiscriminate by nature. Although <a href=\"https:\/\/www.theguardian.com\/commentisfree\/2018\/mar\/19\/fake-news-social-media-twitter-mit-journalism\">social media spreads information uncontrollably<\/a>, a deepfake is not indiscriminate as long as it does not <em>foreseeably<\/em> cause injury or damage \u2013 which the deepfake of example 2 clearly does not. 
Accordingly, example 2 would also be lawful.<\/p>\n<p><strong>Conclusion<\/strong><\/p>\n<p>Even though IHL can, owing to longstanding practice, somewhat grasp the concept of disinformation in warfare, the existing legal framework is not equipped to appropriately react to the dimensions deepfakes add to the equation. Due to technological advances, it has become necessary for the law to differentiate between the available forms and contents of \u2018fake news\u2019. The most pressing issues, namely the subsumption of deepfakes under the notions of \u2018attack\u2019, \u2018act of violence\u2019, or threat thereof, and the requirements of foreseeability and degree of possible harm, are in dire need of clarification, as both are determinative of the applicability of pertinent IHL norms and the protection of civilians. First steps in countering deepfakes could be the installation of safeguard mechanisms, e.g. <a href=\"https:\/\/engineering.nyu.edu\/news\/outsmarting-deep-fakes-researchers-devise-ai-driven-imaging-system-protects-authenticity\">digital watermarks<\/a>, and the inclusion of the concept of deepfakes in future international manuals to facilitate the discourse between states and ascertain current <em>opinio iuris<\/em>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>While the manipulation of photographs has traditionally been deemed a State intelligence privilege, today\u2019s technological evolution allows anyone to effortlessly modify digital material \u2013 deepfakes being the newest, and arguably most dangerous, trend of such practices. 
Deepfake algorithms use so-called \u2018deep learning\u2019 artificial intelligence (AI) to create new audio and video by replacing or merging [&hellip;]<\/p>\n","protected":false},"author":8,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[6639],"tags":[],"authors":[6056],"article-categories":[5108],"doi":[],"class_list":["post-5878","post","type-post","status-publish","format-standard","hentry","category-uncategorized","authors-lisa-m-cohen","article-categories-bofaxe"],"acf":{"subline":"Does international humanitarian law sufficiently regulate the use of deepfakes?"},"meta_box":{"doi":"10.17176\/20210107-183308-0"},"_links":{"self":[{"href":"https:\/\/voelkerrechtsblog.org\/de\/wp-json\/wp\/v2\/posts\/5878","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/voelkerrechtsblog.org\/de\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/voelkerrechtsblog.org\/de\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/voelkerrechtsblog.org\/de\/wp-json\/wp\/v2\/users\/8"}],"replies":[{"embeddable":true,"href":"https:\/\/voelkerrechtsblog.org\/de\/wp-json\/wp\/v2\/comments?post=5878"}],"version-history":[{"count":1,"href":"https:\/\/voelkerrechtsblog.org\/de\/wp-json\/wp\/v2\/posts\/5878\/revisions"}],"predecessor-version":[{"id":11244,"href":"https:\/\/voelkerrechtsblog.org\/de\/wp-json\/wp\/v2\/posts\/5878\/revisions\/11244"}],"wp:attachment":[{"href":"https:\/\/voelkerrechtsblog.org\/de\/wp-json\/wp\/v2\/media?parent=5878"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/voelkerrechtsblog.org\/de\/wp-json\/wp\/v2\/categories?post=5878"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/voelkerrechtsblog.org\/de\/wp-json\/wp\/v2\/tags?post=5878"},{"taxonomy":"authors","embeddable":true,"href":"https:\/\/voelkerrechtsblog.org\/de\/wp-json\/wp\/v2\/authors?post=5878"},{"t
axonomy":"article-categories","embeddable":true,"href":"https:\/\/voelkerrechtsblog.org\/de\/wp-json\/wp\/v2\/article-categories?post=5878"},{"taxonomy":"doi","embeddable":true,"href":"https:\/\/voelkerrechtsblog.org\/de\/wp-json\/wp\/v2\/doi?post=5878"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}