Before writing The End of the American Empire, I was fortunate to study at the prestigious War Studies department at King’s College London. The program - International Relations and Contemporary Warfare - spanned three years and allowed me to explore a range of compelling, often urgent topics. Though I didn’t realise it at the time, this course lit the fuse for much of my later work. It laid the intellectual and emotional foundations that would eventually shape my first book - and continue to shape my thinking today as I write my second.
Recently, I’ve been struck by how many of those early essays remain just as relevant now as when I first wrote them. So, I’ve decided to release a selection of them - unedited and in their original form - through my Substack, one each month.
As I re-read these essays, I’m intrigued to see how my views have evolved, and how many of the seeds planted back then have grown into the core arguments of my published writing.
Join me over the coming months for a deep dive into some of the most pressing questions of our time, beginning with a piece from 2020 that feels more relevant than ever: “Should Killer Robots Be Banned?” I’ve also filmed a video on the topic which you can check out on my Substack.
Thanks for reading - and let me know what you think.
SHOULD KILLER ROBOTS BE BANNED?
By Patrick Watts, 2020.
This is not only the moral question of whether killer robots should be banned, but also the practical question of whether a ban, or even a treaty regulating their use, would be effective or enforceable. This study concentrates on human-out-of-the-loop autonomous weapons systems (AWS), but also considers their precursor, semi-autonomous weapons, which inform the discussion. It is also a question of means versus methods, and of which should, and indeed could, best be regulated.
Semi-Autonomous Weapons
Semi-autonomous weapons have been deployed for decades; air defence systems were used in World War II, but this question focuses on offensive weaponry. Proliferation renders prohibition impossible: UAVs are the cornerstone of the US drone war program, and other states such as Iran are now realizing their potential for smaller operations. UAV criticism is often couched in terms of means; removing the protagonist far from the battlefield simplifies the decision to engage, and killing is depersonalized. This argument has merit, but the aim of every military advancement in history has been to simplify the act of killing whilst simultaneously reducing the risk of harm to oneself. UAVs provide the means to kill further from the battlefield than ever before, but this is simply a natural progression: longbow to rifle to missile. It is difficult to argue that returning to the bludgeoning, though undoubtedly more personal, hand-to-hand combat of medieval times would represent an improvement.
The legality of UAV assassination policy is, however, worthy of discussion, and should be viewed as a question of methods, not means. It raises questions about what constitutes a battlefield in the modern age, and about the status of combatants and non-combatants. Critics rightly deride the normalization of UAV assassination through “official secrecy, government propaganda and some uncritical press coverage.” This has emboldened policy makers, who no longer feel required to claim legal justification on the basis of self-defence or imminent threat. President Trump claimed that legal justification for the drone attack on General Suleimani “doesn’t really matter” due to “the terrorist’s horrible past”. The concern is the method of waging war: the normalization of state policies of extrajudicial assassination. Killing by drone is no different from killing by the sniper rifle of a covert special forces unit, but this increasingly brazen violation of international law attracts greater scrutiny.
AWS
It must be stated that true AI is, even by optimistic estimates, unlikely to exist for at least three decades. This is not guaranteed, and it is possible that innovations such as machine learning, when fully understood, will shorten this timeframe. Elon Musk has warned that AI is akin to “summoning the devil” and could become “an immortal dictator from which we would never escape”. In terms of geopolitical ramifications, Vladimir Putin predicted in 2017 that “whoever becomes the leader in the AI sphere will become ruler of the world”, whilst China is aggressively pursuing military AI as “a major strategy to enhance competitiveness and protect national security.” The US has set out a vision to 2036 beginning with “a current goal of supervised autonomy (human-on-the-loop)” but with an ultimate aim of full autonomy. AWS present a number of legal and ethical concerns, and whilst examination of the ramifications of these developments for military use has begun, more study is required.
Legality and Responsibility
As a weapon of war, there is no difference between AWS and any other in terms of adhering to the principles and responsibilities of jus in bello: targets must be lawful, attacks proportionate. With semi-autonomous systems, human involvement also allows a degree of discretion and interpretation that is unlikely to be possible with AWS. AWS cannot therefore enjoy the existing legal flexibility for misjudgements based on the “fog of war”, assessments of proportionality, collateral damage and relative military advantage. AWS could arguably reduce these misjudgements thanks to enhanced computing power, sensory data processing and non-human rationality, but improvements are required; identification errors, and even unconscious bias, in program algorithms are commonplace.
The major issue centers on accountability. Who is ultimately responsible if a future AWS is adjudged to have violated the principles of IHL or jus in bello? The field commander who gave the order to deploy, or the general who devised the strategy? The president who approved the original AWS strategy, or the engineer who wrote the algorithm? This accountability dilemma has hampered the development of autonomous cars, but it has far graver consequences in military applications. The idea of individual responsibility has been enshrined in international law since Nuremberg, and although scholars like James Walsh have tried, it is not clear how it can be applied to AWS in their current form.
Is regulation viable?
In 2012, an influential report recommended amongst other things:
“the prohibition of the development, production and use of fully autonomous weapons through an international legally binding instrument.”
Whilst a noble sentiment, the authors can be accused of naivety. Even in 2012 an international arms race toward AWS existed, and no state would willingly be left behind for fear of the consequences. This is the classic prisoner’s dilemma that has characterized every arms race in history. From a US perspective, the desire to retain military superiority over its rivals would make any action limiting its development in this field injurious. For rivals like China, the opportunity to close the military gap, or even surpass the US through new technology, is simply too good to miss. This applies equally to “super soldier” human enhancement programs, using biological and technological methods, which raise ethical questions in the West but are pursued vigorously in China. Even Vladimir Putin appears concerned at the prospect of “a man who can fight without fear, compassion, regret or pain”, although recent moves to mirror Chinese advances in genetic engineering show his primary fear remains being left behind.
The second question regards enforcement. An obligation exists for states to conduct a “thorough review” of the legality of any new weapons system before deployment, to ensure compliance with international humanitarian law. This is a toothless threat, unenforceable in practice. Nuclear weaponry was only curtailed by the threat of Mutually Assured Destruction, but even this taboo has re-entered military conversations, with the US and its rivals reneging on or reinterpreting existing commitments and treaties in search of strategic advantage. The use of poison gas in Syria, a weapon banned 23 years ago and a “red line” for the international community, did not lead to the collective punishment set out in the UN Charter. In the last two decades, collective enforcement has weakened whilst unilateral military actions have increased. This is partly due to increasingly assertive foreign policies from Russia and China, but primarily due to the US losing its status as moral leader and driver of collective security through the illegal Iraq war, torture redefined as “enhanced interrogation”, extrajudicial assassination programs, and, recently, threats to destroy Iranian cultural sites in direct contravention of the Hague Convention. Regardless of the weaponry discussed, the shortcomings of collective security remain: treaties are not signed by every state, realist goals supersede treaty commitments, and a fractured international community lacks the desire or consensus to act as one.
A further question addresses the effectiveness of UAV policy. It removes the threat to soldiers, but at what cost? Populations living in constant fear of death, coupled with civilian deaths through collateral damage or mistaken identity, erode faith in international law and act as a recruitment tool for those wishing to radicalize the very civilians the policy aims to save. This is counter-productive at best, and the definition of terrorism at worst.
Conclusion
It is unrealistic to discuss “banning” killer robots, as progress toward their adoption is ongoing and inevitable. Attempts should nevertheless be made to develop a legal and ethical framework to regulate the criteria for their deployment and the scope of their actions, and to enshrine in law who is ultimately responsible for those actions. If consensus could be achieved, it would reduce the chance of mistakes, oversteps and frivolous use on the battlefield. Ultimately, all new technologies are created by humans, for humans, so it is the methods that need to be scrutinized, not simply the means. A knife in the wrong hands can end a life, but wielded by a surgeon it can save one. Until AI becomes so advanced that it can self-replicate and set its own directives without any consideration for humans, this will remain the case. If that does occur, science fiction will have become fact, and humanity will have far more to fear.
Bibliography
Journals
Bethlehem, D. (n.d.). Principles relevant to the scope of a state’s right of self-defense against an imminent or actual armed attack by nonstate actors. The American Journal of International Law, Vol. 106. Available at: https://www.un.org/law/counsel/Bethlehem%20-%20Self- [Accessed 15 Feb. 2020].
Evangelista, M. (2016). Is War Too Easy? Perspectives on Politics, 14(1), 132–137. doi:10.1017/S1537592715003278 [Accessed 15 Feb. 2020].
Walsh, J. (2015). Political accountability and autonomous weapons. Research & Politics, [online] 2(4), p.205316801560674. Available at: https://journals.sagepub.com/doi/pdf/10.1177/2053168015606749. [Accessed 15 Feb. 2020].
Websites
Associated Press (2019). Major Saudi Arabia oil facilities hit by Houthi drone strikes. [online] The Guardian. Available at: https://www.theguardian.com/world/2019/sep/14/major-saudi-arabia-oil-facilities-hit-by-drone-strikes [Accessed 15 Feb. 2020].
Boulanin, V. and Verbruggen, M. (2017). Mapping the Development of Autonomy in Weapon Systems. [online] Sipri.org. Available at: https://www.sipri.org/sites/default/files/2017-11/siprireport_mapping_the_development_of_autonomy_in_weapon_systems_1117_1.pdf [Accessed 15 Feb. 2020].
HRW (2012). Losing Humanity | The Case against Killer Robots. [online] Human Rights Watch. Available at: https://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots [Accessed 15 Feb. 2020].
Lage Dyndal, G., Arne Berntsen, T. and Redse Johansen, S. (2017). Autonomous military drones: no longer science fiction. [online] NATO Review. Available at: https://www.nato.int/docu/review/articles/2017/07/28/autonomous-military-drones-no-longer-science-fiction/index.html [Accessed 15 Feb. 2020].
Haberman, M. (2020). Trump Threatens Iranian Cultural Sites, and Warns of Sanctions on Iraq. [online] Nytimes.com. Available at: https://www.nytimes.com/2020/01/05/us/politics/trump-iran-cultural-sites.html [Accessed 12 Feb. 2020].
Hao, K. (2019). This is how AI bias really happens—and why it’s so hard to fix. [online] MIT Technology Review. Available at: https://www.technologyreview.com/s/612876/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/ [Accessed 15 Feb. 2020].
Holley, P. (2018). Elon Musk’s nightmarish warning: AI could become ‘an immortal dictator from which we would never escape’. [online] Washington Post. Available at: https://www.washingtonpost.com/news/innovations/wp/2018/04/06/elon-musks-nightmarish-warning-ai-could-become-an-immortal-dictator-from-which-we-would-never-escape/ [Accessed 12 Feb. 2020].
Knight, W. (2019). Military artificial intelligence can be easily and dangerously fooled. [online] MIT Technology Review. Available at: https://www.technologyreview.com/s/614497/military-artificial-intelligence-can-be-easily-and-dangerously-fooled/ [Accessed 12 Feb. 2020].
Kravchenko, S. (2019). Future of genetically modified babies may lie in Putin’s hands. [online] Bloomberg.com. Available at: https://www.bloomberg.com/news/articles/2019-09-29/future-of-genetically-modified-babies-may-lie-in-putin-s-hands [Accessed 12 Feb. 2020].
McRae, M. (2017). Experts Think This Is How Long We Have Before AI Takes All of Our Jobs. [online] ScienceAlert. Available at: https://www.sciencealert.com/experts-think-this-is-how-long-we-have-before- ai-takes-all-of-our-jobs [Accessed 15 Feb. 2020].
Niruthan, N. (n.d.). Beyond Human: Rise of the Super-Soldiers – A Primer. [online] Small Wars Journal. Available at: https://smallwarsjournal.com/jrnl/art/beyond-human-rise-super-soldiers-primer [Accessed 12 Feb. 2020].
Sabbagh, D. (2020). Targeted killings via drone becoming 'normalised' – report. [online] The Guardian. Available at: https://www.theguardian.com/politics/2020/jan/19/military-drone-strikes-becoming-normalised-says-report [Accessed 12 Feb. 2020].
Serle, J. (2016). Obama drone casualty numbers a fraction of those recorded by the Bureau. [online] The Bureau of Investigative Journalism. Available at: https://www.thebureauinvestigates.com/stories/2016-07- 01/obama-drone-casualty-numbers-a-fraction-of-those-recorded-by-the-bureau [Accessed 15 Feb. 2020].
Tisdall, S. (2018). Banned 21 years ago, yet still the world stands by as chemical attacks go unchecked. [online] The Guardian. Available at: https://www.theguardian.com/world/2018/aug/11/chemical-weapons-syria-russian-us-sanctions [Accessed 15 Feb. 2020].
Tisdall, S. (2019). The nuclear arms race is back … and ever more dangerous now. [online] The Guardian. Available at: https://www.theguardian.com/world/2019/aug/17/nuclear-arms-race-is-back-and-more-dangerous-than-before [Accessed 12 Feb. 2020].
Videos
Putin, V. (2017). Russia: Putin warns a GM human could be ‘worse than nuclear bomb’. [video] Available at: https://www.youtube.com/watch?v=E6pD96RRIUQ [Accessed 12 Feb. 2020].