
Killer robots? Getting LAWS right

Posted on June 9, 2015

Percy the robot


Technology is marching steadily toward greater autonomy, a shift that will undoubtedly influence future weapon platforms. The notion of offensive use of lethal autonomous weapon systems (LAWS)—systems that can independently identify targets and take lethal action—has already stirred disquiet in the international community, even though no such capability yet exists. While discussion of the legal and ethical ramifications of LAWS is welcome and crucial, it often becomes tangled in the technicalities of autonomous systems and artificial intelligence (AI). The “killer robots” rhetoric risks stifling valuable technological advances that might deliver greater precision and discrimination.

Attention to LAWS skyrocketed in 2012 when Human Rights Watch released Losing humanity: the case against killer robots, which argued for a legal ban on the development of fully autonomous weapons and for a code of conduct governing R&D of autonomous robotic weapons. The report spurred the 2014 Meeting of Experts under the UN Convention on Certain Conventional Weapons (CCW), which convened again last month.

There’s also concern about LAWS closer to home. At a Senate Committee hearing last month on the use of unmanned platforms by the Australian Defence Force, witnesses from the Red Cross raised concerns about the development of fully autonomous systems and the capacity of those systems to discriminate between lawful targets and civilians. (You can read the testimonies to the committee here (PDF), including my contribution with Andrew Davies.)

It’s a great sign that the CCW and other bodies are anticipating the challenges posed by LAWS. The US stirred up serious consternation when it first deployed Predators armed with Hellfire missiles after 9/11, but there were no meetings of experts or inquiries beforehand. A decade on from the first lethal drone strikes, concerns about lethal unmanned aerial vehicles persist despite a broad consensus that the technology doesn’t contravene international humanitarian law. But a bad reputation is hard to shake, and LAWS have already been saddled with the “killer robot” label. That provocative branding has at least started an important conversation about how comfortable the world is with autonomous targeting.

But budding discussions on the potential legal and normative challenges of LAWS don’t clearly define what LAWS actually are—the UN is still without an official definition. That creates confusion over whether to include capabilities such as missile defence systems that autonomously identify and destroy incoming missiles and rockets. There’s also a complex and evolving spectrum of technological autonomy to take into account. At one end sits technology already in use today with autonomous functions, such as missile defence systems. At the other end are systems with advanced ‘reasoning’ and adaptive problem-solving skills, which would be more accurately described as artificially intelligent than autonomous. Systems with human-like reasoning skills don’t yet exist, but they’re certainly on the agenda of research groups like DARPA.

Confusion on this subject stems in large part from the novelty of autonomous systems and AI. While we’re only in the early stages of development, general unease is reflected in the blanket ban proposed by Human Rights Watch and in other initiatives like the Campaign to Stop Killer Robots. These groups assume that LAWS would undermine international humanitarian law (IHL) and threaten the status of civilians in warfare because such systems would lack human judgement and decision-making. But nothing in IHL currently states that only a human can make lethal decisions, and there’s no reason to suggest that those systems won’t eventually be capable of distinguishing between civilians and lawful targets at least as well as humans can.

As Kenneth Anderson and Matthew Waxman have argued, the LAWS of the future might actually be more discriminate and proportionate weapons. The processing speed possible for LAWS and their ability to remain on station for extended periods without interruption could greatly enhance battlefield awareness—‘dumb’ drones already provide some of those benefits. Removing human emotions, which can cloud decision-making, could also result in fewer civilian casualties. A ban on R&D would suppress potentially ground-breaking developments.

There are many unknowns surrounding the future of autonomous systems and AI. The technology has a long way to go before we can field a system capable of decision-making, reasoning and problem-solving in a complex environment on par with a highly trained soldier. There’s also no guarantee that science will ever deliver that level of AI. As Chris Jenks noted in his recent lecture on autonomous systems at the ANU, humans are tremendously poor predictors of the future, especially when it comes to technology.

For now, the international community should work to develop an accepted definition of LAWS. It needs to be flexible enough to account for the many unknowns, and capable of evolving to match the development of autonomous systems and AI. Establishing a definition will be challenging, but it’s needed to advance the important dialogue around the laws and norms governing the potential offensive use of LAWS. The use of inflammatory labels like “killer robots” should be discouraged—they serve only to spread falsehoods and engender confusion about LAWS.



URL to article: https://www.aspistrategist.org.au/killer-robots-getting-laws-right/

[1] Image: http://www.aspistrategist.org.au/wp-content/uploads/2015/06/3492854128_cbe45601ca_z.jpg

[2] independently identify: http://www.css.ethz.ch/publications/pdfs/CSSAnalyse164-EN.pdf

[3] Losing humanity: the case against killer robots: http://www.hrw.org/reports/2012/11/19/losing-humanity

[4] Meeting of Experts: http://www.unog.ch/80256EE600585943/(httpPages)/8FA3C2562A60FF81C1257CE600393DF6?OpenDocument

[5] Senate Committee hearing: http://www.aph.gov.au/Parliamentary_Business/Committees/Senate/Foreign_Affairs_Defence_and_Trade/Defence_Unmanned_Platform

[6] here: http://parlinfo.aph.gov.au/parlInfo/download/committees/commsen/139f0afd-7bfa-47d1-bc67-8ee6c8c1fde6/toc_pdf/Foreign%20Affairs,%20Defence%20and%20Trade%20References%20Committee_2015_04_14_3385_Official.pdf;fileType=application%2Fpdf#search=%22committees/commsen/139f0afd-7bfa-47d1-bc67-8ee6c8c1fde6/0000%22

[7] on the agenda: http://www.artificialbrains.com/darpa-synapse-program

[8] Campaign to Stop Killer Robots: http://www.stopkillerrobots.org/learn/

[9] argued: http://www.unog.ch/80256EDD006B8954/(httpAssets)/702327CF5F68E71DC1257CC2004245BE/$file/LawandEthicsforAutonomousWeaponSystems_Whyabanwontworkandhowthelawsofwarcan_Waxman+anderson.pdf

[10] recent lecture on autonomous systems at the ANU: https://law.anu.edu.au/events/anu-college-law/crossing-rubicon-path-offensive-autonomous-weapons