
Regulating autonomous weapons

Posted on November 16, 2017 @ 11:04

This week, the UN is meeting for a fourth time to discuss how ‘lethal autonomous weapons systems’ (LAWS) should be governed within the Convention on Certain Conventional Weapons (CCW). NGOs such as the Campaign to Stop Killer Robots [1] have continued to press for a ban on such weapons. NATO-aligned powers such as the UK and Australia have resisted a ban as premature, given the lack of agreed definitions, and many nations have repeated requests to know exactly what they’re being asked to ban.

Given the absence of agreed definitions, the Dutch, in their working paper [2], suggested that people propose working definitions. Here are mine. First, I agree with the definition of ‘autonomy’ that George Bekey offers in his book Autonomous robots [3]: ‘Autonomy refers to systems capable of operating in a real-world environment without any form of external control for an extended period of time.’

Second, the ‘critical functions’ of LAWS are defined as the three components of lethal targeting—defining a target class, selecting a target and engaging a target. Other functions such as take-off and landing, as well as navigation, may be autonomous, but no one argues that autonomy in those functions should be banned.

Given those working definitions, a LAWS is ‘fully autonomous’ if it can define, select and engage its targets with no external control. The ‘meaningful human control’ of LAWS that’s frequently called for should be interpreted to entail the involvement of a human operator. There are debates as to whether that requires a human being to be in the loop, on the loop or off the loop, and indeed how wide the loop should be.
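To make those working definitions concrete, here’s a minimal sketch in Python. It’s purely illustrative: the type names and the structure are mine, not anything agreed at the CCW.

    from dataclasses import dataclass
    from enum import Enum
    from fractions import Fraction

    class Controller(Enum):
        HUMAN = "human"
        MACHINE = "machine"

    @dataclass
    class WeaponSystem:
        name: str
        define: Controller   # who defines the target class
        select: Controller   # who selects a target
        engage: Controller   # who engages a target

        def autonomy(self) -> Fraction:
            # fraction of the three critical functions performed without external control
            functions = (self.define, self.select, self.engage)
            machine = sum(1 for f in functions if f is Controller.MACHINE)
            return Fraction(machine, len(functions))

        def is_fully_autonomous(self) -> bool:
            # fully autonomous: defines, selects and engages with no external control
            return self.autonomy() == 1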

Patriot anti-missile systems aren’t fully autonomous. Human programmers define their targeting criteria. Patriot systems can select targets autonomously, but by design require a human to press a ‘Confirm’ button before the system will engage. Patriot systems are thus one-third autonomous according to my working definitions.

For the purposes of drafting a ban [4], this ‘human in the loop’ line is very clear. However, it has the problem of ruling out many weapons already in use.

Phalanx and C-RAM anti-artillery systems aren’t fully autonomous. They don’t define their own targeting criteria; human programmers do. However, once activated they can select and engage targets without human intervention. Humans can deactivate the system and hit an ‘Abort’ button to stop it firing, but if they do nothing the system is effectively two-thirds autonomous. That ‘human on the loop’ architecture gives the option—but not the guarantee—of ‘meaningful human control’, because the humans monitoring the system may be distracted, killed at their posts or flooded by data and unable to make sound decisions in the available time. In all of those situations, they wouldn’t be able to stop the system.

Existing naval mines and anti-tank mines aren’t fully autonomous. They don’t define what to attack: human programmers define the signatures (acoustic, magnetic or pressure) to which the systems respond. However, once deployed, the weapons select and engage targets with no human operator involved. Some say that makes such systems ‘fully autonomous’. On my working definitions, however, they are two-thirds autonomous.

The fictional Skynet in the Terminator films is fully autonomous. According to the apocalyptic story, Skynet decides to target humanity shortly after it becomes ‘self-aware’, and it does so without any human direction. It defines, selects and engages its targets with no human in or on the wider loop.
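On that sketch, the systems discussed above classify as follows. The code simply restates the judgements already made in the text:

    patriot = WeaponSystem("Patriot", define=Controller.HUMAN,
                           select=Controller.MACHINE, engage=Controller.HUMAN)
    phalanx = WeaponSystem("Phalanx/C-RAM", define=Controller.HUMAN,
                           select=Controller.MACHINE, engage=Controller.MACHINE)
    mine = WeaponSystem("Naval or anti-tank mine", define=Controller.HUMAN,
                        select=Controller.MACHINE, engage=Controller.MACHINE)
    skynet = WeaponSystem("Skynet", define=Controller.MACHINE,
                          select=Controller.MACHINE, engage=Controller.MACHINE)

    for ws in (patriot, phalanx, mine, skynet):
        print(ws.name, ws.autonomy(), ws.is_fully_autonomous())
    # Patriot 1/3 False
    # Phalanx/C-RAM 2/3 False
    # Naval or anti-tank mine 2/3 False
    # Skynet 1 True

Note that the fraction alone doesn’t distinguish Phalanx, where a human remains on the loop with an ‘Abort’ option, from a deployed mine, where no one does; a fuller model would also record the loop position.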

The Dutch argue that meaningful human control can be exercised within ‘the wider loop’—that is, over the first critical function, defining the target class, in my working definitions. They claim that a LAWS can be under meaningful human control if humans retain control of the system’s targeting criteria. Assuming that the machine is well designed and tested, on many missions it could then be trusted to select and engage without real-time human monitoring. There would be no ‘Confirm’ or ‘Abort’ buttons. Once activated, the LAWS would select and engage its targets with two-thirds autonomy.

Others argue that such a ‘set and forget’ approach falls outside the definition of meaningful human control.
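The disagreement can be put precisely. Continuing the illustrative sketch, the two readings of ‘meaningful human control’ are simply different tests over the same structure:

    def in_the_loop_control(ws: WeaponSystem) -> bool:
        # strict reading: a human must confirm each engagement in real time
        return ws.engage is Controller.HUMAN

    def wider_loop_control(ws: WeaponSystem) -> bool:
        # Dutch reading: human control of the targeting criteria suffices
        return ws.define is Controller.HUMAN

Patriot passes both tests; Phalanx, C-RAM and the mines pass only the wider-loop test; Skynet fails both.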

So, which definition of meaningful human control should be adopted?

Mandating a human in the loop would effectively ban systems like Phalanx and C-RAM. It would also ban long-existing and widely deployed systems like naval and anti-tank mines. One could bite the bullet and accept that ‘meaningful human control’ demands that those types of systems be withdrawn. Alternatively, one could exempt such ‘defensive’ systems from the ban.

An alternative is to agree that meaningful human control can be exercised in the wider loop, and to mandate that at least one human be involved in the wider loop. That would permit systems to autonomously select and engage targets without the delays caused by human cognition. As that would confer strategic advantage on technologically advanced states, they’ll be more likely to support it. Opposition from the AI community [5] will be brushed off.

In practice, however, it would be prudent to have more than one human in the wider loop. If operational tempo permits, having two or three humans in the loop—one who defines the targeting criteria, one who confirms a decision to engage, and perhaps one who confirms a target selection—is preferable to having only one person involved. Human–robot teams are less likely to make egregious targeting errors than purely robot or purely human teams.



Article printed from The Strategist: https://www.aspistrategist.org.au

URL to article: https://www.aspistrategist.org.au/regulating-autonomous-weapons/

URLs in this post:

[1] Campaign to Stop Killer Robots: http://www.stopkillerrobots.org

[2] working paper: http://undocs.org/ccw/gge.1/2017/WP.2

[3] Autonomous robots: https://mitpress.mit.edu/sites/default/files/titles/content/9780262025782_sch_0001.pdf

[4] drafting a ban: http://fbot.nz/ccw-protocol-vi

[5] AI community: https://www.cse.unsw.edu.au/~tw/letter.pdf

Copyright © 2024 The Strategist. All rights reserved.