Should Australia be LAWSless?
29 Jun 2021

Australia has traditionally been a norm-maker when it comes to arms control. A quick click through the pages on non-proliferation, disarmament and arms control on the Department of Foreign Affairs and Trade website reveals a country with a strong commitment to responsible, lawful controls on weapons ranging from small arms to nuclear bombs. Australia even took the lead in negotiating the final text of the Arms Trade Treaty, a breakthrough in conventional arms control, at least in theory. The treaty entered into force during the Arab uprisings and the rapid rise of Islamic State, and remains disappointingly aspirational rather than best practice.

Nonetheless, this active commitment to arms control is a great example of how Australia can build its reputation and influence in multilateral institutions in order to shape the norms and standards that help make our strategic environment safer and make catastrophic conflict less likely.

But emerging technologies like robotics and artificial intelligence are set to complicate arms control even further, and the international community is attempting to wrap its collective head around the implications of lethal autonomous weapons systems (LAWS).

Australia has an opportunity to play an integral role in putting controls on a set of technologies that could have devastating effects on global and national security. As the international environment becomes more adversarial, these norm-shaping skills will become critical for Australia.

But Australia’s current position actually embraces LAWS development. It argues that a treaty on LAWS development and deployment is premature because there’s no agreement on the definition of a LAWS. It asserts a core interest in developing AI technologies because of their potential for improving safety and reducing risks, and in fielding defensive autonomous systems.

This position mirrors that of some of Australia’s allies. The US also contends that it is ‘premature’ to support a pre-emptive ban, and that it may be ‘compelled’ to develop fully autonomous weapons. The UK recently confirmed a similar position, stating that a ban could be ‘counterproductive’.

While Canada has hedged its position, New Zealand has recently clarified its support for an outright ban. In May, New Zealand’s minister for disarmament and arms control, Phil Twyford, said that the development and deployment of fully autonomous weapons creates ‘the potential for a continuous global battlefield’. He said he was ‘committed to building an alliance of countries working towards an international and legally binding instrument prohibiting and regulating unacceptable autonomous weapons systems’.

New Zealand joins 30 other states in supporting a ban. These states, most of which are in the diplomatic grouping known as the Non-Aligned Movement, have little chance of joining the lethal autonomy race and recognise that these new weapons will change the character of warfare to their likely disadvantage.

The systems in development promise to be faster than humans, be scalable at minimal cost and reduce risks in the war zone. However, they can also make mistakes and make escalation more likely. The critical question for many is whether a machine should be allowed to make life-and-death decisions.

Australia announced its official position on LAWS at a roundtable on the issue held at the ANU School of Law in March. Conducted under the Chatham House Rule, it provided a forum for freely discussing the challenges and opportunities that LAWS present for Australia.

Participants included academics, lawyers, political scientists, technologists and representatives from the Department of Defence and the private sector. Both serving and retired officers of the Australian Defence Force were present, as well as a former secretary of defence. It was the first event of its kind in Australia to address the technological, legal and ethical dimensions of LAWS.

A chair’s summary detailing the main topics addressed and outlining the key points of the day’s discussion was circulated to all participants in May. A version of this summary was also published in the ANU Journal of Law and Technology.

The summary identified four main themes in the discussions: Australia’s current position; definitions; international law and norms; and development, deployment and personnel. Reference was made to the ADF’s Concept for robotic and autonomous systems, which categorises military AI technologies on a spectrum running from remotely operated systems through automatic and autonomic systems to fully autonomous systems.

It is the development of fully autonomous systems that some scholars and civil society groups, as well as the UN secretary-general, are concerned about. The removal of human control over a final decision means that a machine will be deciding about the use of lethal force. As South African legal scholar Christof Heyns put it, ‘While earlier revolutions in military affairs gave the warrior control over ever-more-powerful weapons, autonomous weapons have the potential to bring about a change in the identity of the decision-maker. The weapon may now become the warrior.’

The lack of agreement over the meanings of autonomy and automation was considered an obstacle to making progress. Civil society groups refer to ‘meaningful human control’ over LAWS; however, it’s not clear what ‘meaningful’ means in practice. Lethality was also raised, for while the term is commonly used in international forums, its relevance and centrality remain under discussion. The International Committee of the Red Cross, for example, doesn’t include the term ‘lethal’ in its position on LAWS. For some, whether a weapons system is lethal to humans is only one consideration.

Also discussed were the desirability and possibility of predictability, both of the systems themselves and in combat, including how machines fail, how they can be fooled, and how predictability in combat makes defending against them easier.

But countries should also consider the growing public alarm about autonomous weapons systems, and the effect of this alarm on national resilience and social cohesion. For example, one participant noted that thousands of technologists and others, including Elon Musk, Stephen Hawking and Jack Dorsey, have signed an open letter which states, ‘[A] military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.’

The discussion also touched on the question of whether giving machines the power to decide to take a human life would violate the core principles of proportionality, distinction, targeting and human dignity set out in international humanitarian law. The need for new black-letter law and norms of behaviour was discussed at length. For example, can autonomous lethality be dealt with under the framework of the Convention on Conventional Weapons, which includes 11 guiding principles?

It was also noted that the Martens clause, which, in the absence of treaty protection, protects people who find themselves in battle zones under the principles of humanity and ‘the dictates of human conscience’, may negate the need for new laws.

But in May, in an important development since the roundtable was convened, the ICRC updated its position. It now recommends that states adopt new legally binding rules on LAWS.

On the issue of compliance, parallels were drawn with the global normative acceptance by states of the Comprehensive Nuclear-Test-Ban Treaty and general consensus on non-deployment of biological and chemical weapons.

Participants also noted precedents for pre-emptive bans on exploding bullets and blinding lasers, as well as the civil society campaigns that led to bans on anti-personnel landmines and cluster munitions—many of which now also make up the Campaign to Stop Killer Robots. No clear agreement was reached, though the remarkably civil discussions found more commonality than anticipated.

In reflecting on the roundtable, if Australia wants to develop defensive LAWS capabilities, what might this look like in practice? Autonomous sentries across our northern borders replacing NORFORCE? Could Australia station LAWS with the expansion of the US presence in the Northern Territory? Might we deploy loitering and suicide munitions using dubious face recognition in new expeditionary conflict zones?

These sentry and loitering capabilities are already in use in the Middle East, developed and deployed by Israel and Turkey. A report emerged late last month that a Turkish weaponised drone seemingly made its own decision to target a human. Yet even if their tactical and operational use keeps a human in the final decision to fire, and arguably accountable, the contribution of new technologies to strategic success needs to be considered carefully; what will be the real legacy of almost two decades of drone strikes in Afghanistan?

Political and civil-society support for an outright ban is likely to grow, especially if levels of trust in governments continue to be low. Australia needs to think about whether it wants to encourage a LAWS arms race, given questions over its ability to compete with the scale of systems being developed by potential adversaries. The ANU LAWS roundtable was a good start, but there’s a long way to go before Australia fully understands the implications of its current position.