Modernising defence must respect international humanitarian law

The integration of artificial intelligence and autonomous systems is essential to ensuring that Australia can defend its interests now and into the future, but as a country we must be careful not to inadvertently abandon our humanity in the race to modernise.

To guard against this, Australia needs better safeguards to ensure compliance with international humanitarian law as we develop new defence capabilities through emerging technologies.

Some in the technology industry have previously encouraged a ‘move fast and break things’ approach, emphasising speed and innovation above all else. This approach has seen human decision-making and intelligence increasingly replaced by algorithms and autonomous systems.

There are particular risks when this occurs in the military context. Humans must remain at the centre of our defence capabilities, particularly when strategic decisions are, quite literally, matters of life and death. Technologies that take human decision-makers ‘out of the loop’ raise important questions about how these decisions are made, and who is accountable for them.

Lethal autonomous weapons systems (LAWS) can generally be understood as weapons that independently select and attack targets without human supervision.

The idea of drones patrolling the skies, identifying and executing targets, reads like science fiction, but as early as 2003 the US predicted that AI and facial recognition technologies (FRT) would be used, with limited human supervision, to execute lethal attacks during military operations.

One of the first reported uses of LAWS came in 2020, during Libya’s civil war, when forces backed by the government in Tripoli were believed to have used STM Kargu-2 drones to attack retreating enemy soldiers, according to a United Nations report. Since then, LAWS have been used in the Ukraine-Russia war, and Russia is known to be developing a nuclear-capable LAWS called ‘Poseidon’.

Understandably, many people oppose the idea of machines making life-and-death decisions.

As part of our recent evidence to an inquiry by federal Parliament’s Joint Standing Committee on Foreign Affairs, Defence and Trade (JSCFADT) into the modernisation of Australia’s defence capabilities, we at the AHRC dedicated our entire 38-page submission to the human rights and international humanitarian law concerns raised by Australia’s current approach to LAWS.

A key concern is that LAWS are incompatible with the jus in bello principles of proportionality and distinction.

Under the Geneva Conventions and their Additional Protocols, attacks must be proportionate: the expected harm caused by an attack must not be excessive in relation to the anticipated military advantage. This is an inherently subjective, difficult and imprecise analysis, even for humans, because it requires a weighing of the value of a human life.

To allow an algorithm to conduct this weighing exercise has been described by United Nations Secretary-General António Guterres as ‘politically unacceptable and morally repugnant’.

Equally important is the principle of distinction, which prohibits the targeting of civilians and the use of indiscriminate attacks, as combatants must seek to minimise the impact of conflict on civilians.

While AI and FRT are becoming more advanced every day, they are currently incapable of determining whether a person is hors de combat (which means they have surrendered or are so badly wounded that they are incapable of defending themselves). That type of contextual interpretation is simply beyond the capabilities of these technologies.

This could result in combatants who are hors de combat being killed by a LAWS that cannot make this distinction. The problem is further complicated by the rise in ‘grey zone’ and irregular conflict, in which combatants are not easily identifiable or easily distinguished from civilians.

Despite these significant risks, Australia currently has insufficient safeguards in place to ensure compliance with international humanitarian law as the ADF continues to evolve its capabilities by integrating emerging technologies.

The Department of Defence, in its own submission to the JSCFADT, noted that under Article 36 of Additional Protocol I to the Geneva Conventions, new weaponry must undergo a legal review.

The Australian Government’s position appears to be that these reviews are sufficient to comply with international humanitarian law. But Article 36 reviews have been widely criticised across the globe for their inflexibility, lack of accountability and lack of compliance mechanisms. These concerns will be exacerbated if LAWS are, as expected, able to ‘learn’ from new data and missions. This could lead to an Article 36 review approving a technology that then operates differently after being deployed, rendering the earlier review redundant.

Article 36 reviews are a necessary, but not a sufficient, safeguard with respect to LAWS.

It is due to these insufficiencies that many groups, including the AHRC, are calling for the regulation of LAWS.

There has recently been positive movement on this front, with Australia voting in favour of a UN resolution that stressed the ‘urgent need for the international community to address the challenges and concerns raised by autonomous weapons systems’.

However, more must be done.

Australia should reconsider its 2018 position that it is premature to regulate LAWS. Some military technologies, such as landmines and cluster munitions, have been recognised as posing too great a threat to human rights and international humanitarian law, and as therefore requiring regulation. It is our view that LAWS fall into the same category.

As Australia seeks to improve its defence capabilities, we must ensure that the swift pace of modernisation does not result in human rights and international humanitarian law being left behind.