Weapons have had some capacity for decision-making since World War II, but artificial intelligence is now taking that capability to vastly greater levels of sophistication and, before long, prevalence.
So far, this is looking highly problematic. Are AI systems accurate enough to be making decisions of life or death? They are not always right, and their accuracy varies with circumstances.
The United States has kicked off moves towards regulation, with what is called the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. Australia has joined the declaration, too. But the world is still far from achieving any kind of international agreement.
The issue appears all the more pertinent now, after news in April that AI had been used in combat in Gaza. Israel applied an AI system called Lavender to identify targets.
AI has many branches with different capabilities. Not all of them require data for learning, but the kind of AI system we hear about these days mostly does. Such systems learn to identify patterns. Because they work fast and, for civilian applications, with generally satisfactory accuracy, we are moving towards using them in many different ways.
How do AI systems work? Developers feed them large sets of data from which they learn. The systems are then tested on data they have not seen before, to gauge how they would react to a real situation; the better they perform on those tests, the higher their measured accuracy.
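For readers who want to see what that workflow looks like in practice, here is a minimal sketch in Python using the scikit-learn library. The synthetic data set, the choice of model and the resulting accuracy figure are all invented for illustration; no real application or military data is involved.

```python
# A minimal sketch of the train-then-test workflow described above.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the data developers would collect and label.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Hold back data the system has never seen, to test how it would
# react to a "real" situation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)                                  # learning phase
test_accuracy = accuracy_score(y_test, model.predict(X_test))  # testing phase
print(f"Accuracy on unseen data: {test_accuracy:.1%}")
```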
To take a military example, suppose we want to develop an AI system to find and identify enemy soldiers. At first, we train it by showing it many cases that contain enemy soldiers. If the system is designed to work with video images, those cases will be videos, though it may also be given other information, such as the time of day. Training helps the system identify the patterns that pick out the soldiers.
The system might learn to associate enemy soldiers with such characteristics as their uniforms, the long objects they often carry (rifles), their movement patterns, or their locations with respect to other identified objects. It does this with a series of mathematical formulas, and it improves itself by progressively minimising errors.
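The phrase "progressively minimising errors" can be made concrete with a toy example. The sketch below trains a simple logistic-regression classifier by gradient descent on made-up feature scores; the features, numbers and labels are all hypothetical, chosen only to show the error shrinking step by step.

```python
# Toy illustration of learning by progressively minimising errors.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # 3 invented features per image
true_w = np.array([2.0, -1.0, 0.5])            # hidden pattern the data follows
y = (X @ true_w + rng.normal(scale=0.5, size=500) > 0).astype(float)

w = np.zeros(3)                                # the model starts knowing nothing
for step in range(200):
    p = 1 / (1 + np.exp(-(X @ w)))             # predicted probability "soldier"
    grad = X.T @ (p - y) / len(y)              # direction that reduces error
    w -= 0.5 * grad                            # small correction each iteration
    if step % 50 == 0:
        loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
        print(f"step {step:3d}  error {loss:.3f}")   # error falls over time
```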
How far this is from the simple capabilities of early decision-making weapons such as homing torpedoes, which automatically changed course to follow the noise of their targets.
After an AI system has been put through many training iterations and has achieved acceptable performance in training assessments, we start rigorous testing.
In the example of a system that looks for soldiers, the testing would involve showing it a data set that includes what we know to be soldiers of a particular country. The accuracy we can expect differs from system to system; there is no standard number. It depends on how good the training and testing data are, how well we have designed and trained the system and, crucially for combat applications, the difficulty of the task. Military situations can be extremely complicated.
In civilian applications, a good AI expert might get 95 percent test accuracy, whereas a mediocre one could get 70 percent from the same training data.
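A back-of-the-envelope calculation shows what such figures mean in raw numbers. The quantities below are purely hypothetical, but they illustrate how quickly a 5 percent error rate adds up.

```python
# Hypothetical illustration of what "95 percent accuracy" can mean in practice.
decisions_per_day = 1_000
accuracy = 0.95
expected_errors = decisions_per_day * (1 - accuracy)
print(f"Roughly {expected_errors:.0f} wrong calls out of "
      f"{decisions_per_day} decisions at {accuracy:.0%} accuracy")
```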
But is 95 percent good enough for life-or-death decisions? What if the system mistakenly identifies a group of civilians as soldiers?
That could happen. The civilians' clothing may be similar enough in colour to the khaki gear that appeared in the training sets, or it could have features, such as stripes, that the AI had learned to treat as highly suggestive. To the AI, digging a farm drainage ditch may look much like digging a trench.
Looking for soldiers is just one potential military application of AI. Others may be even more complicated.
Consider, too, that in war the enemy is trying to confound your decision-making, whereas in most civilian applications AI is dealing with something that isn’t actively working against it.
Also, an enemy will change its behaviour in war as it gains experience. AI may have learned in peacetime to look out for some feature or behaviour that the enemy discards after a few weeks of combat.
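This failure mode, too, can be sketched with a toy example. Suppose a classifier has learned to lean heavily on one tell-tale cue, such as a rifle carried openly, and the enemy then stops providing that cue. The data, feature names and numbers below are invented; the point is only to show how sharply accuracy can fall once a learned pattern stops holding.

```python
# Sketch of the "enemy changes behaviour" problem: a classifier that leans
# on one tell-tale feature collapses when that feature disappears.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
carries_rifle_openly = rng.integers(0, 2, n)          # the learned "tell"
other_cue = rng.normal(size=n)                        # a weaker secondary cue
is_soldier = ((carries_rifle_openly == 1) | (other_cue > 1.5)).astype(int)

X_before = np.column_stack([carries_rifle_openly, other_cue])
model = LogisticRegression(max_iter=1000).fit(X_before, is_soldier)
print("accuracy with the old behaviour:", model.score(X_before, is_soldier))

# After a few weeks of combat the enemy conceals rifles; who is a soldier
# has not changed, but the cue the model relied on is gone.
X_after = np.column_stack([np.zeros(n), other_cue])
print("accuracy after behaviour change:", model.score(X_after, is_soldier))
```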
There is no doubt that AI will become increasingly prominent in war. But policymakers need to be aware of its shortcomings as well as its capabilities.