The battlefield is no place for soldiers
21 May 2019

The armistice that ended World War I was signed at around 5 am on 11 November 1918 but didn’t come into force until 11 am. In those six hours, when both sides knew that the war was over, there were another 10,944 casualties; 2,738 of those soldiers died. That war produced some 40 million casualties, and a mere 21 years later World War II began and took at least another 60 million lives. The rate at which soldiers died in these industrial-era wars wasn’t unusual: throughout ancient and medieval history, tens of thousands of soldiers could die in a single battle over the course of an afternoon.

Major war lies outside the collective recollection of Western societies, and of their militaries, because we haven’t fought one in living memory. This means we’ve forgotten the absurd and terrible cost that war imposes on soldiers. That cost is borne disproportionately by combat soldiers, who (as any infantryman in history could confirm) perform repetitive and tedious tasks under tremendous physical and psychological stress, in circumstances of extreme physical and moral danger. Those who enjoy soldiering enjoy it despite what most of soldiering is, for the rare snatches of power, excitement and adventure.

As our society’s proximity to war has decreased, we have become ever more reliant on the arts to inform us of its nature. Films, video games and even books written by soldiers tend to glamorise soldiering and gloss over its complexities, hardships and tedium, because doing so makes for more accessible, interesting and engaging art. This has granted the armed forces, particularly the combat arms, a mystique and prestige that mask a simple and obvious fact: no occupation as painful, tedious and dangerous as soldiering should be done by humans if there is a reasonable alternative.

Until recently there was no prospect of removing humans from battle. We had no way to automate any aspect of human decision-making, and so we could not develop autonomous weapons able to meet the entirely reasonable requirements that the laws of armed conflict set out for the use of military force: proportionality, distinction and military necessity. But we are now on the cusp of producing artificial minds that not only meet those requirements to the standard a human soldier or officer can, but exceed them in accuracy and consistency, because humans are very, very bad at acting ethically in war.

In World War II, it was not AI but humans who committed war crimes and massacres at Le Paradis, Wormhoudt, the Abbaye d’Ardenne, Malmedy, Gardelegen, Marzabotto, Sant’Anna di Stazzema, Kefalonia, Oradour-sur-Glane, Lidice, Kalavryta, Distomo, Kragujevac, Warsaw, Vinkt, Heusden, Nanking, Hong Kong, Banka Island, Bataan, Parit Sulong, Laha, Palawan, Changjiao, Manila, Wake Island, Sandakan, Katyn, Nemmersdorf, Treuenbrietzen, Przyszowice, Biscari, Salina and Friesoythe, and almost certainly at numerous other locations that history did not record, perpetrated by forces on every side, including some we would rather not have to name.

It is not AI but humans who, with a regularity bordering on routine, make errors of judgement in war that kill allies and innocents in numbers that have often exceeded enemy combatant deaths.

Not only are we near the point of developing artificial minds that will act more ethically than humans; we are on the cusp of creating minds that will almost certainly make and execute better tactical decisions, more quickly and with fewer errors, than any human could match.

One day soon it will be as suicidal to send human riflemen to fight drone riflemen as cavalry charges against massed machine guns and armour were in the world wars, or as pointless as pitting our best human grandmasters against our best chess engines today. Eliminating human combatants in the air and sea domains will probably be possible sooner than in the cluttered land domain, but all three will almost certainly be possible within the lifetime of everyone reading this, and probably sooner than most of us think.

It is fashionable to call for international treaties to regulate the use of AI for military purposes in order to mitigate potentially existential risks. If such regulation comes, it must be designed in ways that recognise the nearly absolute moral and practical imperatives to remove frail and suffering humans, their slow and flawed decisions, and their terrible ethical record from future battlefields. Our efforts to regulate autonomous weapons must be aimed at ensuring that their artificial minds are more ethical and less capable of atrocity than the humans they will replace, not at preventing them from replacing humans.

War is an inherently human activity, but there’s no good reason for combat to remain so indefinitely. After all, we’re only human.