AI, arms control and the new cold war
16 Nov 2023

So far, the 2020s have been marked by tectonic shifts in both technology and international security. Russia’s attack on Ukraine in February 2022, which brought the post–Cold War era to a sudden and violent end, is an obvious inflection point. The recent escalation in the Middle East, which may yet lead to a regional war, is another. So too was the Covid-19 pandemic, from which the United States and China emerged bruised, distrustful and nearer to conflict than ever before, not least over the vexing issue of Taiwan, a stronghold of the world’s advanced semiconductor manufacturing.

Another, less dramatic but equally profound moment occurred on 7 October 2022, when US President Joe Biden’s administration quietly unveiled a new policy overseen by an obscure agency. On that day, the Bureau of Industry and Security (BIS) at the US Department of Commerce announced new export controls on advanced computing chips and semiconductor manufacturing items to the People’s Republic of China. Mostly unnoticed by those outside a few speciality areas, the policy was later described by some as ‘a new domain of non-proliferation’ or, less kindly, as an escalation in ‘an economic war against China’.

The BIS announcement came just months before the latest platforms of generative artificial intelligence, including GPT-4, burst onto the world stage. In essence, the White House’s initiative aimed to prevent China from acquiring the physical inputs needed to dominate the field of AI: the advanced computing chips, and the highly specialised equipment used to manufacture them, that remain mostly in Western and Taiwanese hands.

When coupled with an industrial policy aimed at rebuilding domestic US semiconductor manufacturing, and a strategy of ‘friend-shoring’ part of Taiwan’s chip industry to Arizona, the export controls amounted to a serious attempt to seize the ‘commanding heights’ of AI. In July this year, Beijing responded by restricting exports of gallium and germanium products, minor metals crucial to the semiconductor industry.

Designers of AI platforms have argued that novel large language models herald a new epoch. The next iterations of AI, GPT-5 and beyond, might usher in a future of ‘radical abundance’ that frees humanity from needless toil, but they could equally lead to widespread displacement and destruction should an uncontrollable ‘superintelligence’ emerge. While these scenarios remain hypothetical, it is highly likely that future AI-powered surveillance tools will help authoritarian governments cement control over their own populations and enable them to build new military–industrial capabilities.

However, these same AI designers also admit that current AI platforms pose serious risks to human security, especially when considered as adjuncts to chemical, biological, radiological, nuclear and high-yield explosive (CBRNE) weapons. We, the authors of this article, are currently investigating how policymakers intend to address this issue, which we refer to as ‘CBRNE+AI’.

This more proximate threat, the combination of AI and unconventional weapons, should oblige governments to find durable pathways to arms control in the age of AI. How to get there in such a fractious geopolitical environment remains uncertain. In his recent book, The coming wave, DeepMind co-founder Mustafa Suleyman looks to the 20th-century Cold War for inspiration. Nuclear arms control, and the lesser-known story of biological arms control, provide hopeful templates. Among Suleyman’s suggestions is the building of international alliances and regulatory authorities committed to controlling future AI models.

We recently suggested that the Australia Group, founded during the harrowing chemical warfare of the Iran–Iraq War, may be the right place to start building an architecture that can monitor the intersection of AI and unconventional weapons. Originally intended to obstruct the flow of precursor chemicals to a distant battlefield in the Middle East, the Australia Group has since expanded into a broad alliance of countries committed to harmonising the regulation of components used in chemical and biological weapons. To the group’s purview should be added the large language models and other AI tools that might be exploited as informational aids in the construction of new weapons.

Former US secretary of state Henry Kissinger recently called for Washington and Beijing to collaborate in establishing and leading a new regime of ‘AI arms control’. Kissinger and his co-author, Graham Allison, argue that both the US and China have an overriding interest in preventing the proliferation of AI models that could extinguish human prosperity or otherwise lead to global catastrophe. But the emerging dynamics of a new cold war pose a difficult question: can Washington realistically convince Beijing to help build a new architecture of non-proliferation while it simultaneously enforces a regime of counter-proliferation that specifically targets China? It seems an unlikely proposition.

This very dilemma could soon force policymakers to choose between two distinct strains of containment. The October 2022 export controls are containment in the original Cold War sense: they deny a near-peer competitor key technology in a strategic domain, in a vein similar to George Kennan’s vision of containing the Soviet Union. Suleyman, however, assigns a different meaning to containment: the task of controlling the dangers of AI to preserve global human security, in much the same way that biological, chemical and nuclear weapons are (usually) contained. For such an endeavour to work, China’s collaboration will be needed.

This week, US and Chinese leaders are attending the APEC summit in San Francisco. It is at this forum that Kissinger suggests they come together in a bid to establish a new AI arms control regime. Meanwhile, campaign season is heating up in Taiwan, whose citizens will soon vote in a hotly contested election under the gaze of an increasingly aggressive Beijing. More than a month has passed since Hamas opened a brutal new chapter in the Middle East, and the full-scale war in Ukraine is approaching the end of its second year.

Whatever happens in San Francisco could determine the shape of conflicts to come, and the weapons used in them. Hopefully, what emerges will be the outline of the first serious arms control regime in the age of generative AI, rather than the deepening fractures of a new cold war.