Governing AI in the global disorder
2 Apr 2024

It’s a truth universally acknowledged that finding consensus on anything in the international system is difficult at the best of times, let alone in this age of geopolitical fracture, ideological contest and ‘permacrisis.’

Yet the United Nations General Assembly took a historic step in March by unanimously adopting the first-ever UN resolution on artificial intelligence.

Proposed by the United States and co-sponsored by more than 120 nations, including China, the resolution focused on AI safety and the development of ‘safe, secure and trustworthy’ AI in line with the UN Charter and the Universal Declaration of Human Rights. The resolution reportedly took months of US diplomacy and, while not legally binding, represents a crucial first step towards global cooperation on responsible AI development.

Indeed, in response to rapid recent advances in the technology, AI is now at the top of the UN agenda.

Last year, UN Secretary-General António Guterres convened a new High-Level Advisory Body on AI to provide urgent recommendations on international AI governance. The body’s work will feed into negotiations on the UN Pact for the Future and the accompanying Global Digital Compact, which will be announced at the UN’s Summit of the Future in September this year. Together, these will set out the international community’s approach to the challenges arising from AI and other digital technologies.

Last July, the UN Security Council also held its first formal meeting on AI to discuss its implications for international peace and security. Guterres has backed calls from some countries and tech figures to establish a global AI treaty or a new UN body to govern AI. The Secretary-General has also encouraged nations to engage in multilateral processes on the military applications of AI and to agree on global frameworks for its governance.

This momentum builds on a number of UN processes and forums that have been considering how best to govern and regulate AI since as far back as 2013.

Yet multilateralism has been in crisis for many years now, and even more so as the world becomes dangerously unstable and increasingly fragmented. With AI increasingly affecting our economies, societies, communications and security, debates over its governance go to the heart of the ideological competition that is reshaping the global order.

To get around this growing fragmentation among nations, and the UN’s difficulty in establishing quick and effective governance mechanisms even at the best of times, minilateral and other initiatives on AI are proliferating as nations race to ensure that the rules reflect their own values and interests.

Democracies are particularly keen to set the rules for AI. The UK’s AI Safety Summit in November was the first global initiative to bring together governments, leading AI companies, civil society groups and research experts to deliberate on the risks and potential benefits of AI. One of the summit’s noteworthy outcomes was the Bletchley Declaration, a joint statement endorsed by 28 signatories, including the United States, the United Kingdom, India, the European Union and even China. The declaration affirmed AI developers’ responsibility for ensuring the safety and security of their systems, committed its signatories to international cooperation in AI safety research, and called for the establishment of common principles for AI development and deployment. Follow-up summits will be held in South Korea and France later this year.

This builds on other work that democracies are doing to get out in front and shape global AI governance. G7 leaders released the International Guiding Principles on Artificial Intelligence and a voluntary Code of Conduct for AI developers in October, marking the culmination of the G7 Hiroshima AI Process. The Quad released its own principles on AI in 2024. Meanwhile, the European Union’s AI Act, officially endorsed by the European Parliament a few weeks ago, will establish the world’s first comprehensive framework for regulating AI development and use, focusing on risk assessment, human rights and transparency.

While democracies use these minilateral and multistakeholder initiatives to chart a course towards responsible and ethical AI governance, China is also advancing its own vision of AI governance—one that prioritises government control over individual rights—through its Global AI Governance Initiative (GAIGI).

Launched by President Xi Jinping last October, and still in its early stages, the GAIGI clearly represents China’s effort to shape the global AI landscape in line with its own political and ideological interests. It shows an obvious intent to promote this system as an alternative to US- or Western-supported AI governance frameworks.

Yet while Western countries and like-minded democracies are focused on writing the rules of the road for AI, China is also building the road itself by exporting Chinese-made AI ecosystems around the world. ASPI’s Mapping China’s Tech Giants research has shown how China’s Digital Silk Road has served as an important vehicle for exporting Chinese technology, standards and digital authoritarianism to other nations. The same is true of AI: as Chinese AI technology comes to dominate markets around the world, Chinese AI governance frameworks become the default on the ground.

In a way, this highlights the challenge of establishing unified global AI governance frameworks in a fragmenting world.

With nations gravitating towards AI governance models that align with their existing political and social systems, we are likely to see an increasingly fragmented global AI landscape emerge, with different regions and blocs adhering to distinct rules and norms. The free and open internet is already under strain, and AI could turbocharge this fragmentation. That poses significant risks: it could hinder international cooperation, exacerbate existing geopolitical tensions and create barriers to innovation, to say nothing of the impact on human rights and freedoms in different parts of the world.

This is why, despite the UN’s inherent challenges, multilateral efforts such as last month’s General Assembly resolution on AI remain essential. The UN, with an inclusive platform that brings together diverse voices from governments, civil society, academia and the tech industry, provides a unique forum for global dialogue on AI governance. While the UN may never be able to mandate a single global AI governance framework, it can play a crucial role in setting minimum standards, fostering consensus on core principles and facilitating interoperability between different technological blocs, ensuring that AI is developed and deployed responsibly for the benefit of everyone.

This is more important than ever, and last month’s resolution is a good start.