Collaboration, not competition, should shape AI governance
24 Oct 2025

Is there a finish line for the AI race? The relentless pursuit of artificial intelligence is reshaping our world, challenging our ethics, and redefining what it means to be human. But what does the AI race mean for geopolitics? And what are we racing for or towards?

The US government’s 23 July plan, titled ‘Winning the Race: America’s AI Action Plan’, frames AI development as a zero-sum competition, echoing Cold War-era winner-takes-all capitalism. Yet this mindset is ill-suited to a technology that demands cross-sector collaboration, cultural sensitivity and ongoing ethical reflection. Portraying AI innovation as a race with clear winners and losers risks amplifying fear and division, undermining both long-term leadership and global progress.

The AI race is often compared to the Cold War space race between the United States and the Soviet Union, a contest that spurred rapid innovation but also heightened geopolitical tensions. The space race culminated in landmark achievements such as the Moon landing and fostered international agreements such as the 1967 United Nations Outer Space Treaty, which declared space the ‘province of all mankind’. However, today’s rhetoric reflects a retreat from that cooperative spirit, with renewed strategic nationalism emphasising dominance over shared progress.

This competitive framing obscures the complex realities of AI innovation. Unlike the space race, which had clear milestones and endpoints, AI development is continuous, diffuse and accelerating. There is no definitive finish line or singular achievement that signals victory. The concept of artificial general intelligence, which is often described as a kind of end goal for AI, remains debated and undefined, underscoring the evolving, iterative nature of AI progress.

Moreover, global governance of AI is fragmenting as major powers pursue divergent regulatory paths. The European Union’s risk-focused approach aims to set global safety standards but may struggle to keep pace with rapid innovation. China’s state-led model combines centralised control with industrial scaling but is not easily exportable elsewhere. Meanwhile, the US leads in frontier AI development but has fragmented regulation of its own and no unified federal strategy.

Export controls and overlapping regulations further complicate the landscape. US export restrictions on advanced AI chips ended up spurring China to innovate, resulting in models such as DeepSeek R1, while also prompting the launch of the EU’s European Chips Act, creating overlapping compliance demands for businesses. Domestically, many federal agencies run parallel AI trustworthiness initiatives, reflecting the broader challenge of balancing innovation with regulation even within the US.

The debate over open-source AI highlights contrasting governance philosophies. China promotes open-source AI to foster inclusivity and reduce reliance on US technology, while US leaders advocate caution, with some recommending restrictions on releasing frontier model weights. These tensions illustrate the strategic and ethical complexities shaping AI’s future.

To navigate this rapidly evolving landscape, we must reconsider the narratives framing AI innovation. Western-centric stories often marginalise non-Western voices and experiences, perpetuating a skewed understanding of AI’s global impact. Governments in the Global South are asserting digital sovereignty and crafting AI policies tailored to local priorities, emphasising pluralistic governance rooted in diverse cultural and economic contexts.

With all this happening, inclusive collaboration is not just a moral imperative; it’s a strategic advantage. AI systems trained on narrow datasets risk poor performance and unintended harms when deployed globally. Fragmented standards exacerbate challenges around transparency, human oversight and security. Therefore, broad participation in AI design and governance helps ensure that systems better reflect the needs and values of the diverse populations they are meant to serve.

The internet and GPS exemplify the power of global collaboration. Both emerged from collective efforts spanning continents and disciplines, evolving into foundational technologies that benefit humanity widely. Similarly, building a globally inclusive AI ecosystem requires investing in digital infrastructure, education and representative data, alongside institutions that empower underserved regions.

Regional initiatives such as the African Union’s AI and Data Policy Framework and the Association of Southeast Asian Nations’ AI governance working group demonstrate promising models for cross-border cooperation. US bipartisan proposals for AI for Good funding echo this commitment to inclusivity. Embracing diverse ethical frameworks, from Ubuntu in Africa to Buddhist relational thinking in East Asia, can enrich governance approaches, making them more contextual and culturally attuned.

Ultimately, shaping AI’s future demands moving beyond zero-sum competition to embrace shared responsibility and opportunity. By fostering global inclusivity, we can unlock AI’s potential as a tool for empowerment rather than division, ensuring its benefits are broadly and equitably shared. These are profound times, calling for visionary leadership that values collaboration over rivalry and recognises the vital contributions of all humanity in imagining and building the future of intelligent systems.

This is a version of an article first published in National Interest. It is part of the Stimson Center’s Red Cell project.