UK’s Refusal to Sign the AI Action Summit Declaration: Strategic Sovereignty or Missed Opportunity?
Just weeks after the UK Government announced its AI Opportunities Action Plan (AOAP) with bold ambitions for AI sovereignty, it has taken a step that signals a clear stance on global AI governance: joining the United States in declining to sign the AI Action Summit Declaration. This move, while consistent with the UK’s strategic positioning in AI, raises fundamental questions about the balance between sovereignty and collaboration in shaping AI’s future.
The AI Action Summit, co-chaired by France and India, brought together over 100 nations and key industry players to discuss an inclusive and sustainable approach to AI. The resulting declaration emphasized global cooperation, AI accessibility, and ethical governance, aligning with frameworks such as the UNESCO Recommendation on Ethics of AI and the Global Digital Compact. Yet, the UK and the US, two of the world’s AI powerhouses, opted out. Why?
The Case for and Against Global AI Governance
The UK’s decision aligns with its sovereign AI ambitions, which I have written about previously, and it revives the concerns I flagged there about international cooperation. In brief:
The Argument for Sovereignty: AI is a competitive asset, and ensuring control over infrastructure, data, and regulation could position the UK as a leader rather than a follower in AI development. Avoiding broad international commitments may provide the flexibility needed to innovate without bureaucratic constraints.
The Argument for Collaboration: AI governance is inherently a global challenge. From AI safety to ethical frameworks, international coordination is crucial in addressing risks such as bias, misinformation, and the concentration of AI power in a few dominant players. Opting out of collaborative agreements risks isolation at a time when partnerships could be beneficial for UK businesses seeking to scale globally.
JD Vance and the US Perspective
The US refusal to sign the declaration provides critical context for the UK’s decision. JD Vance’s speech at the AI Summit articulated a vision of AI governance rooted in minimal regulation, economic expansion and national security. Key takeaways include:
Economic and Technological Leadership: Vance stressed that AI is an industrial revolution-level breakthrough and that excessive regulation would stifle innovation. The US aims to maintain its dominant position by ensuring AI remains an open and deregulated industry.
Regulatory Resistance: The Trump administration is wary of international AI governance frameworks that could constrain American firms, particularly given prior experiences with EU regulations like GDPR and the Digital Services Act, which the US sees as burdensome.
National Security and Ideological Control: The speech emphasized keeping AI free from ideological bias (a point that I think merits revisiting) and ensuring American-made AI is not weaponized by adversaries. The administration’s focus on maintaining domestic control over AI infrastructure, including semiconductor manufacturing, mirrors the UK’s sovereignty-driven approach.
Industry and Regulatory Implications
From a legal and regulatory standpoint, the decision raises key questions:
Will the UK’s and US’s independent stance make it harder for businesses to collaborate internationally?
Could regulatory divergence with the EU and other global players create friction in cross-border AI trade?
How does this decision affect the US’s and UK’s ability to influence global AI standards if they remain outside key agreements? Does the US’s non-participation make the declaration a little, dare I say it, toothless?
The Road Ahead: Balancing Influence and Isolation
I personally believe that the UK’s destiny has to lie on a different path to the US here. JD Vance’s message was clear and left no room for innovation outside of the US. The UK must therefore determine its own trajectory, balancing its ambitions for AI sovereignty with meaningful engagement in international AI governance.
The next major AI policy milestones, including the Kigali Summit and the World AI Conference 2025, will provide further opportunities for alignment or divergence. The UK must decide how much influence it wants in shaping global AI norms and whether staying outside multilateral agreements will ultimately serve its interests.
Maybe the UK’s refusal to sign the AI Action Summit Declaration is a calculated step in asserting its AI sovereignty. It does, after all, align with the AOAP’s vision of an independent AI ecosystem, but this comes, as anticipated, with inherent trade-offs in international cooperation. I have written about the geopolitics of AI previously, and we are seeing here clear signs of countries prioritizing economic and strategic dominance over global governance frameworks. The challenge now is to ensure that this sovereignty does not turn into isolation. This is especially crucial at a time when maintaining a collaborative spirit is arguably the most necessary condition for responsible and inclusive AI development.