Paris AI Action Summit: Safety Sidelined, Sovereignty Centered

The Paris AI Action Summit, hosted by France on February 10-11, 2025, was the third in the series that began at Bletchley Park in November 2023 and continued in Seoul in May 2024. Reading the closing communiqué against its predecessors is jarring. Where Bletchley produced a substantive statement on frontier safety and Seoul produced concrete commitments on safety institutes, Paris produced something closer to a diplomatic placeholder. The U.S. and the U.K. did not sign the communiqué.

The Summit's deliberate framing as an "Action" Summit, rather than a "Safety" Summit, was the first signal. The agenda was dominated by competitiveness, sovereignty, and infrastructure questions. Safety appeared, but only in a supporting role.

What the communiqué says (and doesn't)

The signed communiqué — endorsed by 60 countries, including France, Germany, China, India, Japan, and most of the EU — emphasizes:

  1. Promoting AI accessibility to reduce digital divides.
  2. Ensuring AI is open, inclusive, transparent, ethical, safe, secure, and trustworthy.
  3. Enabling conditions for AI innovation to thrive.
  4. Encouraging AI deployment that positively shapes the future of work and labor markets.
  5. Making AI sustainable for people and the planet.
  6. Reinforcing international cooperation on AI governance.

What is conspicuously absent: any commitment to pre-deployment evaluation of frontier models. Any mention of catastrophic-risk thresholds. Any continuation of the Seoul commitments to fund AI Safety Institutes. The frontier safety architecture that Bletchley initiated is, on the face of the Paris text, dormant.

Why the U.S. and U.K. did not sign

For different reasons.

The U.S. position, articulated by Vice President Vance in his Summit address, was that the Paris text included "ideological policy items" inconsistent with American economic interests. The reference was primarily to the inclusiveness and sustainability framings, which the new administration views as proxies for content-moderation and energy-policy commitments it does not want to undertake. The Vance speech was more broadly an industrial-policy intervention: a clear statement that the U.S. will pursue AI leadership unilaterally and is unwilling to accept multilateral constraints that slow U.S. development.

The U.K. position is harder to read. The Starmer government has been signaling a pivot from Sunak's safety-centric framing, and the U.K.'s recent rebranding of its AI Safety Institute as the "AI Security Institute", refocusing it on national-security applications rather than catastrophic risk, is the clearest evidence of that shift. The U.K.'s Paris position appears to have been calibrated to align with Washington's, but the messaging from Number 10 has been deliberately oblique.

What survived from Bletchley/Seoul

Perhaps less than meets the eye. The international network of AI Safety Institutes, formalized at Seoul, technically continues. The institutes themselves persist as a matter of domestic legislation and budget. But the coordinating function — joint evaluations, shared protocols, mutual recognition of test results — does not yet exist beyond a handful of bilateral arrangements among the U.S., the U.K., Japan, and Singapore.

The Frontier AI Safety Commitments, a set of voluntary pledges made at Seoul by sixteen leading AI labs to publish their safety frameworks and to define risk thresholds ("red lines") at which severe risks would be deemed intolerable, remain in force as private commitments. They were not retracted at Paris, but neither were they reaffirmed. Several labs have published updated frameworks; several have not. Compliance is, charitably, uneven.

What Paris means for hard law

Two implications worth flagging:

First, the EU AI Act's GPAI systemic-risk regime is now, by some distance, the most concrete piece of international AI safety law. The Article 51 thresholds and Article 55 obligations are operational, and the Code of Practice will be finalized this year. Without a parallel U.S. framework or an international coordination structure, the EU regime will become the de facto standard that frontier model labs are pulled toward, regardless of their home jurisdiction. We have argued before that Brussels-effect dynamics will dominate this space; Paris removes the last serious alternative to that outcome.

Second, sovereignty framings are going to displace harmonization framings in international AI policy for the next several years. Countries are going to focus on their own AI ecosystems, their own infrastructure, their own labor-force preparation. Multilateral AI governance will continue but at a much lower altitude than the 2023-24 vision implied. For practitioners, this means more domestic divergence and harder cross-border compliance.

What we'll be watching

  1. Whether the U.S. AI Safety Institute (now the AI Standards and Innovation Center, per draft renaming language) continues meaningful technical engagement with its international counterparts. Working-level cooperation can outlast political-level estrangement, sometimes for years.
  2. The next summit, scheduled for India in 2026. The Indian government has signaled an interest in development-focused AI governance and has publicly endorsed the Paris framing. The diplomatic drift is likely to continue.
  3. Bilateral arrangements between the EU and individual non-EU jurisdictions, particularly Korea, Japan, and Brazil, that may produce something like an "EU-aligned" bloc on AI governance even without U.S. participation.
  4. Whether the AI labs themselves voluntarily continue Seoul-style safety commitments. Anthropic's responsible scaling policy update last week suggests yes, at least for some labs; OpenAI's recent restructuring of its safety work suggests not for all.

The summit series has not ended. But the high-level coordination ambition has, for now, run its course. The substantive work moves to Brussels, to national capitals, to the labs' own published frameworks, and to the courts.