In November 2023, the UK’s AI Safety Summit helped set the tone for the global conversation on AI, with strong U.S. support. It spotlighted risks from misuse or loss of control of so-called frontier AI systems with dangerous capabilities. But the momentum generated by the summit faces a bump in the road: France, the host of the next summit in February 2025, has deprioritized those risks in order to elevate its own vision for the future of AI governance. This vision, rooted in the principles of openness and innovation, appears at least in part designed to oppose the U.S. and UK’s securitization of the space. It’s an appealing message, and France is well-placed to push it—but it needs some polishing before it can be a compelling alternative for the global community.
The U.S. and UK’s push for attention on frontier AI risks culminated in the Bletchley Declaration, a joint statement signed by twenty-eight countries at the UK summit, including France. But as that framing has begun to shape domestic and international policy (most substantially, voluntary commitments from leading AI companies and the establishment of national AI Safety Institutes), France has resisted. Despite being a champion of EU digital regulation, France nearly sank the EU’s AI Act when it sought to weaken new restrictions on powerful AI systems. And although France has signed international declarations positing the grave threats AI may pose to humanity, it has simultaneously downplayed those risks.
Reasonable people can disagree about the severity and imminence of risks from frontier models’ advanced capabilities. But the key geopolitical issue remains the resulting regulatory actions. An influential report by France’s government-appointed AI commission offers a window into France’s concerns. The report, led by the same person responsible for organizing the French summit, largely dismisses the risks as hype. Instead, it suggests directly and indirectly that the risks are being used to legitimize barriers to entry that would lead to the concentration of AI development in the already dominant players. In large part, these are U.S. big tech companies and AI startups.
A recent U.S. executive order, for example, requires U.S. providers of computing infrastructure to report to the U.S. government when foreign actors train powerful AI models—a regulation nominally motivated by the risks these models pose. The French AI commission report labels this an attempt to slow down foreign AI development, facilitate economic espionage, and reinforce American domination (apparently unimpressed by the reporting requirements the United States also imposed on its own AI companies).
This possibility is especially grating to Paris. President Emmanuel Macron, in Gaullist tradition, has repeatedly advocated for a more multipolar world and a European “third way,” separate from the United States or China. In response, France’s rallying call is for an “open” innovation ecosystem that avoids the network effects and concentration that have kept away challengers to U.S. big tech. This is not a new look for France: Macron has been promoting openness in the AI space since at least 2018. And outside of AI, Paris has emphasized the development of open-source software and digital commons as key components of building European and French sovereignty for at least a decade.
France is uniquely well-suited to champion this position and become a lasting force in global AI governance. On innovation, although far from leading, France is a credible player. The French AI startup Mistral, whose strategy prominently features open-source AI, was recently valued at $2 billion and is building globally competitive, state-of-the-art models. According to the Stanford Institute for Human-Centered AI’s 2024 AI Index report, France ranks third in the world in producing notable models, behind the United States and China.
Globally, France has influence. It’s a key member of the organizations driving research, global principles, debate, and even regulatory efforts on AI, including the Organisation for Economic Co-operation and Development, the United Nations, the Global Partnership on Artificial Intelligence, the AI summits, and the G7, as well as, of course, the EU. Because it is less closely associated with big tech interests, and because, like many countries, it cannot take the strength of its domestic AI industry for granted, France may better represent the global community than the United States. Global powers such as China and India have also found France to be a more reliable and less judgmental partner than Washington or Brussels. As a case in point, France and China published a joint declaration on the global governance of AI in early May. France’s streamlined executive power typically helps ensure internal politics do not derail such partnerships.
What’s left for France is to leverage its position and create a compelling path forward for the global community. Given that many countries’ biggest concern with AI is missing out on its opportunities, openness and innovation are more inspiring than a focus on future risks. South Korea’s interim summit dropped “safety” from the name and added innovation and inclusion as core pillars. But to build a coalition, France will need to convince others that its concerns are genuine—and not simply about protecting France’s own would-be international AI champions.
Going further, France will need to convince others that there is value in resisting U.S.- and UK-led securitization of the space. Otherwise, security-focused regulatory approaches will be all too easy to accept as the price of access to the technology, even if they end up concentrating development in a few countries.
On risks, Paris needs to go further than simply denying the current framing. Although the French AI commission’s report calls existential risks from powerful AI systems hypothetical, France has public and expert opinion to contend with, both of which point to caution. France is also subject to the EU’s AI Act, which is set to regulate misuse and loss of control risks from powerful models, among many other risks. Trying to renege on previous areas of agreement is more likely to antagonize allies and sow confusion than lend credibility to its vision.
Instead, France will need to offer an affirmative vision for addressing AI’s risks. One approach would be to include risks that are more salient to the international community. The French AI commission’s report has started this, arguing that existential risks from powerful AI systems should not drive policy and instead emphasizing broad risks such as concentration of power over AI systems, labor disruptions, and the erasure of languages and cultures. Work remains, however, to show that the solutions to those risks are tangential to, or even in tension with, the securitization of the space, and that France’s call for open and collaborative innovation ecosystems offers a better answer.
France’s AI summit will be a unique chance to reshape the global AI conversation around principles of openness and innovation, as well as an opportunity to cement itself as a leader in the space. France’s message has the potential to be compelling to many, and it has the influence and credibility required to promulgate it. But to do so, France will need to make the value proposition for others more concrete and navigate a shift in focus without antagonizing allies.