This publication is part of EU Cyber Direct – EU Cyber Diplomacy Initiative’s New Tech in Review, a collection of commentaries that highlights key issues at the intersection of emerging technologies, cybersecurity, defense, and norms.
The latest surge in large generative artificial intelligence models (LGAIMs) mirrors humanity’s age-old fascination with—and fear of—building machines with human-like intelligence and consciousness. Rapid breakthroughs in this field promise to disrupt humanity’s monopoly on knowledge production and content creation.
Generative artificial intelligence (AI) encompasses deep-learning models that are trained on vast databases to generate high-quality text, images, audio, video, code, and other content in response to prompts.
According to a report by McKinsey, generative AI models could contribute trillions of dollars of value to the global economy. They could yield remarkable results in some areas while falling short in others, offering an early glimpse of both future benefits and risks.
The European Union has also recognized generative AI’s transformative potential. EU lawmakers are currently negotiating legal guardrails as part of the trialogues on the EU Artificial Intelligence Act, with the prospect of reaching a political deal before the end of the year.
While AI development has gone through repeated hype cycles and AI “winters,” nowadays even skeptics seem to recognize that the release of ChatGPT in November 2022 marked a pivotal turning point, perhaps a point of no return. Of note is the mounting hype surrounding the disruptive potential of generative AI models, their association with so-called superintelligence, and the urgent need for international and European governance frameworks.
Industry leaders, civil society representatives, and experts are all raising alarms about the threats posed by the wave of ChatGPT, DALL-E, Midjourney, Claude, and Bard-like models.
OpenAI announced on September 25 that ChatGPT “can now see, hear, and speak,” a significant upgrade that grants the model powerful multimodal abilities. Beyond text, users can now have voice conversations with the chatbot or share images with it.
A statement released by the Center for AI Safety reflects a broader worry about the technology: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Countless media reports echo this concern, while industry leaders call for AI regulation, warning of “existential risk.”
During her 2023 State of the Union speech, European Commission President Ursula von der Leyen directly quoted the Center for AI Safety on “the risk of extinction from AI.”
She noted that AI is a general technology that is accessible, powerful, and adaptable for a vast range of uses, both civilian and military. Von der Leyen argued that Europe, together with partners, should “lead the way on a new global framework for AI, built on three pillars: guardrails, governance and guiding innovation.”
Yet, despite such warnings, the “arms race” among tech giants to invest in and hastily roll out new capabilities continues. For instance, on September 25, Amazon stepped up the AI race by investing $4 billion in the startup Anthropic. Rather than inviting caution, the “existential risk” framing, which equates generative AI with nuclear extinction and the dystopian threat of superintelligence, seems to feed the corporate hype machine. It may be the latest smokescreen distracting policymakers from more pressing risks. Such exaggerated portrayals highlight Silicon Valley’s power to disrupt and its grip on the public imagination.
Critics also warn against trusting calls for EU regulation from tech leaders, since their business decisions to rush models to market run counter to their own calls for regulation, caution, and safety. A lack of binding regulations and sufficient restraints may tempt companies to release products that have been inadequately tested or insufficiently subjected to red teaming—a broad range of risk assessment and stress testing methods.
Leading AI expert Geoffrey Hinton voiced concern about the power of such systems to manipulate humans, as well as the “alignment problem”: how to ensure that the technology aligns with human intentions. Moreover, models have been found to hallucinate or confabulate—a phenomenon whereby they produce responses containing fabricated or nonsensical content that appears authentic.
This poses regulatory, global governance, and security dilemmas for the EU. So, what is the best course of action for Brussels?
Firstly, it is essential to unpack the hype surrounding generative AI, examine how it shapes European policymaking, and understand how best to regulate technologies that are constantly evolving.
Generative AI must align with democratic goals, promote global progress, and not deepen the digital divide. That is why the EU’s trustworthy AI approach must be adaptable and future-proof. The EU’s AI Act, the bloc’s flagship risk-based initiative to regulate AI use cases, may require revisions before its full operationalization to account for the opportunities and risks posed by continuously advancing generative AI models.
Secondly, Brussels must up its game in promoting international cooperation, confidence-building measures, and norms for trustworthy AI. The global governance of AI is fragmented due to distinct visions, different value hierarchies, and conflicting national security interests.
The international AI norms landscape is becoming increasingly competitive in an already-crowded regime complex of international organizations, standards, principles, and codes of conduct. That is why it is vital for the EU to work with partners like the United States in the context of the EU-U.S. Trade and Technology Council and with Japan as part of the EU-Japan Digital Partnership.
Another step in the right direction is the Commission’s October 13 launch of a stakeholder survey on draft International Guiding Principles for organizations developing advanced AI systems, which G7 ministers agreed to put forward for consultation. These principles are being developed by G7 members under the Hiroshima Artificial Intelligence Process to set up guardrails at the global level.
Lastly, the EU must recognize that generative AI has the potential to significantly transform both the defensive and offensive sides of (cyber)security, from enabling security teams to better prepare for and respond to threats, to generating sophisticated social engineering attacks. Such models have also proven capable of producing code that can be used for malicious purposes, and they raise concerns over data protection and copyright. Generative AI can be weaponized in ways that have adverse implications for national security, including elections and battlefield command and control.
As we approach an AI-disrupted future, the question looms: Will AI become omnipresent and elevate every aspect of human life—what some call “AI everywhere”—or plunge it into existential uncertainty?
For the EU and its member states, these are complex governance, regulatory, and innovation challenges.
This publication has been produced in the context of the EU Cyber Direct – EU Cyber Diplomacy Initiative project with the financial assistance of the European Union. The contents of this document are the sole responsibility of the author and can under no circumstances be regarded as reflecting the position of the European Union or any other institution.