How to Future-Proof Europe's AI Office

Spanish Prime Minister Pedro Sánchez met with OpenAI CEO Sam Altman in Madrid on May 22, 2023. Photo courtesy of La Moncloa/Creative Commons/Flickr

As Big Tech races to advance artificial intelligence systems, European regulators must keep pace with their approach to enforcement.

By TONI LORENTE & NICOLAS MOËS

When European legislators reached an agreement in December 2023 to regulate the use and development of artificial intelligence, they set the standard for policymakers around the world looking to safeguard the public from harms associated with rapidly advancing technologies.

The agreement was a feat of political negotiation, too. The EU AI Act, which took nearly three years to develop, culminated in a final 38-hour negotiation to refine the details of the legislation, which is expected to take effect in spring 2024. But even with all the hard work that went into an act meant to protect Europeans from potential AI dangers – from the displacement of human workers to deepfakes to super-powered disinformation campaigns – the real test for the EU will be whether the legislation, and the new European entity that will enforce it, can adapt and evolve as quickly as the AI and machine learning systems it governs.

The first version of the AI Act was drafted by the European Commission 18 months before ChatGPT was released in November 2022. As the world began to learn about the promise and potential consequences of large language models, the European Parliament acted quickly to develop new provisions addressing so-called general-purpose AI. It sought to require companies such as OpenAI to comply with transparency guidelines, and to require providers of general-purpose AI models with systemic risks to perform model evaluations, assess and mitigate those risks, report incidents, and ensure an adequate level of cybersecurity.

The technological leap represented by ChatGPT and its successive releases could happen again, and quickly, given the vast amount of global investment flowing into AI startups. The big question regarding the AI Act and the soon-to-launch European AI Office, which will ensure tech companies comply with AI regulation, is whether they will be capable and adaptable enough to adjust to technological advancements that could bring more change and disruption to the global workforce than the Industrial Revolution did.

As AI systems become more powerful and ubiquitous – and potentially more invasive – the AI Office must continuously evaluate and upgrade its enforcement and compliance mechanisms as tech companies and AI researchers push the state of the art in intelligent systems. This requires a new kind of enforcement body.

By adapting enforcement and compliance mechanisms to evolving artificial intelligence technologies, regulators can reduce the friction that naturally exists between static regulation and the dynamic nature of technological innovation. This approach makes it easier to accommodate evidence-based requirements and to demonstrate whether various AI safeguards are effective and adequate. It's also a way to future-proof AI regulation.

But to achieve truly effective enforcement and compliance, the AI Office must collaborate closely with industry, academia, and civil society while guarding against regulatory capture. By leveraging the knowledge of independent experts (such as the Scientific Panel) and including all the relevant stakeholders (such as the European Artificial Intelligence Board), the office has a rare opportunity to become a cutting-edge governance agency, capable of adapting compliance and enforcement mechanisms amid breakneck innovation so that safety keeps pace.

It’s not just the speed at which AI systems are advancing that will be a challenge. The AI Office will also face immense pressure from Big Tech. The tech industry has already pushed hard to dilute the requirements of the AI Act; Google, Microsoft, and Meta spent an estimated 18.8 million euros on lobbying in 2022 alone.

In the shadow of these tech giants, however, some European startups are emerging to offer potential alternatives to Silicon Valley's dominance. Companies such as France's Mistral AI and Germany's Aleph Alpha are seen by many as beacons of European innovation and competitiveness, promising a fair fight against U.S. tech behemoths.

In Europe and many other parts of the world, the conversation about AI has certainly shifted since the arrival of ChatGPT to focus more on safety and trustworthiness. Some companies at the forefront of AI development, such as OpenAI, Anthropic, and Microsoft, have started working on responsible scaling policies, model evaluations, and red teaming – a strategy to test the robustness of large language models. In addition, they’re developing protocols to identify AI-generated material, seeking to promote the social acceptance of their systems.

Both tech companies and regulators have stressed the importance of trustworthiness and safety in AI systems, but there's still a lack of clarity about what either side means by "trust" or "safety." What benchmarks will be used to evaluate the trustworthiness or safety of a system once it's in use? This is an area that needs more attention. Beyond that, there's the issue of incentives and of ensuring that tech companies invest enough in trust and safety measures. The AI Act does include legally binding requirements that force tech companies – not only those headquartered in Europe but all companies aiming to participate in the European market – to dedicate resources to safety research. The Future Society, an independent nonprofit organization focused on AI governance, estimated in December 2023 that, even for the most stringent set of requirements, compliance costs for companies developing general-purpose AI models ranged between 0.07 and 1.34 percent of the total investment per model (for the models under the scope of the regulation).

Tech companies shouldn't treat the legal requirements in the AI Act as mere compliance items. They should embrace them as opportunities to strengthen the safety of their products and to limit the harm when outcomes go sideways. Beyond that, tech companies should be compelled to collaborate with EU regulators, civil society, and independent experts to define what it means for AI systems to be considered trustworthy enough to be available in the European market. EU policymakers and regulators have a tremendous opportunity to shape the future development of these systems, both for Europe and the rest of the world.


Toni Lorente is a researcher at The Future Society focused on the safe development and deployment of emerging AI technologies.

Nicolas Moës is the executive director of The Future Society and an economist by training whose work focuses on the impact of General-Purpose Artificial Intelligence (GPAI) on geopolitics, the economy, and industry.