What Factors Are Influencing the Uncertain Future of the EU AI Act? Insights from the OpenAI Controversy

The EU AI Act, a potentially landmark piece of AI regulation, is currently deadlocked over disagreements about how to regulate 'foundation' models, the large-scale AI systems such as GPT-4, Claude, and Llama.

Recently, the French, German, and Italian governments have proposed limiting regulation of these foundation models, a stance many attribute to heavy lobbying from Big Tech and open-source companies such as Mistral, which is advised by former French digital minister Cédric O. Critics argue that this move could significantly weaken the EU AI Act.

On the other side, proponents of regulating foundation models within the EU AI Act are pushing back. A group of German and international AI experts, business leaders, and academics published an open letter urging the German government not to exempt these models from the regulation, warning that doing so could compromise public safety and harm European businesses. Signatories included prominent AI researchers Geoffrey Hinton and Yoshua Bengio, as well as AI critic Gary Marcus. Separately, French experts, Bengio among them, published a joint op-ed in Le Monde criticizing Big Tech's ongoing efforts to undermine the legislation, according to a Future of Life Institute spokesperson.

Despite expectations that it would soon be wrapped up, the EU AI Act has hit a snag in its final phase. First proposed two and a half years ago, the bill is now in the trilogue stage, in which EU lawmakers and member states negotiate its final form. Negotiators hope to hold a vote by the end of 2023, before the 2024 European Parliament elections reshape the political dynamics.

The recent turmoil at OpenAI may shed some light on the situation. The company saw CEO Sam Altman ousted and then reinstated amid internal conflict, a drama that mirrors the EU debate: a clash between those focused on AI's commercial potential and those worried about its existential risks.

At OpenAI, the conflict pitted figures such as Altman and Greg Brockman, who pursued commercialization to fund the development of artificial general intelligence (AGI), against board members concerned about AI safety who were prepared to halt high-risk technology. The board members who ousted Altman had ties to the Effective Altruism movement, which is also active in lobbying around the EU AI Act. Max Tegmark, president of the Future of Life Institute and connected to Effective Altruism, has reportedly been part of efforts to frame AI as a potential existential risk.

Big Tech, including OpenAI, has lobbied intensively against stringent AI regulation in the EU. TIME reported that while Altman publicly championed global AI regulation, OpenAI was simultaneously lobbying to reduce its regulatory burden under the Act. This has led critics such as Gary Marcus to argue that Big Tech should not be allowed to self-regulate, underscoring the need for robust provisions in the EU AI Act.

Brando Benifei, one of the European Parliament's negotiators, pointed out that the recent OpenAI fiasco highlighted the risks of relying on voluntary agreements with tech leaders.

As the December 6 trilogue approaches, with Spain’s Council Presidency nearing its end and Belgium set to take over in January 2024, there is increasing pressure to finalize the EU AI Act. A failure to reach an agreement could be a significant setback, especially since the EU has long aimed to be a global leader in AI regulation.