Roger Oriol

Is the EU Falling Behind in the AI Race?

The recent announcement that Meta's Llama 3.2 Vision models won't be available in the European Union has reignited discussions about the impact of EU regulations on AI innovation and accessibility. This development joins a growing list of AI technologies from major tech companies that are currently unavailable to EU users, including ChatGPT's Advanced Voice mode and Apple Intelligence, raising concerns about whether the EU might be falling behind in the global AI race.

The EU AI Act and Its Impact

In April 2021, the European Commission proposed the EU AI Act. The act classifies AI systems according to the risk they pose to users, with higher-risk categories subject to stricter regulation. It also sets out rules for General Purpose AI systems, such as Meta’s Llama or ChatGPT: model developers must provide technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training. If the model is open, like Llama, and presents a systemic risk, which Llama does, the developer must also conduct model evaluations and adversarial testing, track and report serious incidents, and ensure adequate cybersecurity protections.

According to the Stanford University study “Do Foundation Model Providers Comply with the Draft EU AI Act?”, as of June 2023 no foundation model fully complied with the draft act’s requirements.

Meta and EU Regulation

The situation becomes more complex when we look at Meta's challenges with training its models in the EU. In June 2024, Meta faced a setback when the Irish Data Protection Commission asked it to delay training its large language models on public content from adult Facebook and Instagram users in the EU. Meta expressed disappointment with the decision, arguing it would hinder European innovation.

These requirements seem to have influenced Meta's decision not to release the Llama 3.2 Vision models in the EU. However, the move also looks like a retaliation tactic meant to pressure the EU into letting Meta use private user data to train its models.

As a side note, according to Llama 3.2’s Use Policy, the restriction applies specifically to companies and individuals based in the EU who want to use these models directly or build services on top of them; end users in Europe who use services built on these models are not affected.

EU Needs AI

A significant countermovement has emerged in response to these regulatory challenges. The "EU Needs AI" initiative, supported by prominent figures including Meta's Chief AI Scientist Yann LeCun, argues that fragmented regulation threatens Europe's competitive position in AI development. Their position statement emphasizes that "Europe has become less competitive and less innovative compared to other regions and it now risks falling further behind in the AI era due to inconsistent regulatory decision making."

What Does the Future Hold for AI in the EU?

In my opinion, the pushback against these regulations is probably overblown. I believe that, even with the rules in place, the EU will still have access to some amazing models, and its citizens will enjoy more ethical AI practices.

While companies in Europe will need to comply with many more requirements than companies in the rest of the world, those requirements are not out of place or unnecessary. Many of the obligations the EU AI Act describes for General Purpose AI systems, with or without systemic risk, reflect practices that are already widely adopted and considered hallmarks of reliable models. Citizens in the EU will also enjoy more privacy and security than citizens anywhere else in the world: their private data, including pictures, won’t be used to train models, and abusive AI systems such as social scoring and manipulative AI won’t be a problem for them.

In conclusion, the EU will not fall behind in the AI race. It will remain competitive, in part thanks to its more ethical practices. AI regulation is not a blocker but a necessity, and more countries should follow suit.
