Industry News for Business Leaders

EU AI Act: What Will It Mean for European Companies?

After three days of intense negotiations between the member states and the European Parliament, the EU reached a groundbreaking agreement late on Friday night to regulate artificial intelligence on European soil. (iStock)

The governments of the European Union and Members of the European Parliament reached an agreement on Friday evening on rules governing artificial intelligence systems, such as ChatGPT, on European soil. The text is aimed at promoting innovation in Europe while limiting potential abuses of these highly advanced technologies. What does this mean in concrete terms for businesses? Here are some key points.

Immediately, Thierry Breton, the European Commissioner for the Internal Market leading the project, reacted on X:

“Historic! The EU becomes the very first continent to set clear rules for the use of AI. The AI Act is much more than a rulebook — it’s a launchpad for EU startups and researchers to lead the global AI race.”

European lawmakers have been hard at work to finalize the AI Act, the most comprehensive and wide-reaching AI regulation in the world, to be able to vote it into law in early 2024.

The law will dictate the rules that EU companies developing or using AI tools must abide by to protect the public from the inherent risk of products and services based on artificial intelligence.

After long discussions, lawmakers have reached a provisional agreement on the law's structure and tiered risk approach. The most debated points during the trilogue that led to the current deal were the regulation of AI used for biometric surveillance (e.g., face recognition) and the rules to apply to foundation models, a group of general-purpose AI models that includes generative systems such as OpenAI's ChatGPT, Google's newly announced Gemini, and many others.

While the first point is extremely relevant for society at large, the regulation of foundation models is paramount for understanding how the law could affect EU-based companies that develop or work with AI models and tools.

4 Categories of AI Risk

The law is based on four tiers defining categories of AI risk based on the scope and sector of the applications, from those that don’t pose any risk to those that pose an unacceptable risk and should be outright banned (like biometrics used for tracking and profiling).

France, Germany, and Italy, in a non-paper presented in November 2023, opposed the application of a separate tiered approach to foundation models, claiming that doing so could harm EU-based companies such as Aleph Alpha in Germany and Mistral AI in France, which compete with the foreign firms controlling the largest models.

The current provisional agreement keeps the tiered approach but includes an automatic categorization as "systemic" for foundation models based on the computing power used to train them. A final decision on the categorization of models will come from the AI Office, a specific institution envisioned by the new law that will oversee the application of the AI Act in the Union.
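The compute-based trigger described above can be sketched as a simple threshold check. This is only an illustration: the exact threshold and the function name below are assumptions, not figures from the provisional agreement's final text.

```python
# Illustrative sketch of the "systemic" classification trigger.
# The threshold value is an assumption for demonstration purposes only;
# the actual figure will be set by the final legal text and the AI Office.
SYSTEMIC_COMPUTE_THRESHOLD_FLOPS = 1e25  # hypothetical training-compute cutoff


def is_systemic(training_compute_flops: float) -> bool:
    """Return True if a model's training compute places it in the
    automatically 'systemic' tier under this sketch's assumed threshold."""
    return training_compute_flops >= SYSTEMIC_COMPUTE_THRESHOLD_FLOPS


# Example: a frontier-scale model trips the threshold, a small model does not.
print(is_systemic(3e25))  # large foundation model
print(is_systemic(5e22))  # small specialized model
```

In practice, the classification would not be left to a single number: the AI Office is expected to make final determinations, so any such check would only be a first-pass screen.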

Whatever the final details, the provisional agreement still treats companies that use or develop AI solutions as providers, importers, distributors, deployers, and/or product manufacturers.

Model Repository

While practical details will have to be sorted out later, it is already quite clear that under the new law, companies will first have to understand which models are in use in their services or products and which category each would fall under, and then create an always up-to-date model repository.

Based on this, the models should be classified by risk according to the tiers defined by the AI Act and, of course, the sectors in which the company operates.
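The inventory-and-classify process described above can be sketched as a minimal model repository. The tier names match the Act's four risk categories; the record fields, model names, and vendors below are hypothetical examples, not requirements from the law.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """The AI Act's four risk categories."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # registration, oversight, risk management
    LIMITED = "limited"            # mainly transparency obligations
    MINIMAL = "minimal"            # no specific obligations


@dataclass
class ModelRecord:
    """One entry in a company's model repository (illustrative fields)."""
    name: str
    vendor: str
    use_case: str
    risk_tier: RiskTier


# Hypothetical repository for a company operating in two sectors.
repository = [
    ModelRecord("support-chatbot", "third-party", "customer support", RiskTier.LIMITED),
    ModelRecord("triage-model", "in-house", "medical device safety component", RiskTier.HIGH),
    ModelRecord("spam-filter", "in-house", "internal email filtering", RiskTier.MINIMAL),
]

# First-pass compliance view: which models carry high-risk obligations?
high_risk = [m.name for m in repository if m.risk_tier is RiskTier.HIGH]
print(high_risk)
```

A real repository would also track versions, training data provenance, and the sector-specific obligations attached to each tier, but the core idea is the same: every model in production gets a record and a risk classification.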

Based on the risk level, requirements may vary. One common basic requirement, though, will always be transparency: companies will have to inform their clients or users that AI models are implemented in their products or services, so that end users can make an informed decision. While these requirements broadly define the compliance process, it is hard to forecast how burdensome it will be for companies and SMEs in the future.

For the EU industry, the law could prove quite complex to navigate. The law considers as high risk any application of AI models operating or acting as "safety components" in a wide range of categories, from medical devices to civil aviation security, including anything that can be defined as machinery.

Models for high-risk applications must be registered in a new EU-wide database and approved before use. Implementing them will inevitably require novel and dedicated risk management systems, logging capabilities, and human oversight. While the requirements might sound daunting, they won’t necessarily be more complicated than other risk assessments currently mandatory for all industrial systems in the EU.

Commenting on the political agreement on the law, EU Commission President Ursula von der Leyen said:

“Until the Act is fully applicable, we will support businesses and developers to anticipate the new rules. Around 100 companies have already expressed their interest in joining our AI Pact, by which they would commit voluntarily to implement key obligations of the Act ahead of the legal deadline.”

A Vote in Spring

The final deal reached on Friday will be prepared for a vote in the EU Parliament in spring. EU legislators are working fast to pass the law before the EU elections in June, when the legislative process will pause for months.

Failing to vote on the AI Act before then might squander the regulatory advantage the EU has accrued over the last couple of years, with the risk that all forms of AI application remain unregulated for years. If the law is passed in time, some smaller parts of the legislation might already take effect in 2024. Nevertheless, most rules will probably apply only from 2025 or even 2026.