
EU AI Act: Different Rules for Different Risk Levels

The upcoming European AI Act provides an overview of the risks and addresses them through various rules. What are these rules exactly? (iStock)

Last week, the European Union reached an agreement on rules governing artificial intelligence on European territory. The deal is set to be voted on in the spring and to take effect in 2025, making it the world’s first AI regulation. While several questions remain unanswered, the upcoming regulation maps out the risks and addresses them through various rules. What are these rules exactly?

While the potential benefits of AI, such as curing diseases, boosting productivity, and tackling climate change, are widely acknowledged, these advances come with inherent risks. The risks take many forms, each demanding careful consideration and strategic mitigation.

The Need for Regulation

Discrimination bias is a well-documented risk, with the Dutch child benefit scandal as a notable example. The Dutch government implemented an application to manage benefits, but it proved heavily biased against foreigners, leading to a major scandal and the subsequent discontinuation of the application.

Beyond bias, there are broader safety and security risks. Automated systems can spread fake news, be used in cyber warfare, and pose bioterrorism threats.

Privacy also emerges as a substantial concern. While not all machine learning systems rely on personal data, a significant majority do. Moreover, these systems not only use personal data for training models but also can infer and predict personal data, raising valid concerns about privacy implications for individuals.

Various Initiatives

There have also been calls for AI regulation beyond Europe. The G7 introduced the Hiroshima AI Process, outlining key principles. The US unveiled a Blueprint for an AI Bill of Rights, and the UK addressed AI safety at the AI Safety Summit. UNESCO published ethical AI recommendations, while the OECD issued AI principles.

Even big tech cannot ignore the question. Last September, Brad Smith, Microsoft Vice Chair and President, called in a post for an effective legislative framework to regulate AI. In his view, it is now imperative to regulate AI in the same way as many earlier technologies that posed potential harm if they failed.

“Governments require circuit breakers in buildings to protect against fires caused by surges in electricity. They require school buses to have emergency brakes, in case the main brakes fail, and require bus drivers to be trained in how to use them. They require airplanes to have collision avoidance systems installed, and to ensure that pilots can make decisions based on those systems in safety-critical situations.”

More recently, OpenAI announced the formation of a new AI safety advisory group. It will make recommendations directly to the board of directors, which also holds veto power over decisions about AI safety.

As part of its digital strategy, the European Union aims to regulate artificial intelligence to ensure favorable conditions for the development and use of this technology. EU governments and Members of the European Parliament reached an agreement governing artificial intelligence systems, including ChatGPT, on European territory.

The upcoming AI Act adopts a risk-based approach, sorting AI systems into four distinct categories that determine how strictly each is regulated.

What risks and rules does the EU AI Act envision? Here are a few key points.

The Risk Mapping

The EU has devised a risk mapping system to outline responsibilities for both providers and users, depending on the risk level associated with artificial intelligence. Even for AI systems with minimal risk, there is a requirement for assessment. The objective is to inform users when they are engaging with AI, covering systems involved in generating or manipulating image, audio, or video content, such as deep fakes.

Here are the four risk categories.

Black: Unacceptable Risks

AI systems deemed to present unacceptable risks, posing threats to individuals, will be strictly forbidden. These include:

1. Cognitive behavioral manipulation of individuals, such as voice-activated toys that encourage hazardous behavior in children.

2. Social scoring, i.e., the classification of people based on behavior, socio-economic status, or personal characteristics, which could affect citizens’ access to loans or education.

3. Real-time remote biometric identification systems, such as facial recognition.

Some exceptions may be allowed. For instance, “post” remote biometric identification, where identification occurs after a significant delay, will be permitted for prosecuting serious crimes, but only with court approval.

Red: High Risks

Red AI systems, indicating high risk, will be subject to regulation, potentially requiring audits or certifications before they are allowed on the market and throughout their lifecycle.

Blue: Limited Risks

The blue systems represent a limited risk, suggesting a form of self-regulation with a risk analysis conducted by the company before the deployment of a product.

White: Minimal Risks

The white systems represent minimal risks, either because they don’t use personal data or because they automate tasks without significant impact. Such systems must still adhere to basic transparency requirements, enabling users to make well-informed decisions about whether to proceed with their usage.

What About ChatGPT and Generative AI?

Generative AI models, including ChatGPT, will have to adhere to transparency standards. For example, it will be imperative to indicate when content has been generated by AI. To prevent the generation of illegal content, safeguards will need to be built into model design. Lastly, providers will have to publish summaries of the copyrighted data used for training.

Outstanding Questions

Ongoing discussions and negotiations, especially regarding how to regulate large language models (LLMs) or foundation models, are part of the trialogue at the European level involving the Commission, the Parliament, and member-state governments. The key question is whether to regulate the applications of AI or the technology itself.

During a conference at the Sophia Summit last month, Claude Castelluccia, Research Director at INRIA and a member of the French National Commission on Informatics and Liberty (CNIL), said:

“The response to this issue remains unclear, with considerable lobbying efforts from companies developing foundation models advocating for regulation at the application level. But there is also lobbying from those working on applications utilizing foundation models to push for regulation at the technology level. The outcome of these negotiations is uncertain at this point.”

Questions also arise when applying the GDPR to machine learning, as incorporating user rights becomes complex.

“For instance, if a user requests the removal of their data from a database used to train a model, it raises intricate questions. Does this necessitate retraining the model? Federated machine learning could address such challenges.”

Federated learning is a technology that allows the training of high-quality models using data distributed across independent centers. Instead of consolidating data on a single central server, each center keeps its data secure, while the algorithms and predictive models move between them.

With this type of training, the raw data never leaves its center, so it stays protected.
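To make the idea concrete, here is a minimal sketch of federated averaging in Python. The setup is hypothetical (a toy linear model, synthetic data, and invented function names): each center computes an update on its own data, and only the model weights, never the raw records, are sent back to be averaged.

```python
# Minimal sketch of federated averaging, assuming a toy linear model
# trained with gradient descent; the data and names are hypothetical.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train on one center's private data; only the updated weights leave."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_round(weights, centers):
    """One round: each center trains locally, the server averages the weights."""
    local_weights = [local_update(weights, X, y) for X, y in centers]
    sizes = np.array([len(y) for _, y in centers], dtype=float)
    return np.average(local_weights, axis=0, weights=sizes)  # size-weighted mean

# Hypothetical scenario: three centers, each holding its own data locally.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
centers = []
for n in (50, 80, 120):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    centers.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, centers)  # raw X, y never leave their center
print(w)  # approaches true_w without ever pooling the data
```

In this sketch, the only information exchanged between the centers and the coordinating server is the model parameters, which is what allows the underlying records to remain where they were collected.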

Challenges

But challenges remain, starting with funding. While billions of dollars are spent annually to make AI more powerful, the funding allocated to research that makes AI understandable, free from bias, and safe is comparatively minimal, explained Mr. Castelluccia:

“I have a proposal for funding organizations: it would be beneficial if, for every €1 spent on developing AI, an additional €1 is dedicated to securing AI. If we examine the research conducted at AI institutes, there is a notable lack of emphasis on AI safety. Concrete programs and funding must be allocated to research in this critical area. I have advocated for increased investment in AI security for many years because, without adequate funding, research in this field cannot progress.”
