B&R’s AI Act series - The risk-based approach of the AI Act

As explained in Episode 1 of B&R’s AI Act series, the AI Act, proposed by the European Commission (COM(2021) 206 final), was approved by the European Parliament on March 13, 2024. It is a legislative framework aiming to regulate the ethical use of artificial intelligence (“AI”) within the European Union (the “EU”), addressing the challenges it raises while promoting responsible innovation.

What is the risk-based approach of the AI Act?

The risk-based approach of the AI Act refers to a regulatory strategy that assesses and addresses the potential risks associated with the development, deployment, and use of artificial intelligence (AI) systems. The AI Act proposes a four-tiered risk classification system:

 

  1. Prohibited AI systems

Under the AI Act, AI systems that pose an unacceptable risk to the health, safety, and fundamental rights of individuals are prohibited. These systems are deemed inherently harmful or unethical and are therefore banned.

Social scoring, whether conducted by public or private entities, is one example of an AI practice presenting an unacceptable risk. Social scoring involves the automated analysis of individuals’ behavior, activities, or personal data to assign them a score that can influence various aspects of their lives, such as access to employment, housing, or financial services. Such systems can lead to discriminatory practices, violations of privacy, and the erosion of fundamental rights, ultimately undermining societal trust and cohesion.

Real-time remote biometric identification systems used in publicly accessible spaces for law enforcement purposes, as well as facial recognition databases built through the untargeted scraping of facial images, are also prohibited.

  2. High-risk AI systems

The AI Act identifies two categories of high-risk AI systems:

  • AI systems intended to be used as a safety component of a product, or which are themselves a product, covered by specific EU legislation, such as civil aviation, vehicle safety, marine equipment, toys, pressure equipment and personal protective equipment; and
  • AI systems listed in Annex III, such as remote biometric identification systems and AI used in education, employment, credit scoring, law enforcement, migration and the democratic process.

Before such systems can be placed on the market, they must undergo thorough evaluations to ensure they comply with regulatory standards, safeguarding against risks such as discrimination or the unfair treatment of individuals based on, for example, their financial status. This approach aims to mitigate the potential harms associated with high-risk AI systems while promoting their responsible development and deployment.

The obligations for high-risk AI systems differ between the provider and the deployer, who must comply with the following requirements respectively (some of which are already partially covered in the CSSF white paper dated 2018).

Providers’ obligations (a provider being a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge):

  • Comply with the requirements for high-risk AI systems (Title III, Chapter 2) (Art 16);
  • Establish a quality management system;
  • Keep the required documentation;
  • Comply with logging obligations;
  • Conduct a pre-market conformity assessment;
  • Register the AI system in the EU database (Art 49 & 51); and
  • Conduct post-market monitoring and take corrective actions in case of non-conformity.

Deployers’ obligations (a deployer being any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity):

  • Operate the AI system in accordance with its instructions of use;
  • Ensure adequate human oversight;
  • Ensure that input data is relevant for the intended purpose;
  • Monitor the system’s operation for possible risks and inform the provider;
  • Inform the provider/importer/distributor/market surveillance authority about any serious incident; and
  • Where applicable, conduct a data protection impact assessment and a fundamental rights impact assessment.

  3. Limited-risk AI systems

Limited-risk AI systems must comply with transparency requirements (Art 52). These entail informing users that they are engaging with software rather than a human and disclosing when content has been artificially generated. Unless it is obvious from the context, the provider or the user must inform any person exposed to the AI system, in a timely and clear manner, that they are interacting with an AI system.

Where appropriate and relevant, this information should include which functions are AI-enabled, whether there is human oversight, who is responsible for decision-making, and what the rights to object and seek redress are.

For example, chatbots (such as ChatGPT) and deepfakes fall into this category.

  4. Minimal-risk AI systems

AI systems categorized as presenting minimal or no risk are allowed without additional obligations under the AI Act. The European Commission may in the future publish codes of conduct, with which providers may be asked to comply on a voluntary basis (Art 69).

For instance, spam filters and AI-enabled video games fall into this category.

How can B&R help you?

Our cross-practice teams of specialists provide the extensive legal knowledge required to navigate the multidisciplinary issues raised by AI and the AI Act, including data protection, regulatory, compliance, corporate and contract matters.

If you have any questions regarding the AI Act, please contact us at welcome@brouxelrabia.lu.

Samia RABIA
Partner

Miroslava DUDAS
Counsel

Margot GEISLER
Associate