As already explained in Episode 1 of B&R's AI Act series, the AI Act, proposed by the European Commission (COM 2021/0206), was approved by the European Parliament on March 13, 2024. It is a legislative framework aiming to regulate the ethical use of artificial intelligence ("AI") within the European Union (the "EU"), addressing its challenges and promoting responsible innovation.
The risk-based approach of the AI Act is a regulatory strategy that assesses and addresses the potential risks associated with the development, deployment, and use of AI systems. The AI Act establishes a four-tiered risk classification system:
Under the AI Act, AI systems that pose significant risks to the health, safety, and fundamental rights of individuals are prohibited. These AI systems are deemed inherently harmful or unethical and are therefore banned.
For instance, an AI system presenting an unacceptable risk is social scoring, whether conducted by public or private entities. Social scoring involves the automated analysis of individuals’ behavior, activities, or personal data to assign them a score that can influence various aspects of their lives, such as access to employment, housing, or financial services. Such systems can lead to discriminatory practices, violations of privacy, and erosion of fundamental rights, ultimately undermining societal trust and cohesion.
Real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, as well as facial recognition databases built on untargeted scraping, are also prohibited AI systems.
The AI Act identifies two categories of AI systems that are regarded as high-risk:
Before such systems are placed on the market, they must undergo thorough evaluations to ensure they comply with regulatory standards, safeguarding against potential risks such as discrimination or unfair treatment of individuals based on their financial status. This regulatory approach aims to mitigate the potential harms associated with high-risk AI systems while promoting their responsible development and deployment.
The obligations for high-risk AI systems differ between the provider and the deployer, and each must comply with the following requirements respectively (some of which are already partially covered in the CSSF white paper dated 2018).
Providers’ obligations (a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model and places them on the market or puts the system into service under its own name or trademark, whether for payment or free of charge):
Deployer's obligations (any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity):
Limited-risk AI systems must comply with transparency requirements (Art 52). These entail informing users when they are engaging with software rather than a human, and disclosing when content has been artificially generated. The provider or the user of the AI system must inform any person exposed to it, in a timely and clear manner, that they are interacting with an AI system, unless this is obvious from the context.
Where appropriate and relevant, this includes information on which functions are AI-enabled, whether there is human oversight, who is responsible for decision-making, and what rights to object and seek redress exist.
For example, chatbots (such as ChatGPT) and deepfakes fall under this category.
AI systems categorized as posing minimal or no risk are permitted without additional obligations under the AI Act. The European Commission may publish a code of conduct in the future, with which compliance would be voluntary (Art 69).
For instance, spam filters and AI-enabled video games fall into this category.
How can B&R help?
Our cross-practice teams of specialists provide the extensive legal knowledge required to navigate the multidisciplinary intersection of AI and the AI Act, including data protection, regulatory and compliance matters, corporate law, and contracts.
If you have any questions regarding the AI Act, please contact us at welcome@brouxelrabia.lu