Artificial Intelligence Act | Legal Flash nr. 106
Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (the Artificial Intelligence Act/AI Act) was published in the Official Journal of the European Union on 12 July 2024.
The Artificial Intelligence Act aims to improve the functioning of the internal market by setting up a uniform legal framework in particular for the development, the placing on the market, the putting into service, and the use of artificial intelligence systems (AI systems) in the European Union.
The Act sets out the principles for the ethical, safe, and trustworthy use of AI systems.
Main objectives of the AI Act:
- Addressing the risks created by AI systems;
- Banning AI systems that pose unacceptable risks;
- Establishing criteria for classifying high-risk AI systems;
- Setting clear requirements that high-risk AI applications must meet;
- Defining specific obligations imposed on deployers and providers of high-risk AI applications;
- Assessing the compliance of an AI system before it becomes operational or is made available to the public;
- Providing rules on how the Act applies to AI systems already placed on the market;
- Establishing a governance framework at both European and national levels.
The AI Act establishes a classification of AI systems based on risk, identifying the following categories (sketched in code after this list):
- Unacceptable-risk AI (e.g., behavioral-cognitive manipulation, predictive policing, emotion recognition in workplaces and educational institutions, and social-behavior scoring) will be prohibited at EU level.
- High-risk AI (e.g., AI used in university admissions, medical applications, or finance) is subject to several obligations, including:
- assessing and managing all the risks for people subject to its decisions;
- ensuring suitable quality data to train the AI system;
- documenting all the technical properties of the AI, keeping records of events and incidents, logging the decision-making process, and allowing human intervention;
- transparency towards persons affected through notices and disclaimers and towards authorities by registering the application in a database.
- Limited-risk AI (e.g., chatbots and AI-generated photos and videos) is subject to transparency obligations: businesses must inform the persons affected that they are interacting with a machine or with machine-generated content.
- Minimal or no risk (e.g., a spam filter). Most AI systems fall into this category; they can be used without restriction and are not affected by the EU Artificial Intelligence Act.
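For teams building an internal AI inventory, the four tiers can be modeled as a simple lookup. The Python sketch below is purely illustrative: the tier names follow the Act, but the mapping of example systems to tiers is our assumption, and classifying a real system always requires a case-by-case legal assessment.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the AI Act."""
    UNACCEPTABLE = "prohibited in the EU"
    HIGH = "strict obligations (risk management, data quality, logging, oversight)"
    LIMITED = "transparency obligations only"
    MINIMAL = "no AI Act restrictions"

# Illustrative mapping only: the tier of a real system depends on a
# case-by-case legal assessment, not on its product category alone.
EXAMPLE_SYSTEMS = {
    "social-scoring engine": RiskTier.UNACCEPTABLE,
    "university admissions screener": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```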
Furthermore, the AI Act sets out obligations for both “providers” of AI technologies and “deployers” (businesses that use AI). The extent of these obligations depends on the level of risk attributed to the AI system or model.
Special attention is given by the AI Act to general-purpose AI models: AI systems with a wide range of possible uses, both intended and unintended by their developers, which can be applied to many different tasks in various fields. General-purpose AI models are typically trained on large amounts of data through methods such as self-supervised, unsupervised, or reinforcement learning; examples include Llama, Gemma, ChatGPT, and Chinchilla.
The AI Act requires providers of these models to:
- draw up and maintain technical documentation,
- provide information on the capabilities and limitations of the model, and
- put in place a policy to comply with copyright and related rights.
General-purpose AI models with systemic risk are subject to additional obligations (see the sketch after this list), such as:
- performing model evaluations and assessing and mitigating possible systemic risks and
- reporting serious incidents to the competent authorities.
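One concrete criterion worth noting: under Article 51 of the AI Act, a general-purpose AI model is presumed to have systemic risk when the cumulative compute used to train it exceeds 10^25 floating-point operations (the Commission may also designate models on other grounds). A minimal sketch of that presumption, with a helper function of our own naming:

```python
# Minimal sketch: the AI Act (Art. 51) presumes a general-purpose AI model
# carries systemic risk when its cumulative training compute exceeds
# 10^25 FLOPs. Function name and inputs are illustrative, not from the Act.
SYSTEMIC_RISK_FLOPS = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if the model is presumed to carry systemic risk."""
    return training_flops > SYSTEMIC_RISK_FLOPS

print(presumed_systemic_risk(3e24))  # False: below the threshold
print(presumed_systemic_risk(5e25))  # True: additional obligations apply
```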
The provisions of the AI Act become applicable in stages (summarized in the sketch after this list):
- Starting with February 2nd, 2025, businesses can no longer use unacceptable-risk AI;
- Starting with August 2nd, 2025, the obligations for general-purpose AI governance become applicable;
- Starting with August 2nd, 2026, most of the obligations of the Act become applicable, including some of the obligations for high-risk AI systems;
- Starting with August 2nd, 2027, the remaining obligations for high-risk AI systems will apply.
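A minimal sketch of how a compliance team might encode this phased timeline; the dates mirror the milestones above, while the function and labels are our own illustration:

```python
from datetime import date

# Illustrative timeline of the AI Act's phased applicability,
# mirroring the milestones listed above.
MILESTONES = [
    (date(2025, 2, 2), "prohibitions on unacceptable-risk AI apply"),
    (date(2025, 8, 2), "general-purpose AI governance obligations apply"),
    (date(2026, 8, 2), "most obligations apply, incl. many high-risk rules"),
    (date(2027, 8, 2), "remaining high-risk obligations apply"),
]

def obligations_in_force(on: date) -> list[str]:
    """Return the milestones already applicable on a given date."""
    return [label for when, label in MILESTONES if when <= on]

print(obligations_in_force(date(2026, 1, 1)))
```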
Businesses that fail to comply with the obligations of the AI Act risk fines of up to EUR 35 million or 7% of their total worldwide annual turnover, whichever is higher.
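As a quick worked example (a sketch, not legal advice): the ceiling for the most serious infringements is the higher of the fixed amount and the turnover-based amount.

```python
# Worked example of the fine ceiling for the most serious infringements:
# EUR 35 million or 7% of total worldwide annual turnover, whichever is higher.
def max_fine_eur(worldwide_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * worldwide_turnover_eur)

print(max_fine_eur(200_000_000))    # 35,000,000: the fixed floor dominates
print(max_fine_eur(1_000_000_000))  # 70,000,000: 7% of turnover dominates
```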
The AI Act establishes a governance framework at both European and national levels; under the regulation, the following authorities are responsible for AI supervision:
- European Commission;
- AI Office - contributing to the implementation, monitoring, and supervision of AI systems and general-purpose AI models, and to AI governance;
- national competent authorities.
Businesses developing AI systems for the EU market or using AI systems in their client services should implement compliance measures from the outset, such as technical and organizational security measures, human oversight, and transparency about AI usage.
Businesses should evaluate their role and the risk level associated with their AI systems to implement measures that enhance transparency, accountability, and risk mitigation.
The new rules make it far more important for businesses to map and inventory the data used in their activities. Data in this context covers not only non-personal data subject to other European legal acts (e.g., DORA, NIS2, the European Data Act) but also personal data subject to the GDPR.
Our team of specialized lawyers can assist you on the long journey that compliance with European Tech&Data legislation may entail.