AI Act and Algorithmic Decisions: What Is Actually Regulated
The regulatory landscape for artificial intelligence in Europe has reached a critical inflection point. With the formal adoption and entry into force of Regulation (EU) 2024/1689, commonly known as the AI Act, the European Union has established the world’s first comprehensive legal framework specifically targeting AI systems. This legislation fundamentally alters how organizations design, deploy, and monitor algorithmic decision-making processes. The core objective is not to stifle innovation but to ensure that AI-driven decisions—particularly those affecting fundamental rights, safety, and democratic processes—are transparent, traceable, and subject to meaningful human oversight. For professionals working in AI development, robotics, biotechnology, and data systems, understanding the precise scope of what is regulated is no longer a theoretical exercise; it is an operational necessity.
The Act applies a risk-based approach, categorizing AI systems into four distinct tiers: unacceptable risk, high-risk, limited risk, and minimal or no risk. This stratification dictates the level of scrutiny and compliance obligations attached to each application. The most significant regulatory weight falls upon high-risk AI systems and those deemed unacceptable. However, the definition of what constitutes an “AI system” under the Act is broad and technology-neutral, encompassing systems based on machine learning, logic-based approaches, and statistical methods. This means that a wide array of algorithmic decision-making tools, from recruitment software to medical diagnostics and critical infrastructure management, fall squarely within the regulator’s purview.
Defining the Scope: What Constitutes an AI System?
Before dissecting the risk categories, it is essential to understand the Act’s definition of an AI system, as this determines the very applicability of the regulation. Article 2 and Annex I of the AI Act provide this definition, which is intentionally aligned with the OECD definition of AI to ensure a degree of international interoperability. An AI system is defined as a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
This definition is crucial because it distinguishes AI systems from simpler software systems. The key elements are autonomy, adaptiveness, and the capacity to infer from data to produce outputs that influence the environment. A traditional rule-based software that executes pre-defined commands without learning or adapting is likely not an AI system under the Act. However, a system that uses statistical methods to classify data or a machine learning model that predicts outcomes based on training data is clearly captured. This distinction is not always straightforward in practice, and regulatory sandboxes and guidance from the AI Office will be critical in clarifying these boundaries.
The Territorial and Material Scope
The AI Act has an extraterritorial reach similar to the GDPR. It applies not only to AI systems placed on the market or put into service within the EU, but also to providers (developers) and deployers (users) established in third countries if the AI system’s output is used in the EU. This means that a company in the United States or Asia providing an AI-powered recruitment tool to a European subsidiary must comply with the AI Act. The regulation also applies to providers and deployers in the EU, regardless of where the AI system is developed or deployed.
There are specific exclusions. The Act does not apply to AI systems developed or used exclusively for military, defense, or national security purposes. It also excludes AI systems used for the sole purpose of research and innovation, provided they do not place products on the market. Furthermore, there are specific derogations for open-source AI systems, though these are limited and generally do not apply to high-risk AI systems or general-purpose AI (GPAI) models with systemic risk.
Unacceptable Risk: Practices That Are Prohibited
The AI Act establishes a blacklist of AI practices that are considered a threat to fundamental rights and democratic values. These practices are prohibited entirely and cannot be deployed in the EU market or used by public authorities or private actors. The prohibitions, outlined in Article 5, target specific manipulative techniques and forms of social control.
Subliminal Manipulation and Exploitation of Vulnerabilities
The Act prohibits AI systems that deploy subliminal techniques beyond a person’s consciousness to materially distort a person’s behavior in a manner that causes or is reasonably likely to cause physical or psychological harm. This is a high threshold. It is not about targeted advertising or nudging; it is about techniques that manipulate decision-making without the individual’s awareness, leading to harm. Similarly, exploiting the vulnerabilities of a specific group of persons due to their age or disability is prohibited if the intent is to distort their behavior in a way that causes harm.
Social Scoring and Untargeted Scraping
Public authorities are prohibited from using AI systems for social scoring, defined as evaluating or classifying the trustworthiness of natural persons based on their social behavior or personal characteristics, where this leads to detrimental or unfavorable treatment. The Act makes a distinction: social scoring by private entities is not prohibited per se, but if it leads to detrimental treatment in contexts unrelated to the original data collection, it may be considered a high-risk system and subject to strict obligations. The Act also prohibits the use of AI systems to create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage. This directly targets the practice of building biometric datasets without consent.
Emotion Recognition and Biometric Categorization
Article 5 prohibits the use of AI systems for emotion recognition in workplaces and educational institutions, with a narrow exception for safety or medical purposes (e.g., monitoring a driver’s fatigue). It also prohibits biometric categorization systems that categorize individuals based on biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. The use of “real-time” remote biometric identification (RBI) in publicly accessible spaces is also prohibited, with very limited exceptions for law enforcement (e.g., searching for a missing person or preventing an imminent terrorist threat), which require prior judicial authorization.
Legal Interpretation: The prohibition on emotion recognition in workplaces is a significant development for HR tech and employee monitoring solutions. Companies developing tools to analyze employee sentiment or stress levels from facial expressions or voice tones must pivot their use cases or face a complete ban on placing such systems on the EU market for this purpose.
High-Risk AI Systems: The Core Regulatory Obligations
The most substantial compliance burden under the AI Act falls on high-risk AI systems. These are not prohibited, but their entry into the market is conditional on meeting a rigorous set of requirements. The identification of a system as “high-risk” follows a two-step logic defined in Article 6 and Annex III.
Step 1: Is it a Regulated Product?
First, the AI system must be intended as a safety component of a product, or the AI system itself is a product, that is covered by specific EU harmonization legislation listed in Annex II. This includes machinery, medical devices, vehicles, lifts, toys, and radio equipment. If the AI system is not a safety component of such a product, it is not considered high-risk under this first condition. For example, an AI system embedded in a medical ventilator would fall under this category because the ventilator is a medical device.
Step 2: Does it Fall into a High-Risk Use Case?
Second, even if not a safety component of a regulated product, an AI system is high-risk if it is listed in Annex III. This annex covers eight specific areas where AI systems are likely to have a significant impact on fundamental rights and safety. These include:
- Critical Infrastructure: AI systems used as safety components in the management and operation of road traffic, water, gas, and electricity supply.
- Education and Vocational Training: Systems used to determine access, assignment, or evaluation in education (e.g., exam proctoring, admissions scoring).
- Employment and Worker Management: AI used for recruitment (e.g., CV-sorting software), promotion, or termination decisions, and for task allocation based on behavior or personality traits.
- Access to Essential Services: Systems used by public authorities to evaluate the creditworthiness of natural persons or to establish their eligibility for public assistance benefits. This includes AI used by banks for credit scoring if it influences a decision on access to essential financial services.
- Law Enforcement and Justice: AI used to assess the reliability of evidence, predict the risk of re-offending, or for polygraphs.
- Migration, Asylum, and Border Control: Systems used to verify travel documents, assess asylum applications, or detect irregular border crossings.
- Administration of Justice and Democratic Processes: AI used to assist a judicial authority in researching and interpreting facts and applying the law, or to influence election outcomes.
Crucially, providers have the ability to “double-override” a system’s classification. If a provider considers an AI system listed in Annex III to be low-risk, they must document their assessment and inform the market surveillance authority. Conversely, if a provider believes an AI system not listed in Annex III is high-risk due to its potential impact, they can classify it as such. This places a significant burden of judgment on the provider.
Practical Examples of High-Risk Classification
To illustrate, consider an AI system used in recruitment. A simple tool that filters CVs based on keywords is likely not high-risk unless it is considered a safety component of a product. However, an AI system that analyzes video interviews to score candidates’ personality traits or emotional state would be classified as high-risk under Annex III, point 1(b), as it is used for employment decisions. Similarly, a bank’s AI-based credit scoring system is high-risk because it affects a person’s access to essential services (Annex III, point 1(a)). A biometric identification system used by a police force to identify suspects from a database is high-risk (Annex III, point 1(e)).
Core Obligations for High-Risk AI Systems
Once a system is classified as high-risk, the provider must comply with a comprehensive list of requirements before it can be placed on the market or put into service. These are not mere formalities; they represent a fundamental shift in how AI systems must be engineered and managed.
Risk Management and Data Governance
Providers must establish a robust, continuous risk management system. This involves identifying, analyzing, and mitigating risks throughout the entire lifecycle of the AI system. It includes specific procedures to handle reasonably foreseeable misuse and emerging risks. Parallel to this is a strict data governance regime. The data used to train, validate, and test the system must be relevant, representative, free of errors, and complete. Special attention must be paid to biases that could lead to discriminatory outcomes, particularly concerning protected characteristics under EU law.
Technical Documentation and Record-Keeping
Providers must draw up technical documentation before placing the system on the market. This documentation must demonstrate compliance with all requirements and provide a basis for assessment by notified bodies and authorities. It is akin to a “design file” for the AI system. Furthermore, the AI system must be designed to automatically record events (‘logs’) throughout its lifecycle. These logs must be sufficient to ensure traceability, enabling post-market monitoring and, if necessary, investigation by authorities. For high-risk AI systems that interact with humans, the logs must also record the time of each interaction.
Transparency and Human Oversight
High-risk AI systems must be designed and developed in a way that ensures their operation is sufficiently transparent to enable deployers to interpret the system’s output and use it appropriately. This includes providing clear and adequate instructions for use. Most importantly, they must be designed to allow for effective human oversight. This is not just a suggestion; it is a design requirement. The goal is to prevent or minimize the risks to health, safety, or fundamental rights that may arise from the use of an AI system. The oversight must be exercisable by a natural person who has the competence, training, and authority to intervene and override the system’s decision where necessary.
Accuracy, Robustness, and Cybersecurity
High-risk AI systems must achieve levels of accuracy, robustness, and cybersecurity based on the intended purpose. Performance metrics must be defined and evaluated. The system must be resilient to errors, faults, or inconsistencies that may arise during its operation. It must also be protected against attempts by third parties to alter its use, outputs, or performance through cyberattacks. This requires a security-by-design and privacy-by-design approach.
Conformity Assessment and the Role of Notified Bodies
Before a high-risk AI system can be placed on the market, it must undergo a conformity assessment procedure. The type of procedure depends on the AI system’s classification. For high-risk AI systems that are also safety components of products covered by other EU legislation (e.g., medical devices), the conformity assessment is typically carried out by a third-party conformity assessment body, known as a notified body. For high-risk AI systems listed only in Annex III (and not covered by other legislation), the provider can perform a conformity assessment themselves (internal control), but they must register the system in an EU database first. The AI Act introduces the possibility for national authorities to require a third-party assessment for certain high-risk systems if they present a high risk to health and safety or fundamental rights.
Post-Market Monitoring and Reporting
Compliance does not end once the system is on the market. Providers must establish a post-market monitoring system to actively collect experience from the use of their AI systems. This system must allow them to identify and, if necessary, address any emerging risks or need for corrective actions. If a high-risk AI system presents a risk to health, safety, or fundamental rights, providers have a strict obligation to report serious incidents to the national market surveillance authorities without undue delay. This reporting duty is similar to the one under the GDPR and is critical for enabling rapid regulatory intervention.
General-Purpose AI (GPAI) and Systemic Risk
The AI Act introduces a specific, parallel regime for General-Purpose AI (GPAI) models, recognizing their unique nature and potential for widespread impact. A GPAI model is defined as an AI model that displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of how it is placed on the market. This category is distinct from “high-risk AI systems” and targets the foundational models that power a multitude of downstream applications.
Two Tiers of Obligations for GPAI Providers
The obligations for GPAI models are tiered. All GPAI providers, regardless of their systemic risk, must meet baseline transparency requirements. This includes:
- Drawing up and maintaining technical documentation.
- Providing information and documentation to downstream providers who intend to integrate the GPAI model into their own high-risk AI systems or other applications.
- Complying with the EU copyright directive, specifically making publicly available training data summaries.
- Appointing a representative in the EU if the provider is established in a third country.
Systemic Risk and Additional Obligations
A subset of GPAI models is classified as having systemic risk. A GPAI model is considered to present systemic risk if it has high-impact capabilities, meaning its capabilities are advanced and could have a significant negative impact on public health, safety, fundamental rights, or society at large. The Act presumes that models with capabilities equivalent to or greater than those listed in Annex XIII (e.g., state-of-the-art models) have systemic risk. The provider must self-assess this and notify the Commission if their model meets the threshold.
Providers of GPAI models with systemic risk are subject to additional obligations, including:
- Performing model evaluations and adversarial testing (red-teaming) to identify and mitigate systemic risks.
- Assessing and mitigating potential systemic risks at the EU level, including through international cooperation where appropriate.
- Implementing a risk management system and ensuring a high level of cybersecurity for the model and its physical infrastructure.
- Reporting on serious incidents to the AI Office and national authorities.
- Ensuring a level of interpretability and explainability of their models.
The AI Office, a new EU-level body, will have a central role in monitoring and supervising the implementation of these rules for GPAI models, especially those with systemic risk.
Interaction with Existing EU Legislation
The AI Act does not operate in a vacuum. It is designed to be part of a coherent legal framework alongside the GDPR, the Product Liability Directive, and sector-specific regulations. Understanding these interactions is vital for legal certainty.
Relationship with the GDPR
There is a significant overlap between the AI Act and the GDPR, particularly concerning automated decision-making. Article 22 of the GDPR grants individuals the right not to be subject to a decision based solely on automated processing that produces legal effects or similarly significantly affects them. The AI Act complements this by imposing design and operational requirements on the systems themselves. For example, the AI Act’s requirement for human oversight directly supports the GDPR’s framework for meaningful human review. When a high-risk AI system is used in a context where Article 22 GDPR applies, the deployer must ensure that the system is designed in a way that allows for such human review. The AI Act’s data governance requirements also echo the GDPR’s principles of data minimization, accuracy, and fairness.
Relationship with the Product Liability Directive
The revised Product Liability Directive (PLD) explicitly includes AI systems within its scope. This means that a provider of a high-risk AI system can be held liable for damages caused by a defective product, just like a manufacturer of a physical product. The AI Act’s requirements for risk management, documentation, and conformity assessment will serve as evidence in liability claims. A failure to comply with the AI Act’s obligations can be used to establish the defectiveness of the AI system under the PLD.
National Implementations and Regulatory Sandboxes
While the AI Act is a Regulation, meaning it is directly applicable across
