EU AI Act Risk Categories Explained With Examples
The European Union’s Artificial Intelligence Act (AI Act) establishes a regulatory framework based on a tiered risk approach, a design principle that seeks to balance innovation with the protection of fundamental rights, democracy, and public safety. This risk-based model is the cornerstone of the legislation, moving away from a technology-neutral or sector-specific stance to a horizontal regulation that classifies AI systems according to their potential for harm. Understanding these categories is not merely an academic exercise; it is a foundational requirement for any entity developing, deploying, importing, or distributing AI systems within the EU market. The classification determines the applicable legal obligations, which can range from a simple transparency requirement to a comprehensive conformity assessment and quality management system. This article provides a detailed analysis of the four risk tiers—unacceptable, high, limited, and minimal—exploring their legal definitions, practical implications, and real-world examples across various sectors, while also highlighting the interplay between the EU-level regulation and national implementation.
Prohibited AI Systems: The Red Line of Unacceptable Risk
The AI Act’s most stringent category targets AI systems deemed to pose a clear threat to the rights and values enshrined in the EU. These systems are subject to a complete prohibition, meaning they cannot be placed on the market or put into service within the Union. The rationale is that certain practices are so harmful and contrary to European values that no technical safeguards or risk mitigation measures could ever justify their use. The prohibitions, outlined in Article 5 of the Act, are not abstract; they target specific, identifiable use cases.
Cognitive Manipulation and Subliminal Techniques
The Act prohibits AI systems that deploy subliminal techniques beyond a person’s consciousness to materially distort a person’s behaviour in a manner that causes, or is reasonably likely to cause, that person or another person significant harm. This is a high bar. It is not about targeted advertising or persuasive design (which fall under other provisions). It is about techniques that bypass conscious, rational deliberation. For example, an AI system embedded in a social media platform that analyses a user’s emotional state in real time through facial recognition and micro-expressions, and then subtly alters the content feed to induce a state of severe anxiety or depression to keep the user engaged, could fall under this prohibition. The key elements are the subliminal nature of the intervention and the likelihood of significant harm.
Exploitation of Vulnerabilities
Another prohibited category involves AI systems that exploit the vulnerabilities of a person or a specific group of persons due to their age, disability, or a specific social or economic situation, with the objective or effect of materially distorting their behaviour in a way that is likely to cause them significant harm. While educational or therapeutic tools designed for children or individuals with specific disabilities are not inherently prohibited, an AI system designed to, for instance, manipulate elderly individuals with early-stage dementia into making detrimental financial decisions would be illegal. The focus is on the exploitation of the vulnerability and the resulting potential for harm. A national data protection authority might also scrutinise such systems under the GDPR, but the AI Act adds a prohibition at the level of market access.
Social Scoring and Biometric Categorisation
The Act explicitly bans social scoring. Article 5(1)(c) prohibits AI systems that evaluate or classify individuals or groups based on their social behaviour or personal characteristics, where the resulting score leads to detrimental or unfavourable treatment in social contexts unrelated to the contexts in which the data was originally generated, or treatment that is unjustified or disproportionate to the behaviour. The prohibition covers public and private actors alike, although a private loyalty programme that rewards behaviour within a single commercial context will not normally meet these criteria. The Act also prohibits biometric categorisation systems that use biometric data to infer or deduce sensitive attributes such as race, political opinions, trade union membership, religious beliefs, or sexual orientation. Again, this applies to both public and private actors, with a narrow exception for the lawful labelling or filtering of biometric datasets in the law enforcement context. An AI system in a public space that uses facial recognition to identify individuals and assign them to such sensitive categories for targeted advertising or public sentiment analysis would be a clear violation.
Real-Time Remote Biometric Identification in Publicly Accessible Spaces
Perhaps the most discussed prohibition concerns the use of real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement purposes. The AI Act establishes a general prohibition but carves out a strictly defined exception: the targeted search for specific victims of crime, the prevention of a specific and imminent terrorist threat, or the localisation of a person suspected of having committed one of the serious crimes listed in the Act (e.g., murder, rape, human trafficking). Even then, deployment requires prior authorisation by a judicial authority or an independent administrative authority and is subject to stringent safeguards. Post-event analysis of recorded footage is not covered by this prohibition, but it remains subject to the high-risk rules and the GDPR. This creates a significant divergence from some national approaches; for example, the use of live facial recognition by police in certain parts of the United Kingdom has faced legal challenges, whereas the EU’s framework now provides a clear, albeit restrictive, legal basis for such use under specific conditions.
High-Risk AI Systems: The Core of the Regulatory Framework
The high-risk category constitutes the central pillar of the AI Act. It encompasses AI systems used in critical areas where a malfunction or an inappropriate use could lead to significant harm. These systems are not banned, but they are subject to a rigorous set of obligations before they can enter the EU market or be put into service. The Act distinguishes between two types of high-risk systems:
- AI systems that are a safety component of a product, or are themselves a product, covered by existing EU safety legislation (e.g., machinery, medical devices, vehicles).
- AI systems falling into specific, listed high-risk use cases in Annex III, which include areas like biometrics, critical infrastructure, employment, education, and access to essential services.
It is crucial to note that an AI system falling within Annex III is presumed to be high-risk, but the provider can rebut this presumption by demonstrating that the system does not pose a significant risk of harm to health, safety, or fundamental rights, for instance because it only performs a narrow procedural task or prepares a human assessment. This derogation requires a robust, documented justification before the system is placed on the market, and it is not available where the system performs profiling of natural persons.
Core Obligations for High-Risk AI Providers
The obligations for providers of high-risk AI systems are extensive and form a comprehensive quality and risk management framework. These are not mere suggestions; they are legal requirements for market access.
Risk Management System
Providers must establish, implement, document, and maintain a risk management system throughout the entire lifecycle of the AI system. This is a continuous process of identifying, analysing, evaluating, and mitigating risks. It must cover both the risks to health and safety and the risks to fundamental rights. For example, a provider of an AI system for recruitment must assess not only the risk of the system failing to identify a qualified candidate but also the risk of it discriminating against candidates based on protected characteristics.
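To make the lifecycle aspect concrete, the sketch below shows one possible way a provider might represent risk-register entries in code, so that each identified risk, its assessment, and its mitigation status can be reviewed and updated over time. The structure, scoring scale, and field names are illustrative assumptions, not something prescribed by the Act.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class RiskEntry:
    """One identified risk in the provider's risk management system."""
    description: str          # e.g. "Model ranks candidates from a protected group lower"
    affected_interest: str    # health, safety, or a fundamental right
    severity: int             # 1 (negligible) .. 5 (critical), illustrative scale
    likelihood: int           # 1 (rare) .. 5 (almost certain)
    mitigations: List[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

    @property
    def residual_score(self) -> int:
        # Simple severity x likelihood scoring; a real system would document its own method.
        return self.severity * self.likelihood

def risks_needing_attention(register: List[RiskEntry], threshold: int = 12) -> List[RiskEntry]:
    """Return risks whose residual score reaches the provider's acceptance threshold."""
    return [r for r in register if r.residual_score >= threshold]

register = [
    RiskEntry(
        description="Recruitment model scores candidates from one group systematically lower",
        affected_interest="Non-discrimination (fundamental right)",
        severity=4,
        likelihood=3,
        mitigations=["Re-balance training data", "Add per-group performance monitoring"],
    )
]
for risk in risks_needing_attention(register):
    print(f"Review required: {risk.description} (score {risk.residual_score})")
```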
Data and Data Governance
The quality of the data used to train, validate, and test the AI system is paramount. The Act requires that training, validation, and testing data sets be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete in view of the intended purpose. They must also be subject to appropriate data governance practices, including an examination of possible biases. For a biometric identification system, this means the training data must be sufficiently diverse across different demographic groups to avoid discriminatory performance. This obligation links the AI Act directly to the GDPR’s data accuracy principle and to the broader principle of non-discrimination.
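As an illustration of what an examination of possible biases can look like in practice, the following sketch compares a model’s accuracy across demographic groups on a labelled test set and flags groups that fall outside a chosen tolerance. The grouping key, metric, and threshold are assumptions for the example; the Act does not prescribe any specific test.

```python
from collections import defaultdict

def per_group_accuracy(records):
    """records: iterable of (group_label, true_label, predicted_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, prediction in records:
        total[group] += 1
        correct[group] += int(truth == prediction)
    return {group: correct[group] / total[group] for group in total}

def flag_underperforming_groups(records, max_gap=0.05):
    """Flag groups whose accuracy trails the best-performing group by more than max_gap."""
    accuracy = per_group_accuracy(records)
    best = max(accuracy.values())
    return {g: acc for g, acc in accuracy.items() if best - acc > max_gap}

# Hypothetical evaluation data: (demographic group, ground truth, model output)
test_records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 0),
]
print(flag_underperforming_groups(test_records))  # e.g. {'group_b': 0.5}
```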
Technical Documentation and Record-Keeping
Providers must draw up technical documentation before placing the system on the market. This documentation must demonstrate compliance with the Act’s requirements and enable authorities to assess it; it is akin to the design dossier required for complex regulated products. Furthermore, high-risk systems must be technically capable of automatically recording events (logs) throughout their lifecycle to ensure traceability. For a high-risk AI system used in credit scoring, the logs must allow a human reviewer to understand the factors that led to a specific decision to reject an applicant.
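A minimal sketch of what automatic event logging for traceability could look like in a credit-scoring context is shown below; it writes one structured record per decision so that a reviewer can later reconstruct which model version and which input factors produced a given outcome. The field names and the JSON-lines format are illustrative choices, not requirements of the Act.

```python
import json
import uuid
from datetime import datetime, timezone

LOG_PATH = "decision_log.jsonl"  # append-only log file (illustrative)

def log_decision(model_version, applicant_features, score, decision, top_factors):
    """Append one traceable decision record as a JSON line."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_features": applicant_features,   # consider pseudonymisation under the GDPR
        "score": score,
        "decision": decision,
        "top_factors": top_factors,             # what drove the score, for human review
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_decision(
    model_version="credit-model-2.3.1",
    applicant_features={"income": 32000, "existing_loans": 2, "late_payments_12m": 1},
    score=0.41,
    decision="reject",
    top_factors=["late_payments_12m", "existing_loans"],
)
```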
Transparency and Provision of Information
While the system is in operation, it must be transparent enough to allow deployers (the entities using the AI) to understand its capabilities and limitations. Providers must ensure the system is accompanied by clear instructions for use, including its intended purpose, level of accuracy, and known limitations. For an AI system assisting a doctor in diagnosing cancer from medical images, the instructions must clearly state the system’s accuracy rate, the types of images it works best with, and situations where a human specialist’s review is essential.
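As an illustration, the information accompanying a high-risk system could also be maintained as a structured, machine-readable record alongside the human-readable instructions for use; the fields below mirror the items mentioned above and are an illustrative selection, not a prescribed schema, and the system name is invented.

```python
import json

# Illustrative "instructions for use" record for a hypothetical diagnostic-support system.
instructions_for_use = {
    "system_name": "example-imaging-assistant",
    "intended_purpose": "Assist radiologists in flagging suspicious lesions on chest CT scans",
    "accuracy": {"sensitivity": 0.93, "specificity": 0.88, "evaluation_dataset": "internal test set v4"},
    "known_limitations": [
        "Performance degrades on scans with heavy motion artefacts",
        "Not validated for paediatric patients",
    ],
    "human_oversight": "Output is a flag for review only; diagnosis must be confirmed by a specialist",
}

print(json.dumps(instructions_for_use, indent=2))
```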
Human Oversight
High-risk AI systems must be designed to enable effective human oversight. This is not just a recommendation; it is a design requirement. The goal is to prevent or minimise risks to health, safety, or fundamental rights. The human overseer must be in a position to understand the system’s capacities and limitations, to correctly interpret its output, and to override or ignore the system’s decision. For an AI system used in a critical infrastructure context (e.g., managing a power grid), the human operator must be able to intervene at any time.
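The sketch below illustrates one possible pattern for such oversight: the system only ever produces a recommendation, and a named human operator must confirm or replace it before anything takes effect, with any override recorded. The names and the grid-operation scenario are assumptions for the example, not a design mandated by the Act.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Recommendation:
    action: str            # what the AI system proposes, e.g. "shed load on feeder 12"
    confidence: float      # model confidence, shown to the operator
    rationale: str         # human-readable summary of why

@dataclass
class FinalDecision:
    action: str
    decided_by: str        # the human operator is the decision-maker of record
    overrode_ai: bool

def decide_with_oversight(
    recommendation: Recommendation,
    operator_id: str,
    operator_choice: Callable[[Recommendation], Optional[str]],
) -> FinalDecision:
    """Ask the human operator to confirm the AI recommendation or substitute their own action."""
    chosen = operator_choice(recommendation)
    if chosen is None:                      # operator accepts the recommendation
        return FinalDecision(recommendation.action, operator_id, overrode_ai=False)
    return FinalDecision(chosen, operator_id, overrode_ai=True)

# Example: the operator rejects the proposed action and substitutes a manual one.
rec = Recommendation(action="shed load on feeder 12", confidence=0.62,
                     rationale="Forecast demand exceeds capacity margin")
decision = decide_with_oversight(rec, "operator-7", lambda r: "increase reserve generation")
print(decision)
```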
Accuracy, Robustness, and Cybersecurity
The system must achieve an appropriate level of accuracy, robustness, and cybersecurity. It must perform consistently as intended and be resilient against errors, faults, or inconsistencies. This also means it must be protected against attempts to alter its use or output by third parties (e.g., adversarial attacks). A provider of an AI system for autonomous driving must prove its performance under a wide range of weather and traffic conditions and demonstrate its resilience against hacking attempts.
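As a simplified illustration of robustness testing, the sketch below perturbs a model’s numeric inputs with small amounts of noise and checks whether the output stays within a tolerance. Real robustness and adversarial testing for something like autonomous driving is far more involved; the stand-in model, noise level, and tolerance here are assumptions.

```python
import random

def model(features):
    """Stand-in for a trained model: a fixed linear score over two inputs."""
    return 0.7 * features["speed_kmh"] + 0.3 * features["distance_m"]

def perturbation_test(features, noise_scale=0.01, trials=100, tolerance=0.05):
    """Return the share of small random perturbations that keep the output within tolerance."""
    baseline = model(features)
    stable = 0
    for _ in range(trials):
        noisy = {k: v * (1 + random.uniform(-noise_scale, noise_scale)) for k, v in features.items()}
        if abs(model(noisy) - baseline) <= tolerance * abs(baseline):
            stable += 1
    return stable / trials

print(perturbation_test({"speed_kmh": 50.0, "distance_m": 30.0}))  # expect close to 1.0 for a stable model
```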
Conformity Assessment and the Role of Notified Bodies
Before a high-risk AI system can be placed on the market, it must undergo a conformity assessment procedure to verify that it complies with the Act. For AI systems that are also safety components of products covered by other EU legislation (e.g., a medical device), the conformity assessment can be integrated into the existing procedure for that product. For standalone high-risk AI systems listed in Annex III, the provider can generally perform the assessment through internal control; for biometric systems, however, a third-party Notified Body must be involved unless the provider has applied harmonised standards or common specifications in full. A Notified Body is an independent organisation designated by a national authority to assess the conformity of products before they are placed on the market. This introduces an external check on the most sensitive systems.
Practical Examples Across Sectors
Biometrics: An AI system for remote biometric identification (post-event) is high-risk. So are biometric categorisation systems based on sensitive or protected attributes, to the extent they are not already prohibited, and emotion recognition systems used outside the workplace and education settings in which they are banned outright. A provider of such a system must comply with all obligations, including rigorous testing for bias across different ethnic groups.
Critical Infrastructure: An AI system used to manage the traffic flow in a major city is high-risk. A failure could cause accidents or gridlock. The provider must ensure robustness against sensor failures and cyberattacks, and the city deployer must ensure human traffic managers can intervene.
Employment and Workers Management: An AI tool used to screen CVs and rank job applicants is high-risk. The provider must document the data used to train the model to show it is representative and free from historical biases. The employer using the tool must ensure it does not automatically reject candidates from a protected group and must have a human in the loop to make the final hiring decision.
Education and Vocational Training: An AI system that assesses students for university admissions is high-risk. The provider must demonstrate the system’s accuracy and robustness, and the educational institution must use it in a way that allows for human review of borderline or contested cases. The risk of an AI system perpetuating socio-economic biases in education is a key concern.
Access to Essential Services: AI systems used to assess creditworthiness or determine eligibility for public benefits are high-risk. These are areas with a direct and significant impact on people’s lives. The provider must ensure the system is transparent and explainable enough for a loan officer or a social worker to understand its decisions. The deployer (e.g., a bank or a public agency) must ensure the system is not the sole basis for a negative decision affecting an individual.
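For a simple, linear scoring model, the kind of decision explanation a loan officer needs can be produced directly from the model’s weights, as in the sketch below. The weights and applicant features are invented for illustration, and more complex models would need dedicated explanation techniques.

```python
def explain_linear_score(weights, features, top_n=3):
    """Return the overall score and the features contributing most to it (positively or negatively)."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda item: abs(item[1]), reverse=True)
    return score, ranked[:top_n]

# Hypothetical model weights and a (normalised) applicant profile.
weights = {"income": 0.8, "late_payments_12m": -1.5, "existing_loans": -0.6, "years_employed": 0.4}
applicant = {"income": 0.45, "late_payments_12m": 2.0, "existing_loans": 1.0, "years_employed": 0.3}

score, reasons = explain_linear_score(weights, applicant)
print(f"Score: {score:.2f}")
for feature, contribution in reasons:
    print(f"  {feature}: {contribution:+.2f}")
```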
Limited-Risk AI Systems: The Transparency Layer
The limited-risk category, sometimes described as the layer of “specific transparency obligations,” applies to AI systems whose main risk stems from people not realising that they are interacting with an AI system or being exposed to AI-generated content. The obligations here are much lighter than for high-risk systems and focus almost exclusively on making that interaction or origin transparent. This is about preserving human autonomy and preventing deception.
Obligations for Deployers and Providers
The transparency obligations are split between providers and deployers, i.e., the persons or organisations using an AI system in a professional capacity. Providers must design their systems so that this transparency is possible in the first place, for example by making disclosure and content marking technically feasible; deployers must ensure that the people affected are actually informed. The most well-known provisions are:
- Interaction with Humans: When an AI system (like a chatbot) interacts with humans, the fact that the user is communicating with a machine must be clearly disclosed. This is to prevent deception, for example, by a company pretending its customer service is handled by a human when it is an AI.
- Deep Fakes and AI-Generated Content: AI-generated or manipulated image, audio, or video content (deep fakes) must be disclosed as such; where the content is evidently artistic, creative, or satirical, the obligation is limited to a disclosure that does not hamper the display or enjoyment of the work. This is crucial for combating disinformation and fraud.
- Emotion Recognition or Biometric Categorisation: When an AI system is used to recognise or categorise emotions, or to infer other personal characteristics, the individuals concerned must be informed about the operation of the system.
- AI-Generated Text: Text that is generated by an AI system and published with the purpose of informing the public on matters of public interest must be disclosed as artificially generated, unless it has undergone human review and a natural or legal person holds editorial responsibility for its publication. This covers news articles and similar reporting, but not emails or general business communication.
Practical Examples
Customer Service Chatbots: A bank uses an AI-powered chatbot on its website to answer customer queries. The chatbot must clearly introduce itself as a virtual assistant, not a human. Failure to do so would be a breach of the Act.
Entertainment and Media: A film studio uses AI to create a realistic-looking video of a historical figure giving a speech. This video must be clearly labelled as AI-generated when distributed. A satirical cartoon that uses AI to exaggerate features would likely fall under the lighter disclosure regime for artistic and satirical works.
Recruitment and HR: A company uses an AI chatbot to conduct initial screening conversations with job applicants. The company must make clear to candidates that they are interacting with an AI system rather than a human recruiter. (Analysing candidates’ emotional states in video interviews would, by contrast, likely fall foul of the Act’s outright ban on emotion inference in the workplace.)
Generative AI Platforms: Providers of foundation models or general-purpose AI systems (like those powering image or text generators) must ensure their outputs are marked in a machine-readable format, allowing downstream systems to detect AI-generated content. This is a technical obligation for the provider, which then enables the end-user (e.g., a news website) to comply with their disclosure duties.
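How a machine-readable marking is implemented is still being worked out in practice; watermarking, cryptographic provenance standards such as C2PA, and metadata tagging are all candidates. The sketch below shows only the simplest possible pattern, a JSON sidecar record attached to a generated file, with field names and the generator identifier invented for illustration.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(content_bytes: bytes, generator_name: str) -> dict:
    """Build a minimal machine-readable record asserting that content is AI-generated."""
    return {
        "ai_generated": True,
        "generator": generator_name,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content_bytes).hexdigest(),  # ties the record to this exact output
    }

generated_image = b"...binary image data..."   # placeholder for a real generated file
record = provenance_record(generated_image, generator_name="example-image-model-v1")

# A downstream publisher could check this sidecar file before deciding how to label the content.
with open("generated_image.provenance.json", "w", encoding="utf-8") as f:
    json.dump(record, f, indent=2)
```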
Minimal-Risk AI Systems: The Default Category
AI systems that do not fall into the other three categories are considered minimal-risk. This is the default category and includes the vast majority of AI applications currently used or under development in the EU. Examples include AI-enabled video games, spam filters, inventory management systems, or AI for predicting energy demand in a factory. The AI Act imposes no mandatory legal obligations on these systems. The goal is to avoid placing an unnecessary regulatory burden on low-risk applications.
The Voluntary Code of Conduct
While not mandatory, the AI Act encourages the development and promotion of a voluntary code of conduct for minimal-risk AI systems. This code would be developed by industry stakeholders, civil society, and other interested parties. By adhering to such a code, providers of minimal-risk AI could demonstrate that they are taking a responsible approach, for instance, by conducting fundamental rights impact assessments or applying design principles for trustworthy AI, even when not legally required to do so. This creates a pathway for ethical innovation and can serve as a competitive advantage.
Interaction with National Law and Sector-Specific Rules
The AI Act is a regulation, meaning it is directly applicable in all EU Member States without the need for national governments to transpose it into national law. However, it does not exist in a vacuum. It harmonises the core rules for AI systems but leaves room for national regulations in specific areas, provided they are compatible with the Act. For example, Member States can introduce stricter rules on the use of AI systems by law enforcement authorities, provided they respect fundamental rights and the rule of law. The Act also establishes or designates national authorities to supervise its implementation, leading to a network of regulators across Europe who will need to cooperate closely. The relationship with the GDPR is particularly important; the AI Act complements the GDPR, and a single AI system can be subject to both. For instance, a high-risk AI system processing personal data must comply with both the AI Act’s technical and governance requirements and the GDPR’s principles of lawfulness, fairness, and data minimisation.
