
EU AI Act Scope and Definitions: What It Covers

Understanding the precise scope and the nuanced definitions within the European Union’s Artificial Intelligence Act (AI Act) is the foundational step for any entity developing, deploying, or distributing AI systems within or into the EU market. The regulation, which entered into force on 1 August 2024, establishes a harmonized framework for placing AI systems on the market, putting them into service, and using them, aiming to foster trustworthy AI while safeguarding health, safety, fundamental rights, democracy, and environmental protection. For professionals in technology, engineering, law, and compliance, the practical application of the Act hinges on a rigorous interpretation of Article 2 (Scope), Article 3 (Definitions), and Article 5 (Prohibited AI Practices). This analysis dissects these elements, moving beyond the text to the operational reality of compliance.

Material Scope: The Geographic and Subjective Reach

The AI Act applies to providers placing AI systems on the EU market or putting them into service in the EU, regardless of whether those providers are established within the Union or in a third country. It also applies to providers and deployers of AI systems that are located in a third country, where the output produced by the system is used in the EU. This extraterritorial application mirrors the GDPR and ensures that the EU market is protected from high-risk AI developed elsewhere but utilized within its borders.

Crucially, the Act distinguishes between providers (those who develop an AI system with a view to placing it on the market or putting it into service under their own name or trademark) and deployers (those using an AI system under their authority, except where the AI system is used in the course of a personal non-professional activity). The burden of conformity assessment generally rests on the provider, while the deployer has obligations regarding human oversight and proper use, particularly for high-risk systems.

There are specific exclusions where the AI Act does not apply, which require careful delineation. It does not apply to AI systems developed or used exclusively for military, defense, or national security purposes, regardless of the type of entity carrying out those activities. This exclusion is significant for the defense industry but requires caution; if a dual-use technology (civilian and military) is developed, the civilian application likely falls under the Act. Furthermore, the Act does not apply to AI systems released under free and open-source licenses, unless they are placed on the market or put into service as high-risk AI systems, fall under the prohibited practices (Article 5), or are subject to the transparency obligations of Article 50. This open-source exemption is a key differentiator from the EU’s product safety regimes, though it does not absolve providers of liability if the system becomes high-risk.

Interplay with Existing Legislation

The AI Act operates as lex specialis for AI-specific requirements, sitting alongside the EU’s general product safety framework. It interacts closely with the sectoral harmonization legislation listed in its Annex I, such as the Machinery Regulation and the Medical Devices Regulation (MDR), and it directly amends a number of other acts, including the Aviation Safety Regulation (EU) 2018/1139. In these contexts, the AI Act supplies the specific requirements for the AI components, while the sectoral legislation covers the physical product or broader system. This integration is vital for compliance strategies; a medical device incorporating AI must satisfy both the MDR and the AI Act’s high-risk requirements simultaneously.

Defining the “AI System”: A Functional Approach

Perhaps the most debated aspect of the AI Act is the definition of an “AI system.” The final text moved away from the original proposal’s list of specific techniques and approaches toward a functional definition closely aligned with the OECD’s. Article 3(1) defines an AI system as:

“A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

To operationalize this, practitioners should break the definition down into four cumulative criteria (a minimal screening sketch follows the list):

  1. Machine basis: It is a “machine-based system,” implying computational processing on hardware and software.
  2. Autonomy and Adaptiveness: It operates with varying levels of autonomy (not necessarily fully autonomous) and may exhibit adaptiveness after deployment (i.e., it can learn from data or experience).
  3. Objective-based: It is designed to achieve explicit or implicit objectives.
  4. Inference and Influence: It infers from inputs how to generate outputs (predictions, content, recommendations, decisions) that influence environments.
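For teams that want a repeatable first screen, these criteria can be captured in a simple internal checklist. The sketch below is a hypothetical Python illustration, not legal advice; the class name, field names, and logic are assumptions introduced here rather than anything defined in the Act.

```python
from dataclasses import dataclass

@dataclass
class DefinitionTriage:
    """Hypothetical screening record for the Article 3(1) criteria."""
    machine_based: bool      # computational processing on hardware/software
    some_autonomy: bool      # operates with some level of autonomy (not necessarily full)
    has_objectives: bool     # designed for explicit or implicit objectives
    infers_outputs: bool     # infers outputs (predictions, content, recommendations, decisions) from inputs
    may_adapt: bool = False  # adaptiveness is optional ("may exhibit"), so it is not a gating criterion

    def likely_ai_system(self) -> bool:
        # All four gating criteria must hold; adaptiveness alone neither qualifies nor disqualifies.
        return all((self.machine_based, self.some_autonomy,
                    self.has_objectives, self.infers_outputs))

# Example: a manually coded if-then rules engine does not *infer* how to generate outputs.
rules_engine = DefinitionTriage(machine_based=True, some_autonomy=False,
                                has_objectives=True, infers_outputs=False)
print(rules_engine.likely_ai_system())  # False
```

A record like this is only a triage aid; borderline cases (for example, statistical models whose rules are partly learned) still need legal review.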

Exclusions: What Is Not AI?

The Act and its recitals carve out systems and activities that fall outside the Regulation’s reach, a mix of definitional boundaries and explicit scope exclusions. This is where engineering reality often meets legal interpretation. The following fall outside the Act:

1. Older Software and Rule-Based Systems

Systems that are based on rules defined solely by natural persons to automatically execute operations are excluded. This is intended to distinguish AI from traditional automation (e.g., a basic calculator, a word processor, or a simple database management system). However, the boundary blurs when “if-then” rules are complex and derived from machine learning. If a system uses ML to generate the rules, it is AI. If a human defines the logic manually, it is likely not.

2. Early-Stage AI and Research Exemptions

AI systems developed exclusively for scientific research and development purposes are excluded. This is a critical protection for academia and R&D labs. However, once a research prototype is “placed on the market” or “put into service” commercially, the exemption vanishes.

3. Military and National Security

As mentioned, AI used for military purposes is excluded. However, this creates a compliance trap for dual-use technologies. A provider selling a surveillance algorithm to a police force (a civilian use) must comply, while selling the same algorithm to the armed forces may not trigger the Act, provided the use is exclusively for military, defense, or national security purposes.

4. Specific Exclusions for Transparency

The Act explicitly excludes AI systems used solely for the purpose of improving cybersecurity (cybersecurity resilience) or for the sole purpose of inhibiting the influence of AI systems used to generate deepfakes. This is a narrow but important carve-out to avoid stifling defensive capabilities.

Classifying the Risk: The Hierarchy of Obligations

Once a system is confirmed to be an “AI system” under the Act, it must be categorized based on its risk profile. The Act creates a four-tier pyramid: Unacceptable Risk, High Risk, Limited Risk, and Minimal or No Risk. The obligations attach to the provider (and in some cases the deployer) based on this classification.

Unacceptable Risk: The Prohibited Practices (Article 5)

Article 5 lists AI practices that are considered a clear threat to people’s fundamental rights and are therefore prohibited outright, regardless of the system’s technological sophistication. Placing such systems on the market, putting them into service, or using them in the EU is illegal.

The prohibited practices include:

  • Subliminal techniques: AI systems that deploy subliminal, manipulative, or deceptive techniques beyond a person’s consciousness to materially distort behavior in a way that causes, or is reasonably likely to cause, significant harm.
  • Exploitation of vulnerabilities: AI systems that exploit vulnerabilities of a person or group due to their age, disability, or a specific social or economic situation, materially distorting behavior and causing significant harm.
  • Social scoring: AI systems that evaluate or classify individuals based on social behavior or personal characteristics, where the resulting score leads to detrimental or unfavorable treatment in unrelated contexts or treatment disproportionate to the behavior. Note: unlike earlier drafts, the final text is not limited to public authorities; private actors are covered too. Purpose-specific scoring (e.g., credit risk assessment) is not “social scoring” in this sense, but it is typically high-risk under Annex III and must still comply with data protection law.
  • Real-time remote biometric identification (RBI) in publicly accessible spaces: This is the most controversial provision. The use of real-time RBI by law enforcement in publicly accessible spaces is generally prohibited. The narrow exceptions are targeted searches for victims of abduction, trafficking, or missing persons; preventing specific, substantial, and imminent threats to life or terrorist attacks; and locating or identifying suspects of the serious crimes listed in Annex II of the Act (drawn from the European Arrest Warrant list). These uses require prior authorization by a judicial or independent administrative authority and are subject to strict time limits and geographic scope.
  • Untargeted facial image scraping: AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  • Biometric categorization: AI systems that categorize individuals on the basis of biometric data to infer sensitive attributes such as race, political opinions, religious beliefs, or sexual orientation (with a narrow carve-out for lawfully acquired datasets in law enforcement).
  • Individual predictive policing: AI systems that assess the risk of a natural person committing a criminal offense based solely on profiling or personality traits.
  • Emotion recognition: AI systems used in workplaces and educational institutions to infer emotions (except for medical or safety reasons, such as monitoring driver fatigue).

Practical Interpretation: For developers of social scoring systems, or of emotion recognition aimed at workplaces and schools, the message is clear: the EU market is closed to these applications in their current form. For law enforcement, the window for real-time RBI is narrow and heavily procedural.

High-Risk AI Systems (Article 6)

The core of the AI Act revolves around High-Risk AI systems. These are not banned, but they are subject to strict conformity assessments, data governance, documentation, and oversight. A system is high-risk if it meets two cumulative conditions:

  1. It is a safety component of a product (or is itself a product) covered by the EU harmonization legislation listed in Annex I (e.g., medical devices, machinery, lifts, toys, radio equipment).
  2. That product (or the AI system itself as a product) is required to undergo a third-party conformity assessment under that legislation.

In addition, an AI system is high-risk if it falls within one of the use cases listed in Annex III, which the Commission can amend over time (a first-pass classification sketch follows the list below). Annex III currently covers:

  • Critical infrastructure (e.g., water, energy, transport).
  • Educational and vocational training (e.g., grading exams).
  • Biometrics (e.g., remote biometric identification, biometric categorization, emotion recognition).
  • Employment and worker management (e.g., CV sorting, recruitment).
  • Access to essential private and public services (e.g., credit scoring, benefits eligibility).
  • Law enforcement (e.g., polygraphs, risk assessments).
  • Migration, asylum, and border control (e.g., asylum application verification).
  • Administration of justice and democratic processes (e.g., election influence).
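Combining the Article 5 screen, the Article 6(1) product route, and the Annex III route, a first-pass triage can be sketched as below. This is a simplified, hypothetical illustration (it ignores, for instance, the Article 6(3) derogation for Annex III systems that do not pose a significant risk); the enum values, area labels, and function are assumptions for illustration only.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk (Article 5)"
    HIGH = "high risk (Article 6)"
    LIMITED = "limited risk (transparency only)"
    MINIMAL = "minimal or no risk"

# Illustrative shorthand for the Annex III areas; the real annex is longer and more nuanced.
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

def classify(practice_prohibited: bool,
             annex_i_safety_component: bool,
             third_party_assessment_required: bool,
             annex_iii_area: str | None,
             transparency_trigger: bool) -> RiskTier:
    """Hypothetical first-pass classifier mirroring the tiering described above."""
    if practice_prohibited:                    # Article 5 screen always comes first
        return RiskTier.PROHIBITED
    if annex_i_safety_component and third_party_assessment_required:
        return RiskTier.HIGH                   # Article 6(1): both cumulative conditions
    if annex_iii_area in ANNEX_III_AREAS:
        return RiskTier.HIGH                   # Annex III use case
    if transparency_trigger:                   # e.g., chatbot or synthetic content
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify(False, True, True, None, False))  # RiskTier.HIGH
```

The ordering matters: a system in a prohibited category cannot be “rescued” by arguing it also fits a high-risk or limited-risk profile.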

Provider Obligations for High-Risk AI

If you are a provider of a high-risk AI system, you must do the following (a simple gap-analysis sketch follows the checklist):

  • Establish a risk management system throughout the lifecycle.
  • Conduct data governance to ensure training, validation, and testing data are relevant, sufficiently representative, and, to the best extent possible, free of errors and complete in view of the intended purpose.
  • Draw up technical documentation (similar to CE marking documentation).
  • Enable automatic logging of events (logs) to ensure traceability.
  • Ensure transparency and provision of information to deployers.
  • Design systems for human oversight.
  • Achieve an appropriate level of accuracy, robustness, and cybersecurity.
  • Implement a quality management system.
  • Undergo conformity assessment (self-assessment for some, third-party for others).
  • Register the system in an EU database.

Limited Risk: Transparency Obligations

AI systems posing limited risk (e.g., chatbots, deepfake generators) are subject primarily to transparency obligations under Article 50, which can apply on top of other obligations. People must be informed when they are interacting with an AI system, and deployers of emotion recognition or biometric categorization systems must inform those exposed to them. If an AI system generates or manipulates image, audio, or video content (deepfakes), the content must be disclosed as artificially generated or manipulated; for evidently artistic, creative, or satirical works, the disclosure may be made in a way that does not hamper the display or enjoyment of the work, rather than being waived entirely.

National Implementation and Regulatory Sandboxes

While the AI Act is a Regulation (meaning it applies directly in all Member States without needing transposition into national law), each Member State must designate national competent authorities, including at least one notifying authority and at least one market surveillance authority. This leads to variations in enforcement culture and resources across the EU.

For instance, Germany is building on its existing product safety infrastructure, with roles for the Federal Network Agency (BNetzA) and the Federal Institute for Drugs and Medical Devices (BfArM) depending on the sector. France relies heavily on the CNIL (data protection authority) for oversight, while Ireland leverages its Data Protection Commission alongside enterprise agencies.

Member States are also required to establish AI Regulatory Sandboxes (Article 57). These are controlled environments where providers can develop, train, and test innovative AI systems under regulatory supervision before placing them on the market. While the AI Act sets the framework, the specific operationalization of these sandboxes varies. Some countries (like Spain and Finland) have been pioneers in operational sandboxes, offering legal certainty and reduced fees. For startups and SMEs, accessing these sandboxes is a strategic pathway to compliance, allowing for real-world testing under the watchful eye of regulators.

Timeline and Phased Application

The AI Act applies in a staggered manner, which requires organizations to plan their compliance roadmap carefully. The timeline, counted from entry into force on 1 August 2024, is as follows (a small date-tracking sketch follows the list):

  • 6 months (February 2025): Prohibitions on Unacceptable Risk AI systems become applicable.
  • 12 months (August 2025): Obligations for General Purpose AI (GPAI) models apply, along with the governance framework and penalties; the supporting codes of practice were due three months earlier. GPAI models already placed on the market before this date have until August 2027 to come into compliance.
  • 24 months (August 2026): High-risk AI systems listed in Annex III (e.g., biometrics, critical infrastructure) become subject to the full regime.
  • 36 months (August 2027): High-risk AI systems that are safety components of products regulated by existing harmonization legislation (e.g., medical devices, machinery) become subject to the full regime.
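Teams tracking these milestones often encode them directly so that a compliance dashboard can flag what already applies. A minimal sketch, assuming the commonly cited operative dates (the Act’s deadlines fall on the 2nd of the relevant month, e.g. 2 February 2025 and 2 August 2026); the function and labels are illustrative only.

```python
from datetime import date

# Application milestones from the list above, keyed by their commonly cited operative dates.
MILESTONES = {
    date(2025, 2, 2): "Article 5 prohibitions apply",
    date(2025, 8, 2): "GPAI obligations, governance rules, and penalties apply",
    date(2026, 8, 2): "Annex III high-risk regime and most remaining provisions apply",
    date(2027, 8, 2): "Annex I product-embedded high-risk regime applies",
}

def applicable_on(today: date) -> list[str]:
    """Return the milestones that have already become applicable by a given date."""
    return [label for d, label in sorted(MILESTONES.items()) if d <= today]

print(applicable_on(date(2026, 1, 1)))
# -> ['Article 5 prohibitions apply', 'GPAI obligations, governance rules, and penalties apply']
```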

Strategic Note: Although the prohibitions are the first to apply, the complexity of high-risk compliance means that providers of medical or industrial AI must begin their conformity assessments immediately. The grandfathering clause (Article 111) provides that high-risk AI systems already placed on the market or put into service before the relevant application date (2 August 2026 for most high-risk systems) fall under the Act only if they subsequently undergo significant changes in their design. This provides some breathing room for legacy systems but forces a clear decision point for future updates.

General Purpose AI (GPAI) and Foundation Models

The final compromise introduced specific rules for General Purpose AI (GPAI) models. A GPAI model is an AI model trained on broad data at scale that displays significant generality and can competently perform a wide range of distinct tasks. All GPAI providers face baseline duties, including technical documentation, information for downstream providers, a copyright policy, and a summary of training content. If a GPAI model poses systemic risk, meaning it has high-impact capabilities (presumed when the cumulative training compute exceeds 10^25 floating-point operations) or is designated as such by the Commission, it faces additional obligations.

These obligations include:

  • Performing model evaluations and adversarial testing.
  • Assessing and mitigating systemic risks.
  • Reporting serious incidents to the European AI Office.
  • Ensuring cybersecurity protection.

For developers of large language models (LLMs) and foundation models, the distinction between a standard GPAI and a systemic risk GPAI is the critical compliance factor. The European AI Office will play a central role in monitoring and designating these models.

Conclusion on Scope and Definitions

Defining the scope of the AI Act is not merely a semantic exercise; it determines the legal regime applicable to a technology stack. The definition of an AI system is broad enough to capture most modern machine learning applications but excludes legacy automation. The risk-based approach places the heaviest burden on high-risk systems, requiring a maturity in engineering and governance that mirrors the automotive or medical device industries. For practitioners, the immediate focus must be on the Article 5 prohibitions and the Article 6 high-risk classification, as these carry the heaviest penalties (fines of up to €35 million or 7% of global annual turnover for prohibited practices).
