Glossary: Core Terms for EU AI, Robotics, and Biotech Compliance

Understanding the regulatory landscape for advanced technologies in Europe requires more than a passing familiarity with legislative texts; it demands a precise grasp of the terminology that underpins legal obligations, technical standards, and product lifecycles. Professionals working at the intersection of artificial intelligence, robotics, and biotechnology face a convergence of frameworks, each with its own lexicon. The European Union’s approach to regulating these sectors is characterized by a move from fragmented directives to harmonized regulations, yet the specific terms used within these laws carry distinct legal weights and practical implications. This analysis serves as a practical glossary, dissecting core terms from the EU AI Act, GDPR, the Product Safety and Market Surveillance Package (PSP), the Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR), and relevant robotics standards. By clarifying definitions, exposing common misunderstandings, and linking terms to compliance workflows, we aim to provide a robust reference for legal analysts, engineers, and compliance officers operating in Europe.

Foundational Concepts in the EU AI Act

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) establishes a horizontal legal framework for AI systems. Its architecture relies on specific definitions that determine the scope of application and the level of scrutiny applied to AI applications.

‘AI System’

The definition of an AI System is the cornerstone of the regulation. Closely aligned with the OECD's updated definition, the EU AI Act adopts a technical and functional characterization: a machine-based system designed to operate with varying levels of autonomy that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, recommendations, or decisions that can influence physical or virtual environments.

Why it matters: Many organizations mistakenly believe that if they are using standard software, they fall outside the scope. However, the inclusion of “inference” and “autonomy” captures a wide range of machine learning applications, including simple regression models used in marketing or logistics. If a system uses data to learn patterns and make decisions that affect an environment (even a digital one), it likely qualifies as an AI system under the Act. Qualification brings the system within the Act’s scope; the weight of the resulting obligations, such as risk management and data governance, then depends on its risk classification.

‘General Purpose AI’ (GPAI) and ‘GPAI with Systemic Risk’

While traditional AI systems are often built for a specific purpose, General Purpose AI (GPAI) models are designed to be integrated into a wide variety of downstream applications. The Act distinguishes between GPAI models and those presenting Systemic Risks.

Systemic Risk is defined by reference to the high-impact capabilities of a model, which could be effectively used to cause large-scale harm. The designation is initially presumed for models meeting a specified computational threshold (cumulative compute used for training exceeding 10^25 FLOPs), but the European Commission, supported by the AI Office, can also designate models as presenting systemic risk on the basis of other criteria, such as the number of end-users.
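
To make the presumption concrete, here is a minimal illustrative check rather than an official tool: the 10^25 FLOP figure comes from the Act, while the function and parameter names are hypothetical.

```python
# Hypothetical sketch: flag a GPAI model as presumptively "systemic risk"
# based on the Act's training-compute threshold. Names are illustrative only.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # cumulative compute used for training

def presumed_systemic_risk(training_flops: float, designated_by_commission: bool = False) -> bool:
    """Return True if the model is presumed (or designated) to present systemic risk."""
    # Presumption from the compute threshold, or an explicit designation
    # based on other criteria (e.g., reach / number of end-users).
    return training_flops >= SYSTEMIC_RISK_FLOP_THRESHOLD or designated_by_commission

print(presumed_systemic_risk(3.2e25))  # True: above the 10^25 FLOP threshold
print(presumed_systemic_risk(8e23))    # False: below threshold, no designation
```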

Practical Distinction: A developer of a foundational large language model (LLM) is a provider of a GPAI model. If that model meets the systemic risk threshold, they face additional obligations, including conducting model evaluations, assessing and mitigating systemic risks, and reporting serious incidents to the European AI Office. A company simply using that LLM via an API to power a chatbot on their website is likely a deployer of an AI system (and not the provider of the GPAI model), subject to different obligations focused on human oversight and transparency.

‘High-Risk AI System’

Not all AI systems are treated equally. A High-Risk AI System is defined in Article 6 and detailed in Annex III. It includes AI systems used as safety components of products (subject to EU harmonization legislation, such as machinery or medical devices) or AI systems falling into specific high-risk use cases (e.g., critical infrastructure, education, employment, essential private/public services, law enforcement, migration).

Common Misunderstanding: It is not enough for an AI system to be “high-risk” in a colloquial sense (e.g., a high-stakes financial trading algorithm). It must fall within the specific categories listed in Annex III and not qualify for the Article 6(3) derogation (e.g., because it only performs a narrow procedural task). Conversely, if an AI system is a safety component of a product regulated by other EU legislation (such as the MDR), it is automatically high-risk, regardless of whether its use case appears in Annex III.
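
As an illustration of that classification logic, the sketch below condenses it into a few flags; the Annex III list is abridged and the flag names are hypothetical, so it is no substitute for a legal assessment.

```python
# Simplified sketch of the Article 6 classification logic described above.
# The enumeration of Annex III areas is abridged and the flags are hypothetical.

ANNEX_III_AREAS = {
    "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration",
}

def is_high_risk(is_safety_component_of_regulated_product: bool,
                 annex_iii_area: str | None,
                 narrow_procedural_task_only: bool) -> bool:
    # Safety components of products covered by EU harmonisation legislation
    # (e.g., MDR, machinery) are automatically high-risk.
    if is_safety_component_of_regulated_product:
        return True
    # Annex III use cases are high-risk unless the Article 6(3) derogation
    # applies (e.g., the system only performs a narrow procedural task).
    if annex_iii_area in ANNEX_III_AREAS:
        return not narrow_procedural_task_only
    return False

print(is_high_risk(False, "employment", narrow_procedural_task_only=False))  # True
print(is_high_risk(True, None, narrow_procedural_task_only=False))           # True
```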

‘Prohibited AI Practices’

These are specific manipulative or surveillance techniques considered to pose an unacceptable risk to fundamental rights. Examples include subliminal techniques intended to materially distort behavior, biometric categorization systems that infer sensitive attributes (such as race or political opinions) from biometric data, and social scoring that leads to detrimental or unfavourable treatment of natural persons.

Compliance Note: Unlike high-risk systems, which can be deployed if they pass conformity assessment, prohibited practices are banned outright. Organizations must audit their AI portfolios to ensure no legacy systems or experimental features fall into these categories; the prohibitions apply from 2 February 2025.

Data Governance and Privacy: The GDPR Nexus

The AI Act explicitly references data governance requirements for high-risk AI systems, creating a direct link to the General Data Protection Regulation (GDPR). Compliance requires understanding how these regimes overlap.

‘Personal Data’ and ‘Special Categories of Data’

Under GDPR (Article 4), Personal Data is any information relating to an identified or identifiable natural person. Special Categories (Article 9) include data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, genetic data, biometric data for the purpose of uniquely identifying a natural person, health data, and data concerning a person’s sex life or sexual orientation.

Intersection with AI: The AI Act mandates that training, validation, and testing data sets be relevant, sufficiently representative and, to the best extent possible, free of errors and complete in view of the intended purpose. If an AI system is trained on personal data, the processing must be lawful under GDPR (e.g., valid consent or legitimate interest). If the data include special categories, processing is prohibited unless a specific exemption applies (e.g., explicit consent for medical research). Biases in training data often stem from under-representation of specific demographics; correcting these biases is a technical requirement under the AI Act and a fairness requirement under GDPR.
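
As a hedged sketch of what a representativeness check might look like in practice, the example below counts the share of each group in a training set; the attribute name and the 5% floor are hypothetical, and suitable thresholds depend on the intended purpose.

```python
# Illustrative representativeness check on training data as part of AI Act
# data governance. Column name and the 5% floor are hypothetical.

from collections import Counter

def under_represented_groups(records: list[dict], attribute: str, floor: float = 0.05) -> list[str]:
    """Return attribute values whose share of the data set falls below `floor`."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return [value for value, n in counts.items() if n / total < floor]

training_data = ([{"age_band": "18-30"}] * 60
                 + [{"age_band": "31-60"}] * 38
                 + [{"age_band": "60+"}] * 2)
print(under_represented_groups(training_data, "age_band"))  # ['60+']
```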

‘Profiling’

GDPR defines Profiling as any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyze or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location or movements.

Legal Consequence: If an AI system performs profiling, it triggers the right of the data subject not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her (Article 22). This necessitates human intervention and the ability to contest decisions, a requirement that aligns with the “human oversight” obligation in the AI Act for high-risk systems.
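
As a rough sketch of how a deployer might operationalize this, the example below routes solely automated decisions with legal or similarly significant effects to a human reviewer; the class and field names are hypothetical.

```python
# Hypothetical sketch of an Article 22-style gate: decisions based solely on
# automated processing that produce legal or similarly significant effects
# are routed to a human reviewer instead of being applied automatically.

from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str
    solely_automated: bool
    legal_or_significant_effect: bool

def requires_human_review(decision: Decision) -> bool:
    return decision.solely_automated and decision.legal_or_significant_effect

loan_refusal = Decision("subject-42", "refuse_credit",
                        solely_automated=True, legal_or_significant_effect=True)
print(requires_human_review(loan_refusal))  # True: human intervention needed
```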

‘Data Protection Impact Assessment’ (DPIA)

A DPIA is a process designed to describe the processing of personal data, assess its necessity and proportionality, and manage the risks to the rights and freedoms of natural persons resulting from the processing.

Operational Synergy: When deploying a high-risk AI system that processes personal data, the deployer must conduct a DPIA under GDPR. This assessment should be integrated with the Fundamental Rights Impact Assessment (FRIA) required by the AI Act for certain deployers of high-risk systems (notably bodies governed by public law, private entities providing public services, and deployers of certain other Annex III use cases). The FRIA focuses more broadly on potential impacts on discrimination, equality, and health, but the data-gathering and risk-analysis steps are largely complementary.

Product Safety and Market Surveillance Framework

The EU AI Act is often described as a “product safety” law for AI. It integrates with the existing “New Legislative Framework” (NLF) for product safety.

‘Harmonisation Legislation’ and ‘Safety Component’

Harmonisation Legislation refers to the suite of EU directives and regulations that set out technical specifications for products to ensure the free movement of goods within the Single Market (e.g., Machinery Directive, Low Voltage Directive, Medical Devices Regulation). An AI system that is a Safety Component of one of these products is automatically classified as high-risk.

Example: An AI algorithm controlling the braking system of a car (regulated under vehicle type approval) or an AI analyzing X-rays in a medical imaging device (regulated under MDR) is a safety component. The conformity assessment procedure for the main product must now include the AI system’s compliance with the AI Act.

‘CE Marking’ and ‘Conformity Assessment’

The CE Marking signifies that a product has been assessed by the manufacturer and deemed to meet EU safety, health, and environmental protection requirements. Conformity Assessment is the procedure demonstrating that the product satisfies the applicable requirements.

Shift in Responsibility: For high-risk AI systems, the manufacturer (or provider) must follow a conformity assessment procedure. Depending on the risk level, this can be done via self-assessment (internal control) or requires the involvement of a Notified Body. Notified Bodies are independent third-party organizations designated by EU Member States to assess the conformity of products before they are placed on the market. The AI Act expands the scope of Notified Bodies to cover high-risk AI systems.

‘Post-Market Surveillance’ (PMS)

PMS refers to the systems and procedures by which a manufacturer monitors the performance of a product after it has been placed on the market. Under the AI Act, providers of high-risk AI systems must establish a Post-Market Monitoring System to actively collect experience and feedback from the use of their AI systems.

Practical Implementation: This is not passive monitoring. It involves systematically logging performance metrics, user reports, and potential “serious incidents” (an incident leading to death or serious harm to health, serious and irreversible disruption of the management of critical infrastructure, or an infringement of fundamental rights obligations). Serious incidents must be reported to the relevant market surveillance authorities.
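
One possible shape for such a monitoring record is sketched below; the categories mirror the serious-incident definition summarized above, while the field names and escalation logic are hypothetical.

```python
# Illustrative post-market monitoring record. Field names are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime, timezone

SERIOUS_CATEGORIES = {
    "death_or_serious_harm_to_health",
    "irreversible_disruption_of_critical_infrastructure",
    "fundamental_rights_violation",
}

@dataclass
class IncidentReport:
    system_id: str
    description: str
    category: str
    logged_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def is_serious(self) -> bool:
        """Serious incidents must be escalated to the market surveillance authority."""
        return self.category in SERIOUS_CATEGORIES

report = IncidentReport("triage-model-v3",
                        "Missed sepsis alert led to ICU admission",
                        "death_or_serious_harm_to_health")
print(report.is_serious())  # True: trigger the reporting workflow
```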

Medical Devices and In Vitro Diagnostics (MDR/IVDR)

The medical sector is a high-intensity environment for AI. The transition from the Medical Device Directive (MDD) to the Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR) has significantly tightened requirements, particularly for software.

‘Software as a Medical Device’ (SaMD)

SaMD is software intended to be used for one or more medical purposes (diagnosis, prevention, monitoring, treatment, alleviation of disease) without being part of a hardware medical device.

Classification Complexity: Under the MDR, devices are classified by risk into Classes I, IIa, IIb, and III (the IVDR uses Classes A to D). SaMD using AI often falls into the higher classes (IIb or III) when it informs diagnostic or therapeutic decisions. For example, an AI that detects lung nodules in CT scans is likely Class IIb or III. This classification dictates the involvement of a Notified Body and the rigour of the clinical evaluation.
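
The sketch below gives a highly simplified flavour of that classification logic (in the spirit of MDR Rule 11); the flags and impact labels are hypothetical, and real classification requires the full rules and, usually, Notified Body involvement.

```python
# Highly simplified sketch of how software that informs diagnostic or
# therapeutic decisions tends to be classified under the MDR (cf. Rule 11).
# Illustrative only; not a substitute for applying the actual rules.

def samd_class(informs_diagnosis_or_therapy: bool, worst_case_impact: str) -> str:
    if not informs_diagnosis_or_therapy:
        return "I"      # e.g., software supporting workflow only
    if worst_case_impact == "death_or_irreversible_deterioration":
        return "III"
    if worst_case_impact == "serious_deterioration_or_surgery":
        return "IIb"
    return "IIa"

# AI that detects lung nodules in CT scans: missed findings can lead to
# serious deterioration, so it lands in IIb or above.
print(samd_class(True, "serious_deterioration_or_surgery"))  # 'IIb'
```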

‘Clinical Evaluation’ and ‘Clinical Investigation’

Clinical Evaluation is a methodical process for assessing the clinical data of a medical device to verify its safety, performance, and benefit-risk ratio. Clinical Investigation is a systematic investigation performed on human subjects to evaluate the device’s safety and performance.

AI-Specific Challenge: Traditional clinical trials are often static, whereas AI systems can “learn” and change behavior over time (model drift). Manufacturers of machine learning-based (“self-learning”) software are therefore expected to define a pre-determined change control plan (PCCP). This allows the manufacturer to update the AI within agreed parameters without a new conformity assessment for every minor update, provided the safety profile remains valid.
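
A minimal sketch of how an update might be checked against pre-determined performance bounds follows; the metric names and minimum values are hypothetical placeholders for whatever the plan actually specifies.

```python
# Illustrative check of a model update against performance bounds that were
# pre-determined in a change control plan. Updates outside the agreed
# envelope would require a new conformity assessment.

PCCP_BOUNDS = {"sensitivity": 0.92, "specificity": 0.90}  # agreed minimums

def update_within_pccp(new_metrics: dict[str, float]) -> bool:
    return all(new_metrics.get(metric, 0.0) >= minimum
               for metric, minimum in PCCP_BOUNDS.items())

print(update_within_pccp({"sensitivity": 0.94, "specificity": 0.91}))  # True
print(update_within_pccp({"sensitivity": 0.89, "specificity": 0.93}))  # False: re-assess
```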

‘Performance Study’ (IVDR)

Specifically for In Vitro Diagnostics, the IVDR uses the term Performance Study rather than clinical investigation. This refers to studies performed on human specimens to establish or verify the analytical and clinical performance.

Regulatory Distinction: For AI in pathology (digital pathology), the distinction between “research use only” and “IVD” is critical. An AI tool analyzing tissue samples for research purposes falls under the GDPR and research ethics frameworks. Once the manufacturer intends the tool to be used for diagnosing patients, it becomes an IVD, triggering the IVDR requirements for performance studies, quality management (ISO 13485), and PMS.

Robotics and Machinery Standards

While the AI Act regulates the “brain” of a robot, the machinery and robotics directives regulate the physical body and its interaction with the environment.

‘Machinery’ and ‘Robot’

Under the Machinery Regulation (EU) 2023/1230 (replacing the Machinery Directive), Machinery includes an assembly of linked parts or components, at least one of which moves, fitted with a drive system. A Robot is generally considered a type of machinery.

Collaborative Robotics: A key term is Collaborative Operation. This refers to a robot designed to function in direct cooperation with a human within a defined workspace. This contrasts with traditional industrial robots that operate in cages. Collaborative robots must undergo a specific risk assessment to ensure safety, often utilizing force-limiting sensors and safety-rated monitored stops.
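
The following sketch illustrates a power-and-force-limiting check in such a cell; the 140 N figure and the function names are hypothetical, since real limits come from the risk assessment.

```python
# Hypothetical sketch of a force-limiting check in a collaborative cell:
# if measured contact force exceeds the limit derived from the risk
# assessment, a protective stop is commanded. The 140 N figure is illustrative.

CONTACT_FORCE_LIMIT_N = 140.0  # example limit; real values come from the risk assessment

def monitor_contact_force(force_n: float, stop_robot) -> None:
    """Command a protective stop if the contact force limit is exceeded."""
    if force_n > CONTACT_FORCE_LIMIT_N:
        stop_robot()

monitor_contact_force(180.0, stop_robot=lambda: print("protective stop issued"))
```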

‘Essential Health and Safety Requirements’ (EHSRs)

The Machinery Regulation lists EHSRs that manufacturers must satisfy to place machinery on the market. These cover design and construction aspects, such as safety of control systems, protection against hazards, and specific hazards generated by the machine.

Integration with AI: If an AI system controls the machinery (e.g., a vision system guiding a robot arm), the AI must be verified to ensure it does not compromise the EHSRs. For example, an AI vision system used for safety (e.g., detecting a human in a danger zone) is a safety component and must meet high reliability standards (SIL/PL levels).
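
As a hedged illustration, the snippet below gates machine motion on whether a detected person falls inside a configured danger zone; the coordinates, zone geometry, and stop command are all hypothetical placeholders.

```python
# Illustrative safety gate for a vision system: if any detected person lies
# inside the configured danger zone, the machine is brought to a monitored
# stop. Detection quality itself must meet the required SIL/PL level.

def person_in_danger_zone(detections: list[tuple[float, float]],
                          zone: tuple[float, float, float, float]) -> bool:
    """detections: (x, y) positions in metres; zone: (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = zone
    return any(x_min <= x <= x_max and y_min <= y <= y_max for x, y in detections)

if person_in_danger_zone([(1.2, 0.8)], zone=(0.0, 0.0, 2.0, 2.0)):
    print("monitored stop")  # placeholder for the safety-rated stop command
```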

‘Cybersecurity’

Both the AI Act and the Machinery Regulation explicitly mention Cybersecurity. The AI Act requires high-risk AI systems to be resilient against attempts by unauthorized third parties to alter their use or performance (e.g., adversarial attacks, data poisoning).

Compliance Requirement: Manufacturers must implement security measures (encryption, access controls, adversarial testing) throughout the lifecycle. This is no longer just an IT concern but a core product safety requirement. Failure to secure an AI-enabled robot against hacking could lead to it being deemed non-compliant machinery.
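
A crude illustration of this kind of robustness probing appears below: it perturbs an input with bounded random noise and checks whether a toy model’s decision stays stable. Real adversarial testing uses targeted (e.g., gradient-based) attacks, and every name here is hypothetical.

```python
# Crude robustness probe: perturb an input with bounded random noise and
# check whether the model's decision stays stable across trials.

import random

def decision_is_stable(model, x: list[float], epsilon: float = 0.05, trials: int = 100) -> bool:
    baseline = model(x)
    for _ in range(trials):
        perturbed = [v + random.uniform(-epsilon, epsilon) for v in x]
        if model(perturbed) != baseline:
            return False
    return True

def toy_model(x: list[float]) -> int:
    """Toy threshold classifier standing in for a deployed model."""
    return int(sum(x) > 1.0)

print(decision_is_stable(toy_model, [0.35, 0.35, 0.35]))  # likely False: near the boundary
print(decision_is_stable(toy_model, [0.6, 0.6, 0.6]))     # True: far from the boundary
```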

European Standardization (CEN-CENELEC) and Harmonized Standards

Legislation sets the “what” (essential requirements); standards set the “how” (technical specifications). Using Harmonized Standards provides a presumption of conformity with the law.

‘Presumption of Conformity’

If a manufacturer applies a harmonized standard, their product is presumed to meet the essential requirements of the relevant legislation. This is a powerful legal tool that shifts the burden of proof.

Current Status: For the AI Act, standardization requests have been issued to CEN-CENELEC. As of early 2025, specific harmonized standards for AI are still under development. Manufacturers may currently rely on standards like ISO/IEC 23894 (Risk Management) or ISO/IEC 42001 (AI Management Systems) to demonstrate best practices, but they cannot yet claim a presumption of conformity under the AI Act until the EU publishes references in the Official Journal.

‘State of the Art’

The term State of the Art appears frequently in regulations. It refers to the developed stage of technical capability and commercial solutions based on scientific knowledge at a given time. It is not a static concept.

Dynamic Obligation: For AI systems, maintaining the “state of the art” implies a continuous obligation to update security patches, retrain models to prevent drift, and adapt to new adversarial threats. A system compliant in 2025 may be non-compliant in 2027 if it fails to evolve with the state of the art.

Liability and the “Provider” vs. “Deployer” Distinction

Understanding who is responsible is critical for insurance and legal recourse. The AI Act and Product Liability Directive (PLD) use specific terminology.

‘Provider’ (Manufacturer)

The Provider is the natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model with a view to placing it on the market or putting it into service under their own name or trademark.

Supply Chain Risk: The provider bears the primary compliance burden (conformity assessment, technical documentation, quality management). If an AI system is developed by a third-party vendor, the vendor is the provider. The company using the tool is the deployer. However, if a deployer substantially modifies an AI system (changing its intended purpose or performance), they may be reclassified as a provider, assuming all liabilities.

‘Deployer’ (User)

A Deployer is a natural or legal person, public authority, agency, or other body using an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity.
