
Regulating AI-Enabled Products: From Robots to Software

The European regulatory landscape for artificial intelligence-enabled products is undergoing its most significant transformation in a generation. For professionals developing, deploying, or procuring systems ranging from collaborative industrial robots to AI-driven diagnostic software, the legal environment is shifting from a fragmented patchwork of directives to a more unified, albeit complex, horizontal and vertical framework. Understanding this framework requires looking beyond the headline-grabbing AI Act to the intricate web of existing product safety legislation, the new horizontal rules on AI, and the specific national implementations that continue to shape market access. This analysis dissects how the European Union regulates these products, focusing on the interplay between product safety, conformity assessment, and the specific risks associated with artificial intelligence.

The Dual-Layer Regulatory Architecture

At its core, the regulation of AI-enabled products in the EU operates on a dual-layer architecture. The first layer is the established system of New Legislative Framework (NLF) product safety legislation. The second, and newer, layer is the Artificial Intelligence Act (AI Act). These are not mutually exclusive; they are designed to function in concert. A product must comply with the relevant product safety regulations (e.g., the Machinery Directive, Medical Device Regulation) and, if it incorporates an AI system, it must also comply with the AI Act. The AI Act does not replace product safety laws; it adds a specific set of obligations for the AI component itself.

This dual structure means that a manufacturer of an AI-enabled medical device, for instance, must satisfy two distinct sets of requirements. One concerns the device itself—its biocompatibility, electrical safety, and clinical performance—under the Medical Device Regulation (MDR). The other concerns the AI algorithm powering it—its data governance, transparency, and robustness—under the AI Act. The interaction between these two layers is where the practical complexity lies.

The Foundation: The New Legislative Framework (NLF)

Before diving into the AI Act, it is crucial to understand the NLF. This framework, comprising Regulation (EC) No 765/2008, Decision No 768/2008/EC, and various sector-specific regulations and directives, establishes the common principles for CE marking. For any product falling within its scope, the manufacturer must undertake a conformity assessment, draw up technical documentation, issue an EU declaration of conformity, and affix the CE marking. This framework is the bedrock upon which the AI Act is built. The AI Act leverages these existing mechanisms, often designating the same market surveillance authorities and using the same definitions for “placing on the market” or “making available on the market.”

For AI-enabled products, the relevant NLF legislation is usually one of the sector-specific directives or regulations descended from the “New Approach”. The choice of legal act depends on the product’s primary function and nature. A robotic arm used in a factory falls under the Machinery Regulation. A software application that analyzes medical images falls under the MDR or the In Vitro Diagnostic Medical Devices Regulation (IVDR). A vehicle with advanced driver-assistance systems falls under the vehicle type-approval framework. Compliance with these regulations is a prerequisite for market access, regardless of the AI capabilities.

The New Layer: The AI Act and its Scope

The AI Act introduces a horizontal regulation that applies to all sectors. It classifies AI systems based on their potential risk to health, safety, and fundamental rights. This risk-based approach dictates the level of scrutiny and the obligations for providers, deployers, and other actors in the value chain. The Act defines an AI system in Article 3(1) as:

“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

This definition is intentionally broad, aiming to capture a wide range of technologies, from simple machine learning models to complex generative systems. Critically, the AI Act applies to providers placing AI systems on the EU market or putting them into service, regardless of whether they are established in the EU or in a third country, and to deployers of such systems within the EU. It also reaches providers and deployers located outside the EU where the output produced by the AI system is used in the EU.

Classifying Risk: The Regulatory Tiers

The AI Act establishes four tiers of risk. The obligations for each tier are distinct and escalate with the potential for harm.

Unacceptable Risk: Prohibited Practices

At the top of the pyramid are AI systems considered a threat to people. These are prohibited. The list includes:

  • Subliminal or purposefully manipulative techniques that materially distort behavior in a way that causes, or is reasonably likely to cause, significant harm.
  • Exploitation of vulnerabilities due to age, disability, or a specific social or economic situation.
  • Social scoring that leads to detrimental or unfavorable treatment.
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow, strictly regulated exceptions).

While most AI-enabled products like robots or medical software will not fall into this category, developers must be vigilant that their systems do not incorporate features that could be interpreted as manipulative or discriminatory. For example, a robot designed for elder care must not use persuasive techniques that exploit the cognitive decline of its user.

High-Risk AI Systems: The Core Obligations

The majority of regulatory attention is focused on High-Risk AI Systems. These are defined in two ways: (1) AI systems that are products, or safety components of products, covered by the EU harmonisation legislation listed in Annex I (such as the Machinery Regulation or the MDR) and subject to third-party conformity assessment under that legislation, or (2) AI systems that fall into the specific high-risk use cases listed in Annex III of the Act (e.g., critical infrastructure management, employment selection, biometrics).

This is where the dual-layer architecture is most apparent. An AI system that is a safety component of a machine (e.g., a vision system that detects human presence to stop a robot) is automatically high-risk under the AI Act. The manufacturer of the machine must ensure that the AI component meets the AI Act’s requirements before they can even begin the conformity assessment for the machine itself.

The obligations for high-risk AI systems are extensive and technical:

Requirements for High-Risk AI

  • Risk Management System: A continuous iterative process running throughout the entire lifecycle of the AI system to identify, estimate, and mitigate risks.
  • Data and Data Governance: Training, validation, and testing data sets must be relevant, sufficiently representative and, to the best extent possible, free of errors and complete in view of the intended purpose. This is a significant challenge for many developers, especially in biotech where data is scarce and sensitive.
  • Technical Documentation: A detailed file demonstrating compliance with all requirements, which must be presented to authorities upon request. This goes far beyond the technical files required under the NLF.
  • Record-keeping: Automatic logging of events (logs) throughout the system’s operation to ensure traceability and post-market monitoring (a minimal logging sketch follows this list).
  • Transparency and Provision of Information to Deployers: The system must be designed to be sufficiently transparent to enable deployers to understand its capabilities and limitations. Instructions for use must be clear.
  • Human Oversight: The system must be designed to allow for effective human oversight, with measures to prevent or minimize the risk of incorrect outputs or misuse.
  • Accuracy, Robustness, and Cybersecurity: The system must achieve appropriate levels of accuracy, robustness, and cybersecurity, and be resilient against attempts by unauthorised third parties to alter its use, outputs, or performance.
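
The record-keeping requirement lends itself to a concrete illustration. The following is a minimal sketch, in Python, of how structured, append-only event logging might support traceability; the field names and the log_event helper are illustrative assumptions, not terminology taken from the AI Act or any harmonised standard.

    # Minimal sketch of structured event logging for traceability.
    # Field names are illustrative assumptions, not terms defined by the AI Act.
    import json
    import uuid
    from datetime import datetime, timezone

    LOG_PATH = "ai_system_events.jsonl"  # hypothetical append-only log file

    def log_event(event_type, model_version, input_ref, output_summary,
                  confidence, operator_id=None):
        """Append one traceability record per inference or operator intervention."""
        record = {
            "event_id": str(uuid.uuid4()),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event_type": event_type,        # e.g. "inference", "override", "alert"
            "model_version": model_version,  # ties the output to a specific trained model
            "input_ref": input_ref,          # reference to stored input data, not the data itself
            "output_summary": output_summary,
            "confidence": confidence,
            "operator_id": operator_id,      # present when a human intervened
        }
        with open(LOG_PATH, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")
        return record

    # Example: record a single inference made by the system.
    log_event("inference", model_version="2.4.1", input_ref="frame-000183",
              output_summary="person detected in zone B", confidence=0.97)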

Compliance with these requirements is not a one-time event. It requires a mature Quality Management System (QMS) and a culture of “compliance by design.”

Conformity Assessment for High-Risk AI

Before placing a high-risk AI system on the market, the provider must follow a conformity assessment procedure. The applicable route depends on how the system became high-risk (a simplified decision sketch follows this list):

  1. Safety components of regulated products: Where the AI system is a product, or a safety component of a product, covered by the sectoral legislation listed in Annex I (e.g., a robot under the Machinery Regulation or a medical device under the MDR), the AI Act requirements are assessed as part of the conformity assessment procedure required by that sectoral legislation, normally by the same Notified Body.
  2. Stand-alone Annex III systems: For the use cases listed in Annex III, the default route is internal control, under which the provider carries out the assessment itself and draws up the technical documentation. A Notified Body must be involved only for the biometric systems in point 1 of Annex III, and even then the provider may choose internal control instead if it has applied the relevant harmonised standards or common specifications in full.
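
The routing logic above can be summarised in a simplified decision helper. The sketch below encodes one reading of Article 43 of the AI Act for illustration only; it is not legal advice, and the function and parameter names are invented for the example.

    # Simplified sketch of the conformity assessment routing described above.
    # One reading of Article 43 of the AI Act, for illustration only; not legal advice.

    def assessment_route(annex_i_safety_component: bool,
                         annex_iii_biometrics: bool,
                         harmonised_standards_applied_in_full: bool) -> str:
        if annex_i_safety_component:
            # AI Act requirements are checked within the sectoral procedure
            # (e.g. Machinery Regulation, MDR), normally by the same Notified Body.
            return "sectoral conformity assessment under the relevant product legislation"
        if annex_iii_biometrics and not harmonised_standards_applied_in_full:
            return "third-party assessment by a Notified Body"
        # Remaining Annex III cases: internal control by the provider
        # (a provider of biometric systems may still opt for a Notified Body).
        return "internal control (self-assessment) with technical documentation"

    print(assessment_route(False, True, False))
    # -> third-party assessment by a Notified Body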

Once the conformity assessment is complete, the provider issues an EU declaration of conformity and affixes the CE marking to the product. For software, this may mean making it available via an app store or download portal with the necessary documentation.

Focus: AI in Robotics

Robotics is a prime example of the interplay between the NLF and the AI Act. Industrial robots have long been regulated under the Machinery Directive (now the Machinery Regulation (EU) 2023/1230). This regulation addresses physical risks such as crushing, shearing, or entanglement. It requires risk assessment, safety functions, and specific design principles.

Modern robotics, however, is increasingly defined by AI. A collaborative robot (cobot) uses AI to perceive its environment and adapt its movements to work safely alongside humans. An autonomous mobile robot (AMR) in a warehouse uses AI for navigation and obstacle avoidance. These AI capabilities are not just add-ons; they are integral to the robot’s function and safety.

The Machinery Regulation and AI

The new Machinery Regulation explicitly anticipates AI. It addresses machinery and safety components with fully or partially self-evolving behaviour (i.e., behaviour developed using machine learning approaches) and obliges the manufacturer to ensure that the machinery can be operated safely throughout its lifecycle, including when its AI components learn and adapt. The Regulation requires that safety functions be designed to be robust against disturbances and that the machinery be protected against corruption, whether accidental or intentional, of its software and data.

Crucially, if an AI system is a safety component of a machine (e.g., an AI-based vision system that stops a robot if a human enters a restricted zone), that AI system is considered high-risk under the AI Act. The manufacturer of the robot must therefore ensure that this vision system complies with the AI Act’s requirements for high-risk systems. This means the robot manufacturer must either develop the AI system in-house with full compliance or procure it from a supplier who can provide evidence of compliance.
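
To make the safety-component idea concrete, the sketch below shows, in deliberately simplified form, how an AI-based presence check might gate a robot’s motion. It illustrates the architecture only; the stub detector, robot interface, and threshold are invented for the example, and a real safety function would also have to satisfy the applicable functional-safety standards and the AI Act requirements, including event logging.

    # Illustrative sketch only: an AI-based presence check gating robot motion.
    # The detector, robot interface, and threshold are hypothetical stand-ins.

    THRESHOLD = 0.5  # assumed decision threshold, for illustration only

    class StubDetector:
        """Placeholder for an AI vision model estimating person presence."""
        version = "0.1-demo"
        def person_probability(self, frame) -> float:
            return 0.97  # pretend the model sees a person in the restricted zone

    class StubRobot:
        """Placeholder for the robot controller interface."""
        def protective_stop(self) -> None:
            print("protective stop issued")
        def resume(self) -> None:
            print("motion resumed")

    def safety_cycle(detector, robot, frame) -> None:
        """One control cycle: stop the robot if a person may be present."""
        score = detector.person_probability(frame)
        if score >= THRESHOLD:
            robot.protective_stop()  # fail toward the safe state
            # A real system would also log this event for traceability
            # (cf. the record-keeping sketch earlier in this article).
        else:
            robot.resume()

    safety_cycle(StubDetector(), StubRobot(), frame=None)  # prints: protective stop issued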

Practical Compliance for Robotics Manufacturers

For a robotics company, the path to compliance involves a dual-track documentation process. The technical file for the Machinery Regulation will contain information on mechanical safety, electrical safety, and risk assessment. The technical file for the AI Act will contain:

  • The architecture and algorithms of the AI system.
  • The datasets used for training, validation, and testing, with evidence of their quality and representativeness.
  • The specifications for the logging capabilities (what is logged, how it is stored, how it is accessed).
  • A detailed description of the human-machine interface and the measures for human oversight.
  • Results of robustness and accuracy testing, including tests against adversarial attacks.
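
There is no prescribed file format for this documentation, but a machine-readable index can help keep the machinery file and the AI Act file aligned. The sketch below uses a plain Python dictionary as such an index; every field name and path is an assumption chosen for illustration, not a structure mandated by the AI Act.

    # Hypothetical index of the AI Act technical file for a robot's vision system.
    # All field names and paths are illustrative assumptions, not a mandated structure.
    ai_act_technical_file = {
        "system": {"name": "example-vision-safety-module", "version": "2.4.1"},
        "architecture_and_algorithms": "docs/architecture.pdf",
        "datasets": {
            "training":   {"ref": "data/train_v7", "quality_report": "docs/data_quality_train.pdf"},
            "validation": {"ref": "data/val_v7",   "quality_report": "docs/data_quality_val.pdf"},
            "testing":    {"ref": "data/test_v7",  "quality_report": "docs/data_quality_test.pdf"},
        },
        "logging": {
            "events_recorded": ["inference", "protective_stop", "override"],
            "storage_and_access": "docs/log_access_procedure.pdf",
        },
        "human_oversight": "docs/hmi_and_oversight_measures.pdf",
        "test_results": {
            "accuracy": "reports/accuracy_v2.4.1.pdf",
            "robustness": "reports/robustness_v2.4.1.pdf",
            "adversarial": "reports/adversarial_v2.4.1.pdf",
        },
    }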

European countries have different historical strengths in robotics. Germany, with its strong automotive and industrial base, has deep expertise in industrial robots. Denmark, with its cluster of collaborative-robot manufacturers around Odense, is a leader in collaborative robotics. This national specialization influences the approach of market surveillance authorities. A German authority might focus heavily on the integration of the AI system into a complex production line, while a Danish authority might focus more on the safety of human-robot interaction in dynamic environments. Manufacturers must be prepared for this nuanced scrutiny.

Focus: AI in Medical Software (SaMD)

Software as a Medical Device (SaMD) is another area where the AI Act’s impact is profound. The EU’s Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR) already impose stringent requirements for safety and performance. Many SaMDs, particularly those used for diagnosis, prognosis, or therapy recommendation, fall into Class IIa, IIb, or III under the MDR’s classification rules and therefore require the involvement of a Notified Body.

The MDR/IVDR requires clinical evaluation, post-market surveillance, and a risk management system. However, they were not designed with the specific characteristics of AI in mind, such as continuous learning or data dependency. The AI Act fills this gap.

AI Act and Medical Device Conformity

An AI-based diagnostic tool is a high-risk AI system under the AI Act (because it is itself a medical device, or a safety component of one, covered by the MDR/IVDR and subject to Notified Body assessment). It is also a high-risk medical device under the MDR/IVDR. The provider must:

  1. Ensure the AI system meets the AI Act’s requirements (data governance, transparency, etc.).
  2. Ensure the medical device as a whole meets the MDR/IVDR requirements (clinical evidence, QMS, etc.).
  3. Undergo conformity assessment covering both: under the AI Act, the AI-specific requirements are examined as part of the MDR/IVDR procedure, normally by the same Notified Body.

There is a potential for synergy. The technical documentation for the AI Act can be structured to feed into the MDR technical file. For example, the AI Act’s requirement for a risk management system can be integrated with the MDR’s requirement for a risk management file. The AI Act’s requirement for data governance is particularly relevant to the MDR’s requirement for clinical evaluation, which must be based on “sufficient clinical evidence.”

Continuous Learning and “Predetermined Change Control Plans”

A key challenge for medical AI is continuous learning. An algorithm that improves over time based on new data may change its performance characteristics, potentially invalidating its original clinical evaluation. The AI Act addresses this by allowing providers to pre-determine, at the time of the initial conformity assessment and as part of the technical documentation, the changes that the system and its performance may undergo; changes that stay within those pre-determined boundaries are not treated as substantial modifications requiring a new conformity assessment. The idea is close to the “predetermined change control plan” familiar from medical device practice: the provider defines the scope of potential changes (e.g., retraining on new data, algorithm updates) and the process for managing them, including risk assessment and updates to documentation. This requires medical device companies to think about their product lifecycle in a new way, moving from a static “version” model to a dynamic “continuously updated” model.
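
One way to operationalise pre-determined changes is to write the permitted boundaries down in machine-checkable form and verify every candidate update against them before release. The sketch below is a minimal illustration of that idea; the change types, metrics, and thresholds are invented for the example and are not drawn from the AI Act or any guidance document.

    # Minimal sketch: checking a candidate model update against pre-determined
    # change boundaries before release. All names and thresholds are illustrative.

    PREDETERMINED_CHANGES = {
        "allowed_change_types": {"retraining_on_new_data", "threshold_recalibration"},
        "min_sensitivity": 0.92,          # performance must not fall below the validated floor
        "min_specificity": 0.88,
        "max_training_data_shift": 0.10,  # e.g. share of new data sources (illustrative metric)
    }

    def update_within_plan(change_type, sensitivity, specificity, training_data_shift):
        """Return True if the candidate update stays inside the pre-determined boundaries."""
        plan = PREDETERMINED_CHANGES
        return (change_type in plan["allowed_change_types"]
                and sensitivity >= plan["min_sensitivity"]
                and specificity >= plan["min_specificity"]
                and training_data_shift <= plan["max_training_data_shift"])

    # Example: a retraining run that keeps performance inside the validated envelope.
    print(update_within_plan("retraining_on_new_data", 0.94, 0.90, 0.05))  # True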

Furthermore, the AI Act’s transparency requirements mean that healthcare professionals using the AI tool must be informed of its capabilities and limitations in clear terms. They must understand the level of confidence they can place in the AI’s output. This has implications for user interface design and training materials.
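
As a small illustration of what this can mean for interface design, the sketch below attaches a confidence figure, an intended-use statement, and a limitation note to every result shown to the clinician; the fields and wording are invented for the example.

    # Illustrative sketch: returning an AI finding together with the contextual
    # information a clinician needs to weigh it. Fields and wording are assumptions.
    from dataclasses import dataclass

    @dataclass
    class DiagnosticResult:
        finding: str
        confidence: float     # calibrated probability shown to the user
        intended_use: str     # restates the validated scope of the tool
        limitation_note: str  # known limitations relevant to this output

    result = DiagnosticResult(
        finding="suspected lesion in upper-left quadrant",
        confidence=0.87,
        intended_use="decision support for radiologists; not for standalone diagnosis",
        limitation_note="validated on adult patients only",
    )
    print(f"{result.finding} (confidence {result.confidence:.0%}) - {result.intended_use}")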

Software, General-Purpose AI, and the Cloud

Not all AI-enabled products are physical devices. Many are pure software, distributed via the cloud or app stores. The regulation of these systems presents unique challenges regarding jurisdiction and enforcement.

For software that is a high-risk AI system (e.g., a recruitment tool that screens CVs), the provider is the entity that develops the software and places it on the market. If the provider is outside the EU, they must appoint an authorized representative within the EU. The software must be CE marked, and the provider must draw up the necessary technical documentation. The “product” is the software itself, and its “safety” relates to the harm its decisions can cause (e.g., discriminatory hiring, denial of opportunities).

General-Purpose AI (GPAI) Models

The AI Act introduces a specific regime for General-Purpose AI (GPAI) models, such as large language models. These are models that can be adapted to a wide range of tasks. The obligations for GPAI providers are distinct from those for high-risk AI systems:

  • All GPAI providers must meet baseline transparency obligations (e.g., drawing up technical documentation, providing information to downstream providers, publishing a sufficiently detailed summary of the content used for training, and putting in place a policy to comply with EU copyright law).
  • GPAI models that present a systemic risk are subject to more stringent obligations, including conducting model evaluations, assessing and mitigating systemic risks, and reporting serious incidents to the European AI Office.

This is a horizontal layer that applies regardless of the final application. A company using a GPAI model as a component in a high-risk medical device remains responsible for the high-risk system’s compliance, but the provider of the underlying GPAI model has its own, separate set of obligations. This creates a complex value chain of responsibility.

Market Surveillance and Enforcement

Enforcement of both the AI Act and product safety legislation is the responsibility of national authorities. Each Member State must designate one or more market surveillance authorities. For the AI Act, Member States will typically build on existing structures, such as data protection authorities (France’s CNIL, for example, has positioned itself for this role) or the market surveillance bodies that already enforce product safety legislation.

These authorities have significant powers. They can request documentation, conduct audits, and inspect systems. They can require a provider to take corrective actions, withdraw or recall a product, or impose temporary bans. Fines for non-compliance with the AI Act are severe, mirroring those of the GDPR: up to €35 million or 7% of total worldwide annual turnover, whichever is higher, for prohibited AI practices.

For AI-enabled products, enforcement will be a collaborative effort. A market surveillance authority for machinery might identify a safety issue with a robot, but if the issue stems from the AI system, they will need to collaborate with the authority responsible for the AI Act. This requires a high degree of coordination and technical expertise at the national level, which is still developing.

The Role of Notified Bodies and Harmonised Standards

Harmonised standards are crucial for practical compliance. These are technical standards developed by the European standardisation organisations (CEN and CENELEC) that, once cited in the Official Journal, provide a presumption of conformity with the corresponding legal requirements. For example, if a manufacturer of an AI-enabled medical device complies with a future harmonised standard covering the robustness of AI systems, it will be presumed to meet the corresponding requirements of the AI Act.

Currently, specific harmonised standards for the AI Act are still under development. In their absence, manufacturers must rely on general standards (e.g., for risk management, quality management) and state-of-the-art technical guidelines. This creates a period of uncertainty where best practices are evolving. Companies are well-advised to engage with standardisation activities and follow guidance from European and international bodies like CEN-CENELEC and ISO/IEC JTC 1/SC 42.

Notified Bodies, which are third-party organizations designated by Member States to assess conformity, are also ramping up their capacity and expertise in AI. For high-risk AI systems that require third-party assessment, choosing a Notified Body with proven expertise in both the specific product domain (e.g., medical devices) and AI is a critical strategic decision.
