AI-Enabled Products: The Compliance Stack Explained
Bringing an AI-enabled product to the European market requires orchestrating a compliance stack that extends well beyond the now-familiar text of the AI Act. Engineers, product managers, and legal counsel must align obligations from horizontal product safety legislation, vertical sectoral rules, cybersecurity requirements, and data protection law with the new, risk-based governance model introduced by Regulation (EU) 2024/1689 (the AI Act). The result is a layered regulatory architecture where the AI Act acts as a horizontal overlay, defining what constitutes an AI system and imposing obligations on providers, deployers, and other actors, while existing and updated product safety frameworks determine the conformity assessment pathways, market surveillance mechanisms, and the use of notified bodies. Understanding how these regimes interact in practice—particularly for high-risk AI systems embedded in products subject to the New Legislative Framework (NLF)—is essential for ensuring lawful placement on the market and maintaining long-term compliance.
At the core of this stack sits the AI Act’s system of risk classification and role-based obligations. The Act applies to providers placing AI systems on the EU market or putting them into service, as well as to deployers using such systems within the Union, with additional rules for importers and distributors. It covers a broad range of AI techniques, including machine learning approaches and logic- and knowledge-based methods, and it applies regardless of whether the AI is embedded in a physical product or delivered as a standalone service. The Act introduces prohibited practices, sets specific requirements for high-risk AI systems, and provides lighter transparency obligations for certain limited-risk uses. Crucially, it does not replace product safety legislation; rather, it integrates into it. When a high-risk AI system is a component of a product that falls under the NLF (for example medical devices, machinery, or lifts), the AI Act’s requirements are addressed through the existing conformity assessment and CE marking procedures, often with the involvement of a notified body.
The AI Act’s risk architecture and its practical implications
The AI Act organizes risk into four tiers: unacceptable risk (prohibited practices), high risk (subject to extensive obligations), limited risk (subject to transparency obligations), and minimal or no risk (no new obligations). The high-risk category is pivotal for product compliance because it triggers obligations that intersect directly with product safety law. A system is high-risk if it is intended to be used as a safety component of a product covered by the Union harmonization legislation listed in Annex I (such as the Machinery Regulation or the Medical Devices Regulation), or is itself such a product, and that product is required to undergo third-party conformity assessment under that legislation. Additionally, the AI Act lists specific use cases as high-risk in Annex III, such as critical infrastructure management, employment selection, educational scoring, and biometric categorization. For AI systems embedded in products, the practical effect is that the provider must meet the AI Act’s requirements—risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity—while following the conformity assessment procedures defined by the relevant product legislation.
These obligations are not abstract. A risk management system must be applied across the AI system’s entire lifecycle, with regular review and risk control measures that are appropriate to the nature of the risks. Data governance must ensure that training, validation, and testing datasets are relevant, sufficiently representative, and, to the best extent possible, free of errors and complete in view of the intended purpose, with appropriate data handling practices to address biases and privacy risks. Technical documentation must be prepared before the system is placed on the market or put into service, covering the system’s design, development, and intended use, and demonstrating compliance with the requirements. Providers must establish post-market surveillance systems to monitor the performance of the AI system once it is on the market, and they must implement a quality management system that addresses the AI Act’s requirements, often in conjunction with ISO 9001 or sector-specific QMS standards. The AI Act does not mandate a dedicated “AI compliance officer,” but the quality management system must include an accountability framework setting out the responsibilities of management and other staff, and providers established outside the Union must appoint an authorised representative in the EU; in practice, many organizations designate an AI compliance lead with the requisite expertise and authority.
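In practice, engineering teams often translate parts of the data-governance requirement into automated checks in the data pipeline. The following minimal Python sketch is illustrative only: the pandas dependency, column names, and thresholds are assumptions, and passing such checks does not by itself demonstrate conformity; it merely produces evidence that can be referenced in the technical documentation.

```python
# Illustrative dataset checks that can feed data-governance evidence.
# Column names and thresholds are hypothetical placeholders.
import pandas as pd

def dataset_quality_report(df: pd.DataFrame, label_col: str,
                           max_missing_ratio: float = 0.01,
                           min_class_share: float = 0.10) -> dict:
    """Return simple completeness and representativeness indicators."""
    report = {
        # Completeness: share of missing values per column.
        "missing_ratio": df.isna().mean().to_dict(),
        # Representativeness proxy: distribution of the target label.
        "label_distribution": df[label_col].value_counts(normalize=True).to_dict(),
        # Duplicate records can distort both training and evaluation.
        "duplicate_rows": int(df.duplicated().sum()),
    }
    report["flags"] = [
        f"column '{c}' exceeds missing-value threshold"
        for c, r in report["missing_ratio"].items() if r > max_missing_ratio
    ] + [
        f"label '{k}' under-represented ({v:.1%})"
        for k, v in report["label_distribution"].items() if v < min_class_share
    ]
    return report
```

A report of this kind, generated for each dataset version, can be archived with the technical documentation and referenced in the data-governance section of the quality management system.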
For high-risk AI systems that are safety components of products under the NLF, the provider must follow the conformity assessment procedure set out in the relevant product legislation. This typically involves either a self-assessment (for lower-risk products) or third-party assessment by a notified body. The AI Act clarifies that the conformity assessment under the NLF can be used to demonstrate compliance with the AI Act’s requirements for the AI component. In practice, this means the technical documentation for the product will include documentation for the AI component, and the notified body will assess whether the AI system meets the AI Act’s requirements as part of the overall product conformity assessment. The provider must then draw up an EU declaration of conformity and affix the CE marking to the product, indicating that it complies with all applicable EU legislation. It is important to note that the AI Act does not introduce a separate “AI CE marking”; rather, the CE marking under the NLF signifies compliance with all applicable legislation, including the AI Act for high-risk AI components.
Integration with the New Legislative Framework and notified bodies
The NLF provides a harmonized approach to product safety across a wide range of product categories. It sets out general obligations for economic operators, the rules on conformity assessment, the duties of notified bodies, market surveillance, and the safeguard mechanism. The AI Act builds on this framework by specifying how the AI requirements are to be addressed within the NLF procedures. For example, when a high-risk AI system is a safety component of machinery, the Machinery Regulation applies. The provider must carry out a risk assessment of the machinery, and if the machinery incorporates a high-risk AI component, the AI Act’s requirements for that component must be addressed in the technical documentation and assessed by the notified body where required. Similarly, for medical devices that incorporate AI, the Medical Devices Regulation (MDR) and In Vitro Diagnostic Medical Devices Regulation (IVDR) require that the device’s software, including AI, be subject to conformity assessment, often by a notified body, with the AI Act’s requirements integrated into that process.
Notified bodies play a critical role in this integrated approach. They assess whether products meet the essential requirements of the relevant NLF legislation, and they will also verify that the AI component complies with the AI Act’s requirements for high-risk systems. This includes reviewing the provider’s risk management system, data governance practices, technical documentation, and post-market surveillance plans. Notified bodies are designated by national notifying authorities and operate under strict oversight, including regular audits and coordination at the EU level through sectoral groups of notified bodies and the AI Act’s governance structures. The involvement of a notified body can significantly affect time-to-market, as the assessment process may involve iterative reviews, testing, and documentation requests. Providers should engage early with potential notified bodies to understand their expectations and the scope of assessment, particularly for novel AI techniques where standards may still be evolving.
One practical challenge is the alignment of timelines and updates. The AI Act’s obligations apply in stages: most provisions, including those for the high-risk use cases listed in Annex III, apply from 2 August 2026, while high-risk AI systems covered by the Annex I product legislation (the NLF regimes discussed here) have until 2 August 2027. High-risk AI systems already placed on the market or put into service before 2 August 2026 are brought within scope only if their designs are significantly changed after that date. Meanwhile, product legislation may be amended to reflect the AI Act’s integration, and standards development organizations are working to produce harmonized standards that can be used to presume conformity with the AI Act’s requirements. Providers should monitor the publication of harmonized standards in the Official Journal of the EU, as these standards will provide concrete technical specifications for compliance.
Data protection and privacy: The GDPR overlay
Many AI systems process personal data, and the GDPR applies to such processing regardless of the AI Act’s classification. The AI Act and GDPR are complementary: the AI Act sets requirements for the design and operation of the AI system, while the GDPR governs the lawfulness, fairness, transparency, and security of personal data processing. For high-risk AI systems used in contexts involving personal data—such as recruitment, credit scoring, or biometric identification—providers and deployers must ensure that data processing complies with GDPR principles, including data minimization, purpose limitation, and accuracy. Data protection impact assessments (DPIAs) may be required, and the appointment of a data protection officer may be necessary. The AI Act’s data governance requirements align with GDPR expectations, particularly regarding the quality of training data and the mitigation of bias. In practice, compliance teams should coordinate AI risk assessments with DPIAs, ensuring that technical measures (e.g., anonymization, pseudonymization, differential privacy) are documented and validated.
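As an illustration of coordinating technical measures across the two regimes, the sketch below shows keyed pseudonymisation of a direct identifier before records enter a training pipeline. The field names and key handling are assumptions; whether pseudonymisation is an appropriate measure for a given processing operation is a question for the DPIA, not for this snippet.

```python
# Minimal sketch of key-based pseudonymisation applied before data enters an
# AI training pipeline. Field names and key management are illustrative only.
import hmac
import hashlib

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash; re-identification requires
    the key and a separately stored lookup, not the hash alone."""
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"customer_id": "C-10294", "age_band": "30-39", "outcome": "approved"}
key = b"store-me-in-a-key-management-system"  # never hard-code keys in production
record["customer_id"] = pseudonymise(record["customer_id"], key)
```

The choice of measure, its validation, and its documentation should appear both in the DPIA and in the AI Act data-governance documentation so that the two records stay consistent.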
Deployers of high-risk AI systems that involve personal data must also ensure that the system is used in accordance with the provider’s instructions and that any human oversight is informed by the system’s limitations. The AI Act requires deployers to use the system as intended and to monitor it for risks, which dovetails with GDPR’s accountability obligations. If a deployer uses an AI system to make decisions that have legal or similarly significant effects, they must also consider the rights of data subjects under GDPR, including the right to obtain an explanation of the decision and the right to contest it. The interplay between these regimes means that transparency to individuals is not only an AI Act requirement but also a GDPR imperative, and the technical documentation should include information that can be used to fulfill both sets of obligations.
Cybersecurity obligations across the stack
Cybersecurity is a cross-cutting requirement embedded in multiple instruments. The AI Act explicitly requires high-risk AI systems to be resilient against attempts by third parties to alter their use, behavior, or performance, including through data poisoning and adversarial attacks. Providers must implement state-of-the-art cybersecurity measures proportionate to the risks, and they must address vulnerabilities throughout the lifecycle. This requirement is reinforced by the Cyber Resilience Act (CRA), which imposes cybersecurity obligations on products with digital elements, including the requirement to design for security, to provide security updates, and to report vulnerabilities and incidents. For AI-enabled products, the CRA and AI Act requirements should be addressed together, with security-by-design principles applied to the AI components and the overall product.
Where AI systems are used in critical infrastructure or other sensitive environments, additional sectoral cybersecurity rules may apply, such as the NIS2 Directive for network and information systems security. NIS2 sets risk management measures and reporting obligations for essential and important entities, and it may require that AI systems used in these contexts meet enhanced security standards. The AI Act’s transparency and documentation requirements can help demonstrate compliance with NIS2, particularly regarding the identification of risks and the implementation of mitigation measures. In practice, providers should integrate cybersecurity testing into their development processes, including adversarial testing for AI models, and document the results in the technical file. The use of harmonized standards for cybersecurity, such as those referenced under the CRA or the EU cybersecurity certification framework under the Cybersecurity Act, can provide a clear path to compliance.
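By way of illustration, the sketch below runs a very simple fast-gradient-sign (FGSM) probe against a stand-in linear classifier and records how accuracy degrades as the perturbation budget grows. The model, data, and epsilon values are entirely hypothetical; real high-risk systems call for model-appropriate attack suites and documented test protocols, with the results filed in the technical documentation.

```python
# Illustrative adversarial robustness probe (FGSM-style) against a stand-in
# linear classifier. Everything here is hypothetical example data.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1                 # stand-in for a trained linear model
X = rng.normal(size=(200, 8))
y = (X @ w + b > 0).astype(int)                # labels the clean model predicts correctly

def predict(X):
    return (X @ w + b > 0).astype(int)

def fgsm(X, y, epsilon):
    """Classic fast-gradient-sign step: move each input in the direction of the
    sign of the loss gradient, i.e. against its true class."""
    grad = -np.outer(2 * y - 1, w)             # gradient of loss = -(2y-1) * (x.w + b)
    return X + epsilon * np.sign(grad)

for eps in (0.0, 0.1, 0.3):
    acc = (predict(fgsm(X, y, eps)) == y).mean()
    print(f"epsilon={eps:.2f}  accuracy under attack={acc:.2f}")  # record in the technical file
```

The useful output is not the code itself but the documented degradation curve and the rationale for the chosen perturbation budgets, which can support both the AI Act robustness requirement and CRA security evidence.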
Sectoral regimes: Medical devices, machinery, and beyond
Several sectoral regimes are particularly relevant for AI-enabled products, and they often require third-party conformity assessment. The MDR and IVDR regulate medical devices and in vitro diagnostic devices, respectively. AI used for diagnosis, prognosis, or treatment support typically falls within these regulations, and most such software is classified as Class IIa or higher under the MDR (or Class B, C, or D under the IVDR), which requires notified body involvement and therefore also triggers the AI Act’s high-risk classification. The conformity assessment procedures involve notified bodies, and the technical documentation must demonstrate safety, performance, and clinical evidence. The AI Act’s requirements for high-risk AI systems are integrated into this process, meaning that providers must address data governance, risk management, and human oversight in a manner consistent with MDR/IVDR expectations. For example, clinical evaluations must consider the AI’s learning capabilities and the potential for drift, and post-market surveillance must include monitoring for performance changes over time.
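One concrete way to operationalise drift monitoring in post-market surveillance is a periodic comparison of the score distribution observed in the field with the distribution recorded at validation time, for example using the Population Stability Index. The sketch below is a simplified illustration; the distributions, bin count, and thresholds are assumptions, and the metric choice should be justified in the post-market surveillance plan.

```python
# Minimal post-market drift check: compare validation-time scores with
# production scores using the Population Stability Index (PSI).
import numpy as np

def population_stability_index(reference, production, bins=10):
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    prod_pct = np.histogram(production, bins=edges)[0] / len(production)
    ref_pct = np.clip(ref_pct, 1e-6, None)      # avoid division by zero / log(0)
    prod_pct = np.clip(prod_pct, 1e-6, None)
    return float(np.sum((prod_pct - ref_pct) * np.log(prod_pct / ref_pct)))

rng = np.random.default_rng(1)
reference = rng.normal(0.0, 1.0, 5000)          # scores recorded at validation time
production = rng.normal(0.4, 1.2, 5000)         # scores observed in the field
psi = population_stability_index(reference, production)
status = "significant shift" if psi > 0.2 else "moderate shift" if psi > 0.1 else "stable"
print(f"PSI={psi:.3f} ({status}) - record in the post-market surveillance log")
```

Whatever metric is chosen, the surveillance plan should state the review cadence, the escalation threshold, and who decides whether a detected shift constitutes a reportable performance change.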
The Machinery Regulation (EU) 2023/1230 applies to machinery that incorporates AI as a safety component. The regulation requires a risk assessment of the machinery and, for certain high-risk machinery, involvement of a notified body. If the machinery’s safety functions are implemented using AI, the provider must demonstrate that the AI system meets the AI Act’s requirements for robustness, accuracy, and human oversight, and that the machinery as a whole remains safe. This may involve designing fallback strategies, ensuring manual override capabilities, and validating the AI’s behavior under foreseeable fault conditions. The Machinery Regulation also addresses software updates, which are common for AI systems; providers must determine whether an update constitutes a significant change requiring re-assessment.
Other sectoral regimes may apply depending on the product’s function. For example, AI-enabled radio equipment falls under the Radio Equipment Directive, which requires compliance with essential requirements for safety and electromagnetic compatibility, and may involve cybersecurity aspects. AI used in vehicles or vehicle systems may be subject to type-approval legislation and sector-specific safety rules. AI-enabled toys must comply with the Toy Safety Directive, which includes requirements for chemical, physical, and mechanical safety, and may involve assessment of software risks if the toy incorporates AI features. In each case, the AI Act’s high-risk obligations must be integrated into the sectoral conformity assessment process, and the technical documentation must reflect both sets of requirements.
Biometric and identity systems: Special considerations
AI systems that perform biometric identification, categorization, or emotion recognition are subject to specific prohibitions and restrictions under the AI Act. The use of real-time remote biometric identification in publicly accessible spaces for law enforcement purposes is generally prohibited, except in narrowly defined situations subject to prior authorization by a judicial or independent administrative authority and strict safeguards. Other biometric systems, such as emotion recognition or categorization based on sensitive attributes, are either prohibited or classified as high-risk depending on the context. When such systems are embedded in products, the sectoral legislation may also impose additional constraints. For example, products marketed to consumers that include emotion recognition may be subject to consumer protection rules and product safety requirements that address psychological harm.
Deployers of biometric systems must pay particular attention to data protection law, as biometric data is special-category data under GDPR. Processing such data requires a lawful basis and often explicit consent or a specific legal authorization. The AI Act’s transparency and human oversight requirements complement GDPR’s safeguards, and providers should design systems with privacy-by-design principles, including the ability to limit data retention, ensure secure storage, and provide clear information to individuals. In practice, compliance for biometric systems is particularly complex, and early engagement with regulators and notified bodies is advisable.
General-purpose AI models and downstream product integration
The AI Act introduces specific rules for general-purpose AI (GPAI) models, including those with systemic risk. Providers of GPAI models must meet obligations related to training data transparency, documentation, and risk evaluation, and they must cooperate with the European AI Office. Downstream providers who integrate GPAI models into high-risk AI systems or products must ensure that the integrated system meets the AI Act’s requirements, even if the GPAI model itself is compliant. This includes verifying that the model’s capabilities and limitations are appropriate for the intended use, that the integration does not introduce new risks, and that the overall system is robust and accurate. Documentation from the GPAI provider should be leveraged, but downstream providers cannot rely solely on that documentation; they must perform their own conformity assessment for the integrated system.
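One practical pattern is to check, before integration work begins, whether the documentation received from the GPAI provider covers what the downstream conformity assessment will need, and to plan in-house evidence generation for any gaps. The sketch below illustrates such a check; the required fields are an illustrative selection, not a list drawn from the Act.

```python
# Hypothetical pre-integration check of GPAI provider documentation.
# The required fields are an illustrative selection for this sketch.
REQUIRED_FIELDS = {
    "intended_tasks", "known_limitations", "evaluation_results",
    "training_data_summary", "usage_restrictions",
}

def integration_readiness(model_card: dict) -> list:
    """Return documentation gaps to close (or cover with in-house evidence)
    before assessing the integrated high-risk system."""
    provided = {k for k, v in model_card.items() if v}
    return sorted(REQUIRED_FIELDS - provided)

received = {"intended_tasks": "text summarisation", "known_limitations": "",
            "evaluation_results": "benchmark report v2",
            "training_data_summary": "public summary"}
print(integration_readiness(received))  # -> ['known_limitations', 'usage_restrictions']
```

The same gap list can drive the contractual discussion with the model provider about documentation, support, and update notice obligations.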
For products that incorporate third-party AI components, contractual arrangements should address compliance responsibilities. The provider of the final product must ensure that the AI component meets the AI Act’s requirements and must have access to necessary documentation and support from the component provider. Importers and distributors must verify that the product bears the appropriate conformity markings and that the required documentation is available. If a distributor or importer places the product on the market under its own name or trademark, it is treated as the provider and assumes full compliance responsibilities.
Market surveillance, enforcement, and the safeguard clause
Market surveillance authorities monitor compliance with both product safety legislation and the AI Act. They have powers to request documentation, conduct audits, and require corrective actions. Under the AI Act, authorities can restrict or withdraw AI systems from the market if they present a risk, and they can impose penalties for non-compliance. The NLF provides similar powers for products, and the two regimes are coordinated to ensure consistent enforcement. The safeguard clause in the NLF allows a Member State to take provisional measures if it considers that a product bearing the CE marking presents a risk, and the AI Act’s governance structures support coordinated action across Member States.
Incident reporting is another area of convergence. The AI Act requires reporting of serious incidents involving high-risk AI systems, while the CRA introduces vulnerability and incident reporting for products with digital elements. Sectoral regulations may have their own reporting obligations (for example, adverse event reporting under MDR). Providers should establish internal processes to capture, classify, and report incidents in accordance with all applicable regimes, ensuring that reports are timely, complete, and consistent. This requires clear internal definitions, escalation paths, and coordination between product safety, cybersecurity, and AI compliance teams.
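A simple internal representation can help ensure that one incident is evaluated against every potentially applicable regime rather than only the one first noticed. The sketch below is a hypothetical routing helper: the regime names correspond to real instruments, but the classification fields and logic are simplified assumptions, and applicable deadlines and report contents must be taken from the relevant legal texts.

```python
# Hypothetical incident-routing helper: one internal record, multiple regimes.
from dataclasses import dataclass, field

@dataclass
class Incident:
    description: str
    involves_high_risk_ai: bool = False         # AI Act serious-incident scope
    is_exploited_vulnerability: bool = False    # CRA vulnerability/incident reporting
    involves_medical_device_harm: bool = False  # MDR/IVDR vigilance
    regimes: list = field(default_factory=list)

def route(incident: Incident) -> Incident:
    """Append every reporting regime the incident may trigger; deadlines and
    report formats must be checked against the applicable legal text."""
    if incident.involves_high_risk_ai:
        incident.regimes.append("AI Act serious incident report to market surveillance authority")
    if incident.is_exploited_vulnerability:
        incident.regimes.append("CRA notification (exploited vulnerability / severe incident)")
    if incident.involves_medical_device_harm:
        incident.regimes.append("MDR/IVDR vigilance report")
    return incident

case = route(Incident("Model update caused unsafe dose suggestions",
                      involves_high_risk_ai=True, involves_medical_device_harm=True))
print(case.regimes)
```

Whether encoded in software or in a procedure document, the point is the same: classification criteria, owners, and escalation paths must be defined before an incident occurs, not during one.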
Standards and the presumption of conformity
Harmonized standards play a crucial role in demonstrating compliance. When the European Commission references a harmonized standard in the Official Journal of the EU, providers who apply that standard benefit from a presumption of conformity with the corresponding legal requirements. For the AI Act, harmonized standards are being developed to address risk management, data governance, technical documentation, testing, and cybersecurity, among other topics. For product legislation, harmonized standards already exist for many product categories, and they are being updated to reflect digital and AI considerations. Providers should monitor the publication of these standards and consider early adoption to streamline their conformity assessments. However, it is important to recognize that standards are voluntary; providers can use other technical solutions to demonstrate compliance, but they must provide detailed justification.
In the absence of harmonized standards, the AI Act allows the Commission to adopt common specifications where standards are not available or are insufficient, and compliance with those specifications gives a presumption of conformity for the requirements they cover. Providers should track Commission communications and consult with notified bodies to understand acceptable technical approaches. For novel AI techniques, it may be necessary to engage in pre-submission discussions with regulators or notified bodies to align on evidence generation and testing methodologies.
Practical compliance steps for AI-enabled products
Establishing a robust compliance process begins with a clear understanding of the product’s intended purpose and its regulatory classification. The provider should map the product’s functions to the AI Act’s risk categories and identify whether it is a high-risk AI system, either as a standalone system or as a safety component of a product under the NLF. This mapping determines the applicable obligations and the conformity assessment pathway. If the product falls under sectoral legislation, the provider must identify the relevant essential requirements and the role of the notified body. It is advisable to create a regulatory matrix that lists each applicable instrument, the obligations it imposes, and the evidence required to demonstrate compliance.
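Such a matrix can be maintained as structured data alongside the technical documentation so that it stays reviewable and versioned. The sketch below shows one possible shape; the entries are illustrative and not an exhaustive or authoritative mapping for any particular product.

```python
# Illustrative regulatory matrix kept as structured data; the entries are
# examples, not an authoritative mapping for any real product.
from dataclasses import dataclass

@dataclass
class RegulatoryEntry:
    instrument: str          # e.g. "AI Act", "MDR", "CRA"
    obligation: str          # the specific requirement being tracked
    evidence: str            # the artefact that demonstrates compliance
    owner: str               # accountable team or role
    assessment_route: str    # internal control vs. notified body

matrix = [
    RegulatoryEntry("AI Act (Reg. 2024/1689)", "Risk management system (Art. 9)",
                    "Lifecycle risk file", "AI engineering", "Notified body (via MDR)"),
    RegulatoryEntry("MDR (Reg. 2017/745)", "Clinical evaluation",
                    "Clinical evaluation report", "Clinical affairs", "Notified body"),
    RegulatoryEntry("GDPR", "DPIA for profiling of data subjects",
                    "DPIA record", "Data protection officer", "Internal"),
]
```

Keeping the matrix under version control makes it easier to show market surveillance authorities how responsibilities and evidence evolved across product releases.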
Next, the provider should establish a risk management system that addresses both AI-specific risks (such as data drift, adversarial attacks, and model bias) and product safety risks (such as mechanical failure or incorrect use). The risk management process should be documented and integrated with the product development lifecycle, with clear gates for risk review and mitigation. Data governance should be formalized, covering dataset provenance, quality and representativeness criteria, bias detection and mitigation, and the privacy measures applied to training, validation, and testing data.
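A lightweight way to make the risk-review gates actionable is to check the lifecycle risk register automatically before each review milestone, as in the sketch below; the register schema, status values, and gate names are assumptions chosen for illustration.

```python
# Sketch of a release-gate check over a lifecycle risk register.
# Schema, status values, and gate names are illustrative assumptions.
RISK_REGISTER = [
    {"id": "R-01", "risk": "training-data bias against a protected group",
     "mitigation": "reweighting and fairness evaluation", "status": "mitigated"},
    {"id": "R-07", "risk": "performance drift after deployment",
     "mitigation": "PSI monitoring with alerting", "status": "open"},
]

def gate_passes(register, gate: str) -> bool:
    """Block the gate while any risk is neither mitigated nor formally accepted."""
    open_items = [r["id"] for r in register if r["status"] not in ("mitigated", "accepted")]
    if open_items:
        print(f"{gate}: blocked by unresolved risks {open_items}")
        return False
    print(f"{gate}: all risks mitigated or formally accepted")
    return True

gate_passes(RISK_REGISTER, "design-freeze review")
```

However the gates are implemented, the decision record at each gate, including any formally accepted residual risks, belongs in the risk management file that the notified body will review.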
