Standards as Compliance Infrastructure: ISO, IEC, CEN/CENELEC
Engineering compliance into complex, AI-enabled systems is rarely a matter of simply reading a regulation and writing code. For professionals working with robotics, medical devices, industrial automation, or public-sector AI deployments across Europe, the practical path to meeting legal obligations runs through a dense ecosystem of standards. These technical documents are where high-level principles are translated into testable criteria, auditable processes, and interoperable design decisions. Understanding how this ecosystem functions—how harmonized standards create safe harbors under the EU legal framework, how voluntary standards inform procurement and risk management, and how conformity assessment ties everything together—is essential for any organization seeking to place trustworthy products and services on the European market.
At the center of this ecosystem are three interdependent layers: the EU regulatory framework (the New Legislative Framework, or NLF), the standards development organizations (SDOs) that produce the technical content (ISO, IEC, CEN, CENELEC), and the conformity assessment and market surveillance mechanisms that enforce the rules. The relationship is symbiotic. Regulations set the legal requirements; standards define the means by which those requirements can be met; notified bodies and authorities verify that the means have been applied correctly. For AI-enabled systems, this interplay is especially important because AI is rarely a standalone product. It is embedded in devices, platforms, and services that may fall under multiple regimes, from machinery and medical devices to cybersecurity and data protection.
Regulatory Framework and the Role of Standards
The European Union’s approach to product regulation is anchored in the New Legislative Framework (NLF), a package of legislation adopted in 2008 and reinforced in 2019 that harmonizes rules for placing goods on the EU market. The NLF package comprises Regulation (EC) No 765/2008 on accreditation and market surveillance, Decision No 768/2008/EC on a common framework for the marketing of products, and Regulation (EC) No 764/2008 on mutual recognition; the market surveillance rules were reinforced in 2019 by Regulation (EU) 2019/1020. It establishes common principles for product safety, conformity assessment, market surveillance, and the free movement of goods. Within this framework, harmonized standards play a pivotal role: they provide presumption of conformity with relevant legal requirements when a manufacturer applies them voluntarily.
When a harmonized standard is listed in the Official Journal of the European Union (OJEU) for a specific directive or regulation, compliance with that standard gives a presumption of conformity with the corresponding essential requirements of that legislation.
This mechanism is powerful. It does not make standards mandatory; manufacturers remain free to choose any technical solution that meets the legal requirements. However, following harmonized standards typically reduces regulatory friction, speeds up market access, and simplifies communication with authorities and notified bodies. For AI-enabled systems, this means that relevant standards—whether they address functional safety, cybersecurity, risk management, or data quality—can serve as a practical compliance infrastructure, even while the AI Act itself is not yet fully applicable.
It is important to distinguish between EU-level harmonized standards and national standards. Harmonized standards are developed at European level by CEN and CENELEC (and often aligned with international standards from ISO and IEC) and are then referenced in the OJEU. National standards exist alongside them but do not confer presumption of conformity under EU law. In practice, many national standards are identical or closely aligned to harmonized standards, but the legal effect is tied to the EU-level listing. For professionals, the key is to track the OJEU references for the relevant directives and regulations, and to verify that the versions of standards cited in contracts and technical files are the harmonized ones.
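To make that tracking concrete, the short sketch below compares the standard editions cited in a technical file against a hand-maintained record of OJEU listings and flags mismatches for review. All entries are invented for illustration; the authoritative source is always the Official Journal itself.

```python
# Minimal sketch: flag technical-file citations whose edition differs from
# the harmonized edition recorded from the OJEU. All entries are illustrative.

# Hand-maintained record of OJEU listings (standard -> harmonized edition).
OJEU_LISTINGS = {
    "EN ISO 12100": "2010",
    "EN ISO 13849-1": "2015",
}

# Standards cited in a hypothetical technical file.
TECHNICAL_FILE = {
    "EN ISO 12100": "2010",
    "EN ISO 13849-1": "2008",  # superseded edition still cited
}

def check_citations(cited: dict, listed: dict) -> list[str]:
    """Return human-readable findings for mismatched or unlisted citations."""
    findings = []
    for standard, edition in cited.items():
        if standard not in listed:
            findings.append(f"{standard}:{edition} has no recorded OJEU listing")
        elif listed[standard] != edition:
            findings.append(
                f"{standard}: file cites {edition}, OJEU record lists {listed[standard]}"
            )
    return findings

for finding in check_citations(TECHNICAL_FILE, OJEU_LISTINGS):
    print("REVIEW:", finding)
```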
How Harmonized Standards Work in Practice
Consider a manufacturer integrating a machine learning module into a robotic arm intended for industrial use. The product may fall under the Machinery Regulation (EU) 2023/1230, the Radio Equipment Directive (2014/53/EU), and potentially the EMC and Low Voltage Directives. The manufacturer must meet essential health and safety requirements, including those related to control systems and risk reduction. Harmonized standards such as EN ISO 12100 (risk assessment and risk reduction), EN ISO 13849 (safety-related parts of control systems), and EN IEC 62061 (functional safety of machinery control systems, derived from IEC 61508) provide detailed methods and test criteria to demonstrate compliance. If the AI component influences safety functions, the manufacturer may also need to consider standards addressing software lifecycle processes, verification and validation, and cybersecurity.
Applying these standards is not merely a box-ticking exercise. They require a disciplined engineering approach: hazard analysis, specification of safety requirements, design of safety-related control systems, verification that the design meets the requirements, and validation that the system performs as intended under realistic conditions. For AI-enabled systems, this often raises novel challenges. Standards that assume deterministic logic may need to be interpreted in light of probabilistic or data-driven components. Here, the combination of standards becomes crucial: functional safety standards provide the scaffolding for safety integrity, while AI-specific standards (under development or newly published) address aspects like data governance, robustness, and explainability.
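To give one concrete shape to the "fallback logic" idea, the sketch below wraps a machine learning controller in a deterministic envelope check: if the learned policy proposes a command outside a pre-validated safe range, a conservative fallback controller takes over. The limits, function names, and controller behavior are all hypothetical; a real safety function would be specified, designed, and verified under the applicable functional safety standard.

```python
# Simplified sketch of a safe-envelope fallback around an ML controller.
# Limits and controller behavior are invented for illustration only.

SAFE_MIN_VELOCITY = 0.0   # hypothetical validated envelope (m/s)
SAFE_MAX_VELOCITY = 0.25  # hypothetical collaborative-speed limit (m/s)

def ml_controller(sensor_reading: float) -> float:
    """Stand-in for a learned policy; returns a proposed joint velocity."""
    return 0.4 * sensor_reading  # placeholder for model inference

def fallback_controller(sensor_reading: float) -> float:
    """Deterministic, conservatively tuned controller used when the ML
    proposal leaves the validated envelope."""
    return min(0.1, max(0.0, 0.05 * sensor_reading))

def command_velocity(sensor_reading: float) -> float:
    """Pass ML proposals through only when they stay inside the envelope."""
    proposal = ml_controller(sensor_reading)
    if SAFE_MIN_VELOCITY <= proposal <= SAFE_MAX_VELOCITY:
        return proposal
    print(f"fallback engaged: ML proposed {proposal:.3f} m/s")  # log the event
    return fallback_controller(sensor_reading)

print(command_velocity(0.5))  # inside envelope -> ML output used
print(command_velocity(2.0))  # outside envelope -> deterministic fallback
```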
Conformity Assessment and Notified Bodies
Conformity assessment is the process by which a manufacturer demonstrates that a product meets the applicable legal requirements. Depending on the product and its risk profile, this may be a self-declaration (internal production control) or require the involvement of a third-party notified body. The NLF defines conformity assessment modules A through H (with variants such as A1, A2, C1, C2, D1, E1, F1, and H1), ranging from internal production control to full quality assurance and type examination. For high-risk AI systems under the AI Act, conformity assessment will often involve a notified body, particularly for systems that also fall under other sectoral legislation (e.g., medical devices, machinery).
Notified bodies assess whether the manufacturer’s technical documentation and quality management system comply with the applicable legislation and standards. They do not develop standards, but they interpret them in the context of specific products. A robust technical file will therefore reference the standards used, explain any deviations, and provide evidence (test reports, analysis, validation results) that the requirements are met. For AI-enabled systems, this evidence may include datasets and data governance documentation, model evaluation metrics, robustness testing, human oversight measures, and logging capabilities for post-market monitoring.
ISO, IEC, CEN, and CENELEC: The Standards Landscape
Understanding the roles of the main standards bodies is essential for navigating the compliance landscape:
- ISO (International Organization for Standardization): Develops international standards across a wide range of domains, including quality management, risk management, cybersecurity, and AI. ISO standards are often adopted by CEN and CENELEC as European standards.
- IEC (International Electrotechnical Commission): Focuses on electrical and electronic technologies, including functional safety, EMC, and cybersecurity for industrial systems. Many IEC standards are harmonized under EU directives for electrical equipment and machinery.
- CEN (European Committee for Standardization): Develops European standards for non-electrical products and services, including machinery safety, pressure equipment, and construction products. CEN adopts many ISO standards as EN standards.
- CENELEC (European Committee for Electrotechnical Standardization): Develops European standards for electrical technologies, often aligning with IEC standards. CENELEC standards are frequently harmonized under EU directives.
For AI-enabled systems, the relevant standards may come from multiple sources. For example, a medical device with an AI-based diagnostic component might need to satisfy:
- ISO 13485 (quality management for medical devices),
- ISO 14971 (risk management for medical devices),
- IEC 62304 (software lifecycle processes),
- ISO/IEC 23894 (AI risk management),
- ISO/IEC 42001 (AI management systems),
- EN IEC 60601 series (safety and essential performance of medical electrical equipment),
- EN IEC 82304-1 (health software),
- Cybersecurity standards such as the IEC 62443 series or ETSI EN 303 645 for consumer IoT.
The interplay between these standards is not always straightforward. Some standards are horizontal (applicable across many sectors), while others are vertical (sector-specific). Some documents are normative (containing requirements against which conformity can be assessed), while others, such as technical reports, are informative (guidance only). In the EU context, the legal effect comes from the OJEU listing for the relevant directive or regulation. Manufacturers must therefore maintain a “standards map” that links legal obligations to specific standards and versions, and that documents any gaps or alternative solutions.
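One lightweight way to maintain such a map is as a version-controlled, machine-readable record that links each legal requirement to the standards (and editions) relied on and makes gaps explicit. The sketch below is illustrative only; the cited articles and entries are examples of the record's shape, not a complete mapping.

```python
# Sketch of a machine-readable "standards map". All entries are illustrative.
from dataclasses import dataclass, field

@dataclass
class MappingEntry:
    requirement: str          # legal requirement being addressed
    legislation: str          # directive/regulation it stems from
    standards: list[str] = field(default_factory=list)  # standard + edition
    harmonized: bool = False  # True only if listed in the OJEU
    alternative_solution: str = ""  # documented approach if no standard fits

STANDARDS_MAP = [
    MappingEntry(
        requirement="Risk management system",
        legislation="Regulation (EU) 2024/1689, Art. 9",
        standards=["ISO/IEC 23894:2023"],
        harmonized=False,  # voluntary until an OJEU-listed standard exists
    ),
    MappingEntry(
        requirement="Safety-related control system",
        legislation="Regulation (EU) 2023/1230",
        standards=["EN ISO 13849-1:2015"],
        harmonized=True,
    ),
    MappingEntry(
        requirement="Robustness testing of the ML component",
        legislation="Regulation (EU) 2024/1689, Art. 15",
        alternative_solution="In-house adversarial test protocol (documented)",
    ),
]

# Surface gaps: requirements with neither a standard nor a documented alternative.
for entry in STANDARDS_MAP:
    if not entry.standards and not entry.alternative_solution:
        print("GAP:", entry.requirement)
```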
Voluntary Standards and Market Expectations
Not all standards are harmonized. Many standards are voluntary but widely adopted because they reflect best practice or are required by procurement specifications. For example, public sector procurement for AI systems may require compliance with ISO/IEC 27001 (information security management), ISO/IEC 27701 (privacy information management), or sector-specific standards for data protection and cybersecurity. Similarly, industry consortia and certification schemes (e.g., TISAX for automotive, SOC 2 for cloud services) often reference ISO and IEC standards as baseline criteria.
Voluntary standards can shape market access even without legal presumption of conformity. If a tender specifies that bidders must demonstrate compliance with a particular standard, the bidder must meet that requirement to win the contract. For AI-enabled systems, this is common in public sector deployments where authorities seek assurance on data governance, robustness, and human oversight. In such contexts, voluntary standards become de facto compliance infrastructure, bridging legal expectations and technical implementation.
AI-Specific Standards and the AI Act
The EU AI Act (Regulation (EU) 2024/1689) establishes a risk-based framework for AI systems, with obligations for providers, deployers, importers, and distributors. It sets out essential requirements for high-risk AI systems, including risk management, data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy, robustness, and cybersecurity. The Act explicitly anticipates the role of standards in supporting these requirements. The European Commission has issued a standardization request to CEN and CENELEC to develop the supporting harmonized standards, but that process takes time. Until such standards are available and listed in the OJEU, providers can rely on voluntary standards to help demonstrate compliance.
Several standards are particularly relevant for AI-enabled systems:
- ISO/IEC 23894:2023 (AI risk management): Provides guidance on identifying, analyzing, and treating AI-related risks, aligning with ISO 31000 principles but tailored to AI-specific hazards such as data drift, adversarial attacks, and bias.
- ISO/IEC 42001:2023 (AI management systems): Specifies requirements for establishing, implementing, maintaining, and continually improving an AI management system, analogous to ISO 9001 or ISO 27001 but focused on AI governance.
- ISO/IEC 23053:2022 (framework for AI systems using ML): Defines terminology and a reference architecture for AI systems using machine learning, aiding interoperability and common understanding.
- ISO/IEC TR 24027:2021 (bias in AI systems): Provides guidance on evaluating bias throughout the AI lifecycle, from data collection to model deployment.
- ISO/IEC TR 24028:2020 (trustworthiness): Discusses aspects of AI trustworthiness, including robustness, accuracy, privacy, and transparency.
- ISO/IEC 25000 series (SQuaRE): Addresses software product quality, which can be adapted to evaluate AI model quality and performance.
- ISO/IEC 29119 (software testing): Provides a framework for testing that can be applied to AI systems, though it may need augmentation for data-centric testing (a minimal example follows this list).
- IEC 61508 (functional safety of E/E/PE systems): Often used as a basis for safety-related software and hardware, including components that may contain AI.
- IEC 62443 (industrial communication networks cybersecurity): Relevant for AI systems integrated into industrial control systems.
- ISO/IEC 27001 and ISO/IEC 27701: For information security and privacy management, increasingly referenced in AI procurement and certification.
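As a minimal example of the data-centric augmentation mentioned for ISO/IEC 29119 above, the sketch below expresses two acceptance criteria for a toy classifier as executable checks: an accuracy floor on a held-out set and an invariance check under a small benign perturbation. The model, data, and thresholds are placeholders; real criteria would be derived from the risk analysis.

```python
# Data-centric acceptance checks for a toy classifier. The "model", data,
# and thresholds are placeholders; real criteria come from the risk analysis.
import random

random.seed(0)

def model(features: list[float]) -> int:
    """Stand-in classifier: predicts 1 if the feature sum is positive."""
    return 1 if sum(features) > 0 else 0

# Hypothetical held-out evaluation set: (features, expected label).
eval_set = []
for _ in range(200):
    x = [random.uniform(-1.0, 1.0) for _ in range(3)]
    eval_set.append((x, 1 if sum(x) > 0 else 0))

ACCURACY_FLOOR = 0.95   # acceptance criterion for the held-out set
PERTURBATION = 0.01     # benign input noise the model should tolerate
MAX_FLIP_RATE = 0.05    # tolerated share of perturbation-induced flips

correct = sum(model(x) == y for x, y in eval_set)
accuracy = correct / len(eval_set)
assert accuracy >= ACCURACY_FLOOR, f"accuracy {accuracy:.3f} below floor"

# Invariance check: small perturbations should rarely flip predictions.
flips = sum(model(x) != model([v + PERTURBATION for v in x]) for x, _ in eval_set)
assert flips / len(eval_set) <= MAX_FLIP_RATE, f"{flips} predictions flipped"

print(f"accuracy={accuracy:.3f}, flips={flips}: acceptance checks passed")
```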
For medical AI, additional standards apply, such as ISO 14971 for risk management, IEC 62304 for software lifecycle, and the EN IEC 60601 series for safety and performance. For machinery with AI, EN ISO 12100 and EN ISO 13849 are foundational. For consumer products with AI features, ETSI EN 303 645 (cybersecurity for consumer IoT) and certification schemes under the EU Cybersecurity Act may be relevant.
The AI Act also references the concept of “state of the art.” Standards are a primary means of demonstrating alignment with the state of the art. However, the state of the art evolves rapidly. Providers must monitor updates to standards and adjust their technical documentation and risk management processes accordingly. A static compliance posture is unlikely to remain sufficient.
From Principles to Practice: Building a Standards-Based Compliance Program
For organizations developing or deploying AI-enabled systems in Europe, a practical compliance program integrates legal analysis, standards mapping, engineering practices, and governance processes. The following steps illustrate how this can be structured:
1. Regulatory Scoping and Applicability
Identify the legal regimes that apply to the product or service. This includes determining whether the AI system is high-risk under the AI Act, whether it falls under sectoral legislation (e.g., machinery, medical devices, radio equipment, drones), and whether public procurement or sector-specific certifications are relevant. Document the scope and the essential requirements that apply.
2. Standards Mapping
Map each applicable legal requirement to relevant standards. For EU legislation, check the OJEU for harmonized standards. For requirements not covered by harmonized standards, identify voluntary standards that can demonstrate compliance. Maintain a version-controlled matrix linking requirements to standards and to internal engineering practices.
3. Technical Documentation and Evidence
Prepare technical documentation that references the standards used and provides evidence of compliance. For AI systems, this includes the following (a simple completeness check is sketched after the list):
- System description and intended purpose,
- Risk management file (including AI-specific risks),
- Data governance documentation (sources, labeling, quality metrics, bias mitigation),
- Model development and validation reports (performance, robustness, explainability),
- Human oversight design and user instructions,
- Logging and monitoring plan for post-market surveillance,
- Cybersecurity and privacy controls.
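A simple way to keep this evidence set honest is a manifest check that fails loudly when a required artifact is missing from the technical file. The item names and file paths below are hypothetical placeholders.

```python
# Sketch: verify that each required technical-documentation artifact exists.
# Item names and file paths are hypothetical placeholders.
from pathlib import Path

REQUIRED_ARTIFACTS = {
    "system description": "docs/system_description.md",
    "risk management file": "docs/risk_management_file.md",
    "data governance documentation": "docs/data_governance.md",
    "model validation report": "docs/validation_report.md",
    "human oversight design": "docs/human_oversight.md",
    "post-market monitoring plan": "docs/monitoring_plan.md",
    "cybersecurity and privacy controls": "docs/security_controls.md",
}

def missing_artifacts(root: Path) -> list[str]:
    """Return the names of required artifacts not present under `root`."""
    return [
        name
        for name, rel_path in REQUIRED_ARTIFACTS.items()
        if not (root / rel_path).is_file()
    ]

gaps = missing_artifacts(Path("."))
if gaps:
    print("Technical file incomplete; missing:", ", ".join(gaps))
else:
    print("All required artifacts present.")
```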
4. Conformity Assessment
Select the appropriate conformity assessment module. If a notified body is required, engage early and provide a clear mapping to standards. For self-declaration, ensure internal audit and review processes are robust. Keep records of test results, reviews, and decisions regarding deviations from standards.
5. Procurement and Contracting
When procuring AI-enabled systems or components, specify required standards in tender documents and contracts. For public sector buyers, align requirements with the AI Act’s obligations and relevant cybersecurity frameworks. For suppliers, require evidence of compliance and the right to audit or review technical documentation.
6. Post-Market and Lifecycle Management
Implement continuous monitoring to detect performance degradation, data drift, adversarial threats, and misuse. Update risk assessments and technical documentation as standards evolve or as the system’s operating context changes, and manage all such updates through a controlled change management process.
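As one deliberately simple example of such monitoring, the sketch below estimates input drift with the population stability index (PSI) between a training-time feature sample and recent production values. The bin count, alert threshold, and data are illustrative conventions, not regulatory values.

```python
# Sketch: population stability index (PSI) as a simple input-drift signal.
# Bin count, threshold, and data are illustrative choices, not normative values.
import math
import random

random.seed(1)

def psi(expected: list[float], observed: list[float], bins: int = 10) -> float:
    """PSI between two samples, binned over the range of `expected`."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def shares(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for v in sample:
            idx = min(int((v - lo) / width), bins - 1)
            counts[max(idx, 0)] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]

    e, o = shares(expected), shares(observed)
    return sum((oi - ei) * math.log(oi / ei) for ei, oi in zip(e, o))

training = [random.gauss(0.0, 1.0) for _ in range(5000)]    # reference sample
production = [random.gauss(0.4, 1.2) for _ in range(1000)]  # shifted inputs

score = psi(training, production)
# 0.2 is a widely used rule-of-thumb alert level, not a regulatory limit.
print(f"PSI={score:.3f} ->", "investigate drift" if score > 0.2 else "stable")
```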
Interpreting Standards for AI-Enabled Systems
Many existing standards were developed before AI became widespread. They often assume deterministic behavior and well-defined interfaces. Applying them to AI requires careful interpretation. For example:
- EN ISO 13849 (safety-related control systems) defines Performance Levels (PL) based on architecture and diagnostic coverage. If an AI component influences safety functions, the manufacturer must determine how to classify its contribution and what diagnostic measures are appropriate. This may involve using simulation-based testing, diversity in sensor inputs, or fallback logic.
- IEC 61508 requires verification and validation of safety-related software. For AI, this may include dataset validation, adversarial testing, and statistical performance evaluation (see the sketch after this list). The standard does not prescribe specific AI techniques, but it does require evidence that the system meets its safety requirements under foreseeable conditions.
- ISO 14971 requires risk analysis that includes both harm and probability. For AI, probability of failure may be difficult to quantify due to data dependencies and context sensitivity. Providers may need to use scenario-based analysis and operational constraints to manage residual risk.
- ISO/IEC 23894 helps structure AI risk management, but it does not replace sector-specific risk requirements. It should be used in conjunction with applicable sectoral standards.
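To illustrate the statistical performance evaluation mentioned under IEC 61508 above, the sketch below checks whether a claimed maximum failure rate is supported by test evidence, using an exact (Clopper-Pearson style) upper confidence bound on the observed failure proportion. The trial counts, target rate, and confidence level are invented; in practice they come from the safety requirements.

```python
# Sketch: exact upper confidence bound on a failure rate from test evidence.
# Trial counts and the target rate are invented for illustration.
import math

def binom_cdf(x: int, n: int, p: float) -> float:
    """P(X <= x) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(x + 1))

def upper_bound(failures: int, trials: int, confidence: float = 0.95) -> float:
    """Clopper-Pearson style upper bound on the true failure probability."""
    lo = failures / trials if trials else 0.0
    hi = 1.0
    for _ in range(60):  # bisection gives ample precision
        mid = (lo + hi) / 2
        if binom_cdf(failures, trials, mid) > 1 - confidence:
            lo = mid  # bound still too low: the observed count is too likely
        else:
            hi = mid
    return hi

TRIALS = 10_000     # hypothetical validation runs
FAILURES = 3        # observed unsafe outcomes
TARGET_RATE = 1e-3  # claimed maximum failure rate (from safety requirements)

bound = upper_bound(FAILURES, TRIALS)
verdict = "supported" if bound <= TARGET_RATE else "NOT supported"
print(f"95% upper bound {bound:.2e}; claim of <= {TARGET_RATE:.0e} is {verdict}")
```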
Interpretation should be documented. When standards are ambiguous or incomplete, providers should record their rationale, reference relevant guidance, and implement compensating controls. This documentation will be critical during audits and conformity assessments.
Cross-Border Considerations and National Implementation
While EU legislation harmonizes many requirements, national implementation and market surveillance practices can vary. Some countries have stricter enforcement or specific expectations for documentation. For example:
- Germany: Strong emphasis on functional safety and the Machinery Directive; conformity assessment often involves rigorous testing and documentation. The German national standard
