
AI + Robotics: Dual-Compliance Traps Across Jurisdictions

Integrating artificial intelligence into physical robotics creates a complex regulatory landscape in which software obligations and hardware liabilities converge. A standalone software system may fail by producing a biased output; a robot powered by that same system may fail by causing physical harm, property damage, or systemic disruption. For product teams operating in Europe, this convergence is not theoretical. It is a structural reality that requires navigating two parallel regimes: the horizontal obligations of the Artificial Intelligence Act (AI Act) and the vertical responsibilities embedded in the EU’s product safety and machinery frameworks. This article explains how these regimes interact in practice, contrasts them with the sectoral approach in the United States, the security-centric model in China, and selected APAC practices, and provides a practical dual-compliance checklist for engineering, legal, and governance teams.

The EU Dual-Regime Reality: When a Robot Becomes Both Product and System

In the European Union, adding AI to a robot does not replace traditional product compliance; it layers additional obligations on top. A mobile robot used in logistics, a surgical assistant, or a collaborative manufacturing arm is simultaneously a product placed on the market and a potentially high-risk AI system under the AI Act. This dual status triggers two sets of duties: one concerning the safety and conformity of the hardware and its embedded software under product legislation, and another concerning the trustworthiness, transparency, and governance of the AI functions under the AI Act.

Product Safety and Machinery Legislation

Most robots fall within the scope of the Machinery Regulation (EU) 2023/1230, which applies from January 2027, replacing the Machinery Directive. The Regulation mandates that machinery be designed and constructed so that it is safe for its intended use. It addresses mechanical risks, electrical safety, control systems, and the integration of safety components. For machinery that incorporates AI, the Regulation anticipates the interplay with the AI Act, notably where AI functions are used for safety-related tasks. If an AI component performs a safety function (for example, real-time obstacle avoidance or collision prediction), it must meet the applicable essential requirements for control systems and reliability. The manufacturer must assess conformity through appropriate modules, which may involve a notified body if safety components are involved or if the machinery presents higher risks.

Separately, the original Product Liability Directive (PLD) is being replaced by a revised Product Liability Directive (Directive (EU) 2024/…) and complemented by the proposed AI Liability Directive. The revised PLD explicitly treats software, including AI systems, as a product. It also introduces a rebuttable presumption of defectiveness where, for example, a manufacturer fails to comply with mandatory safety requirements or where an obvious malfunction causes damage during normal use. For robotics, this means that non-compliance with the AI Act’s conformity assessment for high-risk systems could be used as evidence of defectiveness in civil liability claims. The AI Liability Directive, if adopted, would further ease the burden of proof for claimants where AI causally contributes to harm, particularly in cases of non-compliance with the AI Act.

AI Act Obligations for Robotics

Many AI-enabled robots will be classified as high-risk AI systems under the AI Act: either under Article 6(1), because the AI functions as a safety component of a product covered by EU harmonisation legislation listed in Annex I (such as machinery, toys, or medical devices), or under Annex III, in areas such as critical infrastructure, employment, and biometrics. The obligations for high-risk systems are extensive and operational:

  • Risk Management System: Continuous, iterative process covering identification, analysis, and mitigation of risks throughout the robot’s lifecycle.
  • Data and Data Governance: Training, validation, and testing data must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete.
  • Technical Documentation: Comprehensive documentation covering system design, development, testing, and risk management, capable of demonstrating conformity.
  • Record-Keeping: Automatic logging of events (e.g., safety overrides, sensor failures, human interventions) to ensure traceability (see the logging sketch after this list).
  • Transparency and Provision of Information: Clear instructions for use, including intended purpose, limitations, and human oversight measures.
  • Human Oversight: Measures enabling human intervention or override, appropriate to the risk and context (e.g., a collaborative robot with a physical stop button or a supervisory dashboard).
  • Accuracy, Robustness, and Cybersecurity: The system must achieve appropriate performance metrics and resist errors, perturbations, and unauthorized access.
  • Conformity Assessment: Depending on risk level, internal control or third-party notified body involvement is required.
  • Quality Management System (QMS): A documented QMS covering design, development, testing, and post-market surveillance, aligned with ISO 9001 principles and sector-specific standards.
  • Post-Market Monitoring: Continuous collection and analysis of performance data in the field, with reporting of serious incidents to national authorities.
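
To make the record-keeping duty concrete, the following minimal Python sketch shows one way to implement append-only event logging for traceability. The event types, field names, and file format are illustrative assumptions, not requirements taken from the AI Act or any harmonised standard.

```python
# Minimal sketch of AI Act-style automatic event logging for a robot.
# Event names and fields are illustrative assumptions only.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class RobotEvent:
    timestamp: float   # Unix time of the event
    event_type: str    # e.g. "safety_override", "sensor_failure"
    component: str     # subsystem that raised the event
    detail: str        # free-text context for investigators

class TraceLog:
    """Append-only log intended to support post-incident traceability."""

    def __init__(self, path: str):
        self.path = path

    def record(self, event: RobotEvent) -> None:
        # One JSON object per line, so the log is easy to ship to a
        # retention store and to replay chronologically.
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(event)) + "\n")

log = TraceLog("robot_trace.jsonl")
log.record(RobotEvent(time.time(), "human_intervention",
                      "arm_controller", "Operator pressed e-stop"))
```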

For robotics, the practical challenge lies in mapping these obligations to existing engineering workflows. For example, “human oversight” in a high-speed packaging robot may be limited to supervisory control and safe stop functions, whereas a surgical robot requires meaningful human decision-making at critical steps. The AI Act requires that oversight be appropriate to the context and risk profile.
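
As a rough illustration of context-dependent oversight, the sketch below gates robot actions behind either supervisory or confirmatory control, mirroring the packaging-robot versus surgical-robot contrast above. The modes, action names, and approval mechanism are invented for this example.

```python
# Illustrative sketch of context-dependent human oversight, assuming a
# simple action-gating model; risk tiers and action names are invented.
from enum import Enum

class OversightMode(Enum):
    SUPERVISORY = "supervisory"    # human can stop; no per-action approval
    CONFIRMATORY = "confirmatory"  # human must approve each critical step

def execute(action: str, mode: OversightMode, approved: bool = False) -> bool:
    """Run an action only if the oversight policy allows it."""
    if mode is OversightMode.CONFIRMATORY and not approved:
        print(f"Blocked '{action}': awaiting explicit human approval")
        return False
    print(f"Executing '{action}' under {mode.value} oversight")
    return True

# A packaging robot may run under supervisory control...
execute("seal_carton", OversightMode.SUPERVISORY)
# ...while a surgical robot requires confirmation at critical steps.
execute("advance_needle", OversightMode.CONFIRMATORY, approved=True)
```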

Conformity Assessments and the CE Marking

When a robot is both machinery and a high-risk AI system, the conformity assessment must address both regimes. The manufacturer may need to:

  • Undergo an AI Act conformity assessment, potentially involving a notified body for certain high-risk AI systems (depending on the specific Annex III use case and applicable harmonised standards).
  • Apply machinery conformity assessment modules, ensuring that safety functions (including those implemented by AI) meet essential requirements.
  • Issue a Declaration of Conformity referencing both the AI Act and relevant product legislation (e.g., Machinery Regulation, Radio Equipment Directive if wireless, EMC Directive, Low Voltage Directive).
  • Apply the CE marking.

It is important to note that the AI Act does not replace product legislation. Instead, it adds a horizontal layer. A robot may be compliant with the Machinery Regulation but still non-compliant with the AI Act if, for example, its data governance or risk management processes are inadequate. Conversely, robust AI governance does not excuse non-compliance with mechanical safety requirements.

National Implementation and Market Surveillance

While the AI Act is a Regulation (directly applicable across the EU), its enforcement is national. Each Member State designates market surveillance authorities for AI systems and for products. In practice, a robot sold in Germany may be overseen by the Federal Network Agency (BNetzA) for AI aspects and by the regional Gewerbeaufsichtsamt (industrial safety authority) for machinery safety. In France, the DGCCRF may handle product safety, while the French AI regulator (to be designated) handles AI compliance. Companies must be prepared for divergent enforcement styles: some authorities will be technical and collaborative, others more punitive. Reporting obligations for serious incidents under the AI Act (within 15 days of becoming aware) and under the Machinery Regulation (where the machinery presents a serious risk) may involve different templates and channels, as the sketch below illustrates.
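
A hedged sketch of how a compliance team might track these divergent reporting deadlines follows. The 15-day AI Act window comes from the text above; the channel labels and the "without delay" treatment of the Machinery Regulation are simplifying assumptions.

```python
# Hedged sketch of tracking divergent incident-reporting deadlines.
# Channel names and the Machinery Regulation handling are assumptions.
from datetime import date, timedelta

REPORTING_RULES = {
    # regime: (deadline in days after awareness, notional channel)
    "ai_act_serious_incident": (15, "national AI market surveillance authority"),
    "machinery_serious_risk": (None, "national product safety authority"),
}

def reporting_deadline(regime: str, aware_on: date) -> str:
    days, channel = REPORTING_RULES[regime]
    if days is None:
        return f"Report without delay via {channel}"
    return f"Report by {aware_on + timedelta(days=days)} via {channel}"

print(reporting_deadline("ai_act_serious_incident", date(2027, 9, 1)))
```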

Timelines and Transitional Provisions

There is a phased timeline that product teams must internalize:

  • 2025: Prohibited-practice and AI-literacy provisions apply from February 2025; obligations for general-purpose AI (GPAI) models apply from August 2025 to models placed on the market from that date, with models already on the market given until August 2027. This affects foundation models that may be embedded in robotics platforms.
  • 2026: The bulk of the AI Act applies from August 2026, including the obligations for high-risk AI systems listed in Annex III.
  • 2027: The Machinery Regulation applies from January 2027; AI Act obligations for high-risk AI systems that are safety components of Annex I products (including machinery) apply from August 2027.

During the transition, early adoption of harmonised standards (e.g., expected updates to EN ISO 12100, EN ISO 13849, and new standards addressing AI safety) is advisable in order to benefit from a presumption of conformity.

United States: Sectoral and Safety-Focused Approach

The United States does not have a horizontal AI Act equivalent. Instead, AI governance is sectoral and risk-based, anchored by existing regulators and statutes. For robotics, this means compliance is driven by the product’s domain and the agency with jurisdiction.

Key Regulatory Touchpoints

  • Food and Drug Administration (FDA): Software as a Medical Device (SaMD) and AI/ML-enabled medical devices require premarket submission (510(k), De Novo, or PMA). The FDA’s Predetermined Change Control Plan (PCCP) allows manufacturers to propose updates to AI models within approved bounds without new submissions for each change (see the change-control sketch after this list).
  • National Highway Traffic Safety Administration (NHTSA): For autonomous vehicles, NHTSA enforces federal motor vehicle safety standards, and its Standing General Order (SGO) mandates reporting of crashes involving vehicles equipped with Level 2 ADAS or automated driving systems. The agency also issues recalls where safety defects are identified.
  • Occupational Safety and Health Administration (OSHA): Workplace robotics must meet general duty and machine guarding requirements. Collaborative robots often rely on consensus standards (e.g., ANSI/RIA R15.06, which maps to ISO 10218) to demonstrate safety.
  • Federal Trade Commission (FTC): The FTC enforces against deceptive practices and algorithmic bias that harms consumers. While not a dedicated AI regulator, it has shown willingness to act on data practices and algorithmic accountability.
  • Consumer Product Safety Commission (CPSC): For consumer robots (e.g., vacuums, toys), the CPSC enforces safety standards and can require recalls for hazards.
  • Department of Commerce/BIS: Export controls on advanced AI and robotics technologies, particularly where dual-use is suspected.

At the federal level, the NIST AI Risk Management Framework (AI RMF) provides voluntary guidance on trustworthy AI development. It is not binding law but is widely referenced in procurement and contracting. Several states (e.g., California, Illinois, Colorado) have enacted laws addressing algorithmic transparency, bias, or automated decision-making in employment and insurance. The patchwork nature means that a robot sold nationwide may need to comply with multiple state-level requirements in addition to sectoral federal rules.

Practical Implications for Robotics Teams

For robotics developers, US compliance is less about a single conformity assessment and more about:

  • Mapping the robot’s functions to the relevant regulator (medical, vehicle, industrial, consumer).
  • Preparing for premarket reviews where applicable (e.g., FDA) or incident reporting (NHTSA).
  • Adopting consensus standards to demonstrate due care in tort liability.
  • Documenting model updates and change control, particularly for AI components that learn or adapt.

Unlike the EU’s AI Act, there is no general requirement for a QMS or post-market monitoring system across all AI systems. However, sectoral rules often impose similar duties indirectly (e.g., FDA’s PCCP, NHTSA’s defect reporting). Liability risk in the US is primarily driven by product liability tort law, where adherence to standards and robust documentation are key defenses.

China: Security, Certification, and Data Governance

China’s approach to AI and robotics centers on security, data governance, and state oversight. The regulatory framework is evolving rapidly and is more prescriptive in certain areas than the EU or US.

Key Regulatory Elements

  • Algorithmic Recommendations and Deep Synthesis: Regulations on algorithmic recommendations and deep synthesis (e.g., deepfakes) require transparency, labeling, and filing with the Cyberspace Administration of China (CAC). For robotics, this can apply to recommendation systems guiding robot behavior or synthetic media interfaces.
  • Generative AI Measures: Providers of generative AI services must comply with content security requirements, labeling obligations, and data protection rules. Services accessible to the public require security assessments and filings.
  • Data Security Law (DSL) and Personal Information Protection Law (PIPL): Strict controls on data processing, cross-border transfers, and sensitive data categories. Robotics that collect biometric or location data face heightened compliance.
  • Robot Safety Standards: China has national standards for industrial robot safety (mapping to ISO 10218) and collaborative robot safety. Compliance is often tied to mandatory product certification schemes (CCC) for certain categories.
  • Autonomous Driving Pilots and Local Rules: Cities like Beijing, Shanghai, and Shenzhen have issued local regulations for autonomous driving testing and operations, with permit requirements and liability frameworks.

For robotics, the key difference from the EU is the emphasis on security reviews and content controls, alongside traditional safety certification. Cross-border data flows are a significant constraint: training data transfer and telemetry from robots operating in China may require localization or specific transfer mechanisms.

Selected APAC Practices: A Spectrum of Approaches

APAC countries are adopting diverse strategies, ranging from guidance-based models to targeted legislation.

Singapore

Singapore’s Model AI Governance Framework provides practical guidance for private sector adoption. It is not legally binding but influences procurement and best practices. For robotics, Singapore emphasizes human oversight, explainability where feasible, and robust data management. The country also has strong product safety regimes under the Consumer Protection (Fair Trading) Act and sectoral regulators for industrial safety.

Japan

Japan favors a principles-based approach, with the Social Principles of Human-Centric AI and sectoral guidelines. The Act on the Protection of Personal Information (APPI) governs data. For robotics, Japan relies on voluntary standards and industry codes, with a strong emphasis on safety in manufacturing and eldercare robotics. The government encourages regulatory sandboxes to test new AI-enabled products.

South Korea

South Korea enacted the Basic Act on AI (passed in December 2024, effective 2026). It establishes a horizontal framework for AI safety and transparency, with obligations for high-impact AI and providers of AI products. For robotics, this will add AI-specific duties on top of existing product safety and industrial safety laws. The Act emphasizes risk assessments, transparency, and human oversight, aligning in many respects with the EU AI Act but with a distinct national enforcement structure.

Australia

Australia’s approach remains largely voluntary, with the AI Ethics Principles and guidance from the Office of the Australian Information Commissioner on privacy. Product safety laws apply to robotics, and there is ongoing policy work on AI regulation. For now, compliance focuses on privacy, consumer protection, and safety standards.

Cross-Border Compliance Strategy: From Principles to Practice

Operating across these jurisdictions requires a strategy that is both modular and integrated. A purely jurisdiction-specific approach leads to duplication and fragility. A more robust approach is to build a “highest common denominator” baseline and then add jurisdiction-specific modules.
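
The sketch below expresses that strategy as data: one shared baseline of controls plus per-jurisdiction overlays, whose union defines the release gate for a given market mix. The control names are illustrative shorthand, not drawn from any statute.

```python
# Sketch of the "highest common denominator" strategy: one baseline
# control set plus jurisdiction modules. Control names are illustrative.
BASELINE = {"risk_management", "event_logging", "human_oversight",
            "cybersecurity", "technical_documentation"}

JURISDICTION_MODULES = {
    "EU": {"ce_marking", "qms", "post_market_monitoring"},
    "US": {"sector_premarket_review", "consensus_standards"},
    "CN": {"security_assessment", "data_localization", "ccc_certification"},
}

def required_controls(markets: list[str]) -> set[str]:
    """Union of the shared baseline and every target market's module."""
    controls = set(BASELINE)
    for market in markets:
        controls |= JURISDICTION_MODULES[market]
    return controls

print(sorted(required_controls(["EU", "CN"])))
```

The design point is that the baseline never shrinks per market; jurisdiction modules only ever add controls on top of it.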

Comparative Lens: EU vs US vs China

  • Scope: The EU applies a horizontal AI Act to all high-risk systems; the US is sectoral; China emphasizes security and data.
  • Conformity: EU requires documented QMS, conformity assessment, and CE marking; US relies on sectoral premarket reviews and standards; China uses certification and filing requirements.
  • Liability: EU introduces presumptions of defectiveness tied to non-compliance; US relies on tort law and standards adherence; China’s liability framework is evolving but tied to safety certification and data security.
  • Data: EU has GDPR and data minimization; US has sectoral privacy laws; China has DSL/PIPL with strict cross-border controls.
  • Updates: EU requires post-market monitoring and incident reporting; US uses sectoral reporting (e.g., NHTSA, FDA PCCP); China requires security assessments for public-facing AI updates.

Dual-Compliance Checklist for Robotics Product Teams

The following checklist is designed for teams building AI-enabled robots intended for multi-jurisdictional deployment. It integrates legal, engineering, and operational tasks.

1. Documentation and Governance

  • Establish a Quality Management System that covers both product safety and AI governance. Map QMS procedures to ISO 9001, ISO 13485 (if medical), and expected AI Act requirements (risk management, data governance, change control).
  • Prepare a Technical Documentation Set (see the manifest sketch after this checklist block) that includes:
    • System architecture and intended purpose.
    • Risk analysis (ISO 12100 and AI Act risk management).
    • Data governance statement (sources, cleaning, labeling, bias mitigation).
    • Performance metrics, test protocols, and validation reports.
    • Human oversight design rationale and user instructions.
    • Cybersecurity measures and threat modeling.
  • Draft a Declaration of Conformity referencing applicable EU legislation (AI Act, Machinery Regulation, RED, EMC, LVD) and US/China standards where relevant.
  • Implement Record-Keeping for AI events and safety incidents, with retention periods aligned to the AI Act (automatically generated logs kept for at least six months; technical documentation kept for ten years after the product is placed on the market) and to sectoral US requirements.
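
As a starting point for the manifest sketch referenced above, the following Python fragment checks that each documentation artifact exists in a repository. The artifact names and paths are placeholders, and the presence of a file is of course no substitute for substantive conformity.

```python
# Minimal manifest sketch for the technical documentation set;
# artifact names and paths are placeholders.
from pathlib import Path

REQUIRED_ARTIFACTS = {
    "architecture_and_intended_purpose": "docs/architecture.md",
    "risk_analysis": "docs/risk_analysis.md",
    "data_governance_statement": "docs/data_governance.md",
    "validation_reports": "docs/validation",
    "oversight_rationale_and_instructions": "docs/oversight.md",
    "cybersecurity_threat_model": "docs/threat_model.md",
}

def missing_artifacts(root: str) -> list[str]:
    """List documentation artifacts not yet present under the repo root."""
    base = Path(root)
    return [name for name, rel in REQUIRED_ARTIFACTS.items()
            if not (base / rel).exists()]

print(missing_artifacts("."))  # e.g. ['risk_analysis', ...] on a fresh repo
```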

2. Testing and Validation

  • Conduct Conformity Assessments