Liability for AI, Biotech Software, and Robots Worldwide: Who Pays, and What Evidence Wins
Assigning responsibility when an autonomous system causes harm is no longer a theoretical exercise for legal scholars; it is a daily operational challenge for engineering teams, compliance officers, and insurers. The central question is not simply who is at fault, but which records prove or disprove causation and whether the responsible entity can pay. In practice, liability for AI, biotech software, and robots is a composite of product liability law, fault-based negligence, sector-specific safety rules, and contractual risk allocation, all of which hinge on evidence that must be generated, protected, and retained long before an incident occurs. This article analyzes how liability is assigned and what evidence wins in the European Union, the United States, the United Kingdom, China, Japan, and Korea, with a focus on design and governance patterns that meaningfully reduce exposure.
Foundations of Liability: From Fault to Strict Responsibility
Liability regimes differ along a spectrum from fault-based to strict liability. In fault-based systems, the claimant must prove a breach of a duty of care, causation, and damage. In strict liability regimes, particularly for defective products, the claimant need only show that the product was defective, caused harm, and was placed on the market. Across jurisdictions, AI and robotics often sit at the intersection of both: strict liability for placing a defective product on the market, and fault-based liability for negligent design, deployment, or post-market surveillance. Sector-specific regulations add layers of obligations that can become the benchmark for “reasonable care.”
Key Concepts in the EU
Under the EU’s revised Product Liability Directive (PLD), a producer is liable for damage caused by a defect in a product. Defectiveness is assessed against the level of safety that the public at large is entitled to expect, taking into account all circumstances, including the product’s presentation, reasonably foreseeable use, and the time it was placed on the market. The PLD explicitly covers software, including AI systems and digital manufacturing files. It also introduces presumptions of defectiveness in certain conditions, such as where the producer failed to comply with relevant safety requirements or where the claimant faces excessive difficulty in proving defectiveness due to technical or scientific complexity. Importantly, the PLD is complemented by the proposed AI Liability Directive (AILD), which aims to harmonize rules for non-contractual damage caused by AI systems and to ease the burden of proof for victims through a presumption of causality and court-ordered disclosure of evidence.
At the regulatory level, the EU AI Act imposes obligations on providers, deployers, and other actors based on risk classification. High-risk AI systems must meet requirements on risk management, data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy, robustness, and cybersecurity. Non-compliance can be used as evidence of fault and can trigger administrative fines, which may also influence civil liability assessments. In biotech, sector-specific rules such as the Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR) impose strict conformity assessment, clinical evidence, and post-market surveillance obligations. For robots, the Machinery Regulation and the General Product Safety Regulation set safety baselines.
US Approaches: Product Liability and Negligence
In the United States, liability is primarily governed by state law, with a mix of product liability (often strict) and negligence claims. The Restatement (Third) of Torts: Products Liability frames defect categories as manufacturing defects, design defects, and failure-to-warn defects. Many states apply a risk-utility test for design defects, which asks whether the product’s foreseeable risks outweigh its benefits and whether a reasonable alternative design was feasible. For software and AI, courts may treat algorithms as products or as services, affecting whether strict product liability applies. The learned intermediary doctrine remains important in medical contexts, where manufacturers typically discharge warning duties by informing healthcare providers rather than end patients. Federal regulations (e.g., FDA requirements for medical devices) can set standards of care; violations may be used as evidence of negligence per se. Preemption issues arise when federal law displaces state tort claims, particularly for medical devices that have undergone premarket approval.
UK: Post-Brexit Nuances
The UK retains a product liability regime closely aligned with the EU’s pre-Brexit framework through the Consumer Protection Act 1987, which imposes strict liability for defective products. Negligence claims remain available for personal injury and other losses falling outside the statutory regime, though recovery for pure economic loss is restricted. The UK’s approach to AI liability is evolving; it has not adopted an equivalent of the EU AI Act, but has consulted on a pro-innovation regulatory framework that relies on existing regulators and cross-cutting principles, with the possibility of tailored legislation later. Courts will look to industry standards and regulatory guidance as benchmarks for reasonable care. Insurance plays a significant role, with product liability policies commonly used, though exclusions for AI-related risks are increasingly scrutinized.
China: A Strict, State-Guided Regime
China imposes strict liability on producers for product defects that cause harm under the Product Quality Law and the Civil Code (which absorbed the former Tort Liability Law). A victim may claim against either the producer or the seller, and the party not responsible for the defect can seek indemnity from the one that is; defects arising in design or production ultimately rest with the producer. China has introduced specific regulations for algorithmic recommendations and generative AI, requiring disclosure, content moderation, and safety assessments. In biotech, strict rules govern clinical trials and genetic data, with heavy penalties for non-compliance. Evidence expectations are high: producers must maintain detailed records of design, testing, and ongoing monitoring. State involvement is more direct than in Western jurisdictions, and administrative penalties can be severe, influencing civil liability.
Japan and Korea: Civil Code and Sectoral Safety
Japan’s Civil Code imposes liability for negligence, while the Product Liability Act imposes strict liability for defects that cause injury or damage. Courts require proof of defect, causation, and damage; “defect” is assessed as a lack of the safety the product should ordinarily provide given its nature, intended purpose, and normal manner of use. Japan’s approach to AI governance is risk-based and sectoral, with guidelines for AI use cases and safety standards for robotics. Korea’s Product Liability Act also imposes strict liability, with courts increasingly attentive to software defects. Korea’s AI Framework Act and sector-specific rules emphasize safety, transparency, and data governance. In both countries, insurers are developing specialized AI coverage, and regulators encourage voluntary standards to mitigate liability.
What Evidence Wins: Logs, Change Control, Validation, and Training Records
Liability turns on evidence. In practice, the winning evidence package combines technical records, governance artifacts, and operational logs that collectively tell the story of how a system was designed, tested, deployed, and maintained. The following elements are consistently decisive across jurisdictions.
System Logs and Telemetry
Comprehensive, tamper-evident logs are foundational. They should capture inputs, outputs, timestamps, system states, error codes, and environmental conditions. For robots, this includes sensor readings, actuator commands, and safety interlocks. For AI systems, it includes inference requests, model versions, feature vectors (or anonymized equivalents), and confidence scores. Logs must be retained in a manner that ensures integrity (e.g., cryptographic hashing) and chain of custody. In the EU, the AI Act’s record-keeping obligations for high-risk systems explicitly require logging to ensure traceability and oversight. In the US, courts often treat detailed logs as strong evidence of what happened and whether a system performed as intended. In China, regulators expect real-time monitoring and retention aligned with data localization rules. In Japan and Korea, robust logging is considered a hallmark of due diligence.
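As a minimal illustration of tamper-evident logging (the field names and storage approach below are assumptions for the sketch, not requirements drawn from any of the regimes above), each log entry can carry the cryptographic hash of the previous entry, so later alteration or deletion of any record becomes detectable:

```python
import hashlib
import json
import time

def append_log_entry(log: list[dict], event: dict) -> dict:
    """Append an event to a hash-chained audit log.

    Each entry stores the SHA-256 hash of the previous entry, so
    modifying or deleting any earlier record breaks the chain.
    """
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "timestamp": time.time(),   # when the event occurred
        "event": event,             # e.g. inputs, outputs, model version
        "prev_hash": prev_hash,     # link to the preceding entry
    }
    payload = json.dumps(entry, sort_keys=True).encode("utf-8")
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

def verify_log(log: list[dict]) -> bool:
    """Recompute every hash and confirm the chain is unbroken."""
    prev_hash = "0" * 64
    for entry in log:
        body = {k: entry[k] for k in ("timestamp", "event", "prev_hash")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode("utf-8")
        ).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["entry_hash"] != expected:
            return False
        prev_hash = entry["entry_hash"]
    return True

# Example: record one inference event and verify integrity.
audit_log: list[dict] = []
append_log_entry(audit_log, {
    "model_version": "2.3.1",
    "request_id": "abc-123",
    "confidence": 0.94,
    "human_override": False,
})
assert verify_log(audit_log)
```

In production, the chain head is typically anchored in an external or write-once store so that the whole log cannot be silently rewritten.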
Change Management and Version Control
Proving that a system was not defective at the time of placement on the market requires a clear record of changes. This includes version control for code and models, change requests, approvals, test results, and rollback plans. A well-documented change control process demonstrates that risks were evaluated before updates were deployed. For AI, this extends to dataset versions, labeling guidelines, and model training runs. In biotech software, change control must align with regulated quality management systems (e.g., ISO 13485, GxP). In the EU, the PLD’s focus on the state of the art at the time of placing on the market makes change history critical to show that the producer kept pace with safety knowledge. In the US, a failure to follow internal change control procedures can be used to establish negligence.
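To make this concrete, here is a minimal sketch of what a structured change record might look like, with a simple release gate that refuses to ship without a risk assessment, test evidence, approvals, and a rollback plan (the field names and gating rule are illustrative assumptions, not a prescribed format):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ChangeRecord:
    """One auditable entry in a change-control log."""
    change_id: str
    description: str
    affected_artifacts: list[str]   # e.g. code modules, model or dataset versions
    risk_assessment: str            # summary or link to the risk analysis
    test_results: list[str]         # links to validation reports
    approvals: list[str] = field(default_factory=list)  # named approvers
    rollback_plan: str = ""
    deployed_on: date | None = None

    def is_release_ready(self) -> bool:
        """A change should not ship without assessment, evidence, approval, and a rollback path."""
        return bool(self.risk_assessment and self.test_results
                    and self.approvals and self.rollback_plan)

# Example: a model update gated on complete evidence.
change = ChangeRecord(
    change_id="CR-2024-017",
    description="Retrain triage model on Q3 dataset v5",
    affected_artifacts=["model:triage-2.4.0", "dataset:q3-v5"],
    risk_assessment="risk/CR-2024-017.md",
    test_results=["reports/CR-2024-017-validation.pdf"],
    approvals=["QA lead", "Safety officer"],
    rollback_plan="Revert to model:triage-2.3.1 via registry pin",
)
assert change.is_release_ready()
```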
Validation, Verification, and Testing
Validation evidence demonstrates that the system meets its intended purpose and is safe under reasonably foreseeable conditions. This includes unit tests, integration tests, system validation, and scenario-based testing for edge cases. For robots, safety validation should cover human-robot interaction, fail-safe behaviors, and environmental hazards. For AI, this includes accuracy, robustness, calibration, and bias testing across representative datasets. For biotech software, clinical validation and performance evaluation under the MDR/IVDR are essential. In the EU, the AI Act requires robustness and accuracy testing, and conformity assessments for high-risk systems. In the US, FDA guidance for software as a medical device (SaMD) emphasizes clinical validation and real-world performance monitoring. In China, safety assessments for generative AI and algorithmic recommendations are mandatory. In Japan and Korea, validation aligned with national standards and international norms (e.g., IEC standards for robotics) is persuasive.
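A simplified sketch of such a release-gate check, assuming a binary classifier evaluated on a held-out validation set with NumPy arrays for labels, predicted probabilities, and subgroup membership (the thresholds are illustrative, not drawn from any regulation):

```python
import numpy as np

def validate_model(y_true, y_prob, groups, acc_floor=0.90, max_gap=0.05):
    """Release-gate check: overall accuracy, worst per-group accuracy gap,
    and a simple expected calibration error on a held-out validation set."""
    y_pred = (y_prob >= 0.5).astype(int)
    overall_acc = float(np.mean(y_pred == y_true))

    # Accuracy per subgroup, to surface uneven performance across populations.
    group_acc = {g: float(np.mean(y_pred[groups == g] == y_true[groups == g]))
                 for g in np.unique(groups)}
    worst_gap = overall_acc - min(group_acc.values())

    # Expected calibration error over 10 equal-width confidence bins.
    bins = np.clip((y_prob * 10).astype(int), 0, 9)
    ece = sum(np.mean(bins == b) *
              abs(float(np.mean(y_true[bins == b])) - float(np.mean(y_prob[bins == b])))
              for b in range(10) if np.any(bins == b))

    return {"accuracy": overall_acc,
            "group_accuracy": group_acc,
            "worst_group_gap": worst_gap,
            "expected_calibration_error": float(ece),
            "passes": overall_acc >= acc_floor and worst_gap <= max_gap}
```

The point is less the specific metrics than that the checks, thresholds, and results are recorded per release and can be produced later as evidence.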
Training Data Governance and Records
For AI systems, training data is often the crux of defectiveness claims. Records should document data provenance, labeling procedures, consent and licensing, data minimization, and measures to mitigate bias. In the EU, the AI Act’s data governance requirements mandate that training, validation, and testing data be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete. The GDPR imposes additional constraints on the use of personal data, and violations can be used as evidence of non-compliance. In the US, courts may consider whether the training data was appropriate for the intended use and whether the producer ignored known biases. In China, data localization and content safety requirements impose additional documentation burdens. In Japan and Korea, adherence to privacy laws and ethical guidelines is increasingly viewed as part of safety governance.
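A datasheet-style provenance record can tie these elements together for each dataset version; the structure below is an illustrative assumption rather than a prescribed format:

```python
# Illustrative provenance record for one dataset version; field names and
# file paths are hypothetical examples, not mandated by any regulation above.
dataset_record = {
    "dataset_id": "clinical-triage-v5",
    "collected": {"period": "2023-01..2023-09", "sources": ["site-A", "site-B"]},
    "legal_basis": {"consent_forms": "docs/consent-v3.pdf",
                    "licenses": ["internal"],
                    "dpia": "docs/dpia-2023-09.pdf"},
    "labeling": {"guideline": "docs/labeling-guide-v2.md",
                 "annotators": 4,
                 "inter_annotator_agreement": 0.87},
    "composition": {"n_records": 120_000,
                    "demographic_breakdown": "reports/composition-v5.csv"},
    "bias_checks": ["reports/subgroup-performance-v5.csv"],
    "known_limitations": ["Under-represents patients over 85"],
}
```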
Human Oversight and Operator Training
Records showing that operators were trained, that meaningful human oversight was available, and that safety instructions were clear can reduce liability, particularly for deployers. For high-risk AI, the EU AI Act requires effective human oversight; failure to provide it can be evidence of defect or negligence. In the US, inadequate warnings or training can support failure-to-warn claims. In biotech, operator competency records and informed consent documentation are critical.
Incident Reporting and Post-Market Surveillance
Post-market surveillance is a strong indicator of responsible governance. In the EU, the AI Act requires reporting of serious incidents; the MDR/IVDR impose vigilance reporting. In the US, FDA’s Medical Device Reporting (MDR) system requires manufacturers to report adverse events. In China, incident reporting obligations are expanding for AI services. In Japan and Korea, sectoral regulators encourage or require incident reporting. A documented process for investigating incidents, root cause analysis, and corrective actions demonstrates that the producer took safety seriously and can mitigate punitive damages.
Insurance Norms and Risk Transfer
Insurance is a critical component of liability management. Traditional product liability policies may cover bodily injury and property damage caused by a defective product, but exclusions for professional services, cyber incidents, and algorithmic errors are common. Specialized AI liability policies are emerging, often combining elements of cyber, tech E&O, and product liability. In the EU, the proposed AI Liability Directive’s presumption of causality may increase insurers’ exposure, prompting stricter underwriting and demands for governance evidence. In the US, insurers are narrowing coverage for autonomous systems and requiring detailed risk assessments. In the UK, product liability insurers increasingly ask for AI risk questionnaires. In China, state-backed insurance schemes may be required for high-risk applications. In Japan and Korea, insurers collaborate with standards bodies to define insurability criteria.
From a risk transfer perspective, contracts matter. Indemnity clauses, limitation of liability, and warranty terms can allocate risk between producers, integrators, and deployers. However, statutory strict liability regimes often cannot be fully contracted out of, particularly for consumer harm. Insurance should be aligned with the risk profile of the application, the regulatory classification, and the governance evidence available.
Design and Governance Patterns that Reduce Liability Exposure
Reducing liability is not only about reacting to incidents; it is about designing systems and governance processes that prevent harm and produce defensible evidence. The following patterns are consistently effective.
Risk-Based Classification and Documentation
Classify systems according to regulatory frameworks early (e.g., EU AI Act risk levels, FDA device classifications). Build documentation artifacts that match the classification. For high-risk systems, maintain a technical file, risk management file, and conformity assessment records. This creates a baseline for demonstrating compliance: following harmonized standards supports a presumption of conformity with the underlying safety legislation and strengthens the defense against defectiveness claims under the PLD.
Safety by Design and Human Oversight
Embed safety controls at the system level: fail-safes, redundancy, interpretable outputs, and clear user interfaces. For AI, implement confidence thresholds, abstention mechanisms, and escalation paths to human operators. For robots, ensure physical safety features and collision avoidance. Document that these controls were part of the design from the outset and validated. In the EU, the AI Act’s human oversight requirement is not a box to tick; it must be meaningful and demonstrable.
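As a sketch of what a demonstrable abstention-and-escalation path can look like in code (the threshold value and return structure are assumptions for illustration):

```python
def decide_with_oversight(prediction: str, confidence: float,
                          threshold: float = 0.85) -> dict:
    """Return the automated decision only above a confidence threshold;
    otherwise abstain and escalate to a human operator."""
    if confidence >= threshold:
        return {"decision": prediction, "actor": "system",
                "confidence": confidence}
    return {"decision": None, "actor": "human_review_queue",
            "confidence": confidence,
            "reason": "below confidence threshold; escalated for human oversight"}
```

Logging both branches, including who ultimately decided, is what turns the oversight mechanism into usable evidence.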
Data Governance and Bias Mitigation
Establish a data governance framework that covers collection, labeling, quality assurance, and bias detection. Use representative datasets and document the rationale for dataset composition. In the EU, the AI Act’s data governance obligations are explicit; in the US, ignoring known bias risks can be used to argue design defect. In China, content safety and ideological compliance are part of data governance. In Japan and Korea, alignment with privacy laws and ethical guidelines is expected.
Continuous Monitoring and Incident Response
Deploy monitoring tools that detect drift, anomalies, and safety-critical events. Maintain an incident response plan with clear roles, communication protocols, and root cause analysis procedures. In the EU, serious incident reporting is mandatory for high-risk AI; in the US, MDR reporting is required for medical devices; in China, incident reporting is expanding. Prompt, transparent reporting and corrective action can reduce punitive exposure.
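One common drift heuristic is the Population Stability Index (PSI), comparing the live feature distribution against the training-time reference; the 0.25 action threshold used below is an industry rule of thumb, not a regulatory requirement:

```python
import numpy as np

def population_stability_index(reference, current, n_bins=10):
    """PSI between the training-time reference distribution of a feature
    and its live distribution; a common drift heuristic."""
    edges = np.quantile(reference, np.linspace(0.0, 1.0, n_bins + 1))
    # Clip both samples into the reference range so every value falls in a bin.
    ref_frac = np.histogram(np.clip(reference, edges[0], edges[-1]), bins=edges)[0] / len(reference)
    cur_frac = np.histogram(np.clip(current, edges[0], edges[-1]), bins=edges)[0] / len(current)
    ref_frac = np.clip(ref_frac, 1e-6, None)   # avoid log of zero
    cur_frac = np.clip(cur_frac, 1e-6, None)
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))

# Example: open an incident record when drift exceeds the action threshold.
reference = np.random.normal(0.0, 1.0, 5000)
live = np.random.normal(0.4, 1.0, 5000)
psi = population_stability_index(reference, live)
if psi > 0.25:
    print(f"Drift detected (PSI={psi:.2f}): trigger incident response and model review")
```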
Change Control with Risk Assessment
Require risk assessments for all changes, especially model updates and dataset refreshes. Use staged rollouts, shadow testing, and canary deployments. Document approvals and test results. This practice is particularly important in biotech, where changes can affect clinical performance and regulatory compliance.
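A minimal sketch of a canary gate that decides whether a staged rollout proceeds, holds, or rolls back, assuming a single error-rate metric for simplicity (real deployments typically track several safety-relevant metrics, and the thresholds here are illustrative):

```python
def canary_gate(metrics_canary: dict, metrics_baseline: dict,
                max_error_increase: float = 0.01,
                min_samples: int = 1_000) -> str:
    """Compare canary and baseline error rates and return a rollout decision."""
    delta = metrics_canary["error_rate"] - metrics_baseline["error_rate"]
    if delta > max_error_increase:
        return "rollback"   # canary underperforms: revert and record findings
    if metrics_canary["sample_size"] < min_samples:
        return "hold"       # not enough evidence yet: keep traffic share unchanged
    return "promote"        # expand rollout to the next traffic tier

# Example: a model update rolled back because the canary error rate regressed.
decision = canary_gate({"error_rate": 0.034, "sample_size": 2_500},
                       {"error_rate": 0.021, "sample_size": 50_000})
print(decision)  # -> "rollback"
```

Each gate decision, together with the metrics it was based on, belongs in the change record described above.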
Supplier and Integrator Management
For systems that integrate third-party components (e.g., foundation models, sensors), conduct due diligence, require warranties, and ensure access to necessary documentation. In the EU, the PLD allows injured parties to claim from any producer in the supply chain; robust supplier contracts and insurance can mitigate downstream risk. In the US, liability for a defective component can reach the final-product manufacturer, and an entity that holds itself out as the manufacturer may be treated as one under the apparent-manufacturer doctrine.
Regulatory Alignment and Standards
Adopt harmonized standards where available (e.g., ISO/IEC standards for AI risk management, IEC standards for robotics safety, ISO 13485 for medical devices). In the EU, compliance with harmonized standards creates a presumption of conformity with the relevant safety legislation and is strong evidence against defectiveness under the PLD. In the US, adherence to recognized standards supports a “reasonable care” argument. In China, compliance with national standards and mandatory assessments is essential. In Japan and Korea, voluntary standards are often treated as de facto benchmarks.
Region-Based Decision Guide for Deployment Risk
When deploying AI, biotech software, or robots across jurisdictions, risk decisions should be anchored in the local liability landscape, evidence expectations, and insurance norms. The following guide synthesizes practical considerations.
European Union
Deployers and producers should assume strict liability for defects under the PLD, with potential presumptions of defectiveness if safety regulatory obligations are not met. The AI Act imposes obligations that become evidence benchmarks; non-compliance increases liability risk. Maintain comprehensive technical documentation, record-keeping, and post-market surveillance. Insurance should cover product liability and be reviewed for AI exclusions. Contractual indemnities are useful but cannot fully displace statutory liability. Prioritize human oversight and robust logging. Expect courts to rely heavily on regulatory compliance as a standard of care.
United States
Expect a mix of strict product liability and negligence claims, with state-by-state variations. FDA regulations for medical devices can preempt certain state claims but also set a high bar for validation and reporting. Emphasize rigorous testing, clear warnings, and training. Insurance should be tailored to cover algorithmic risks and cyber events. Contractual risk allocation is more effective than in the EU, but consumer harm remains difficult to contract away. Document that design choices reflected a reasonable risk-utility balance.
United Kingdom
Strict liability under the Consumer Protection Act 1987 applies, with negligence claims available for certain losses. The UK’s AI regulatory approach is principles-based and sectoral; compliance with guidance and standards will be persuasive. Insurance is critical; ensure product liability coverage addresses AI-related harms. Contracts can allocate commercial risk, but statutory liability for defects remains. Maintain evidence of conformity with relevant safety standards.
China
Strict liability applies, with strong administrative oversight and data localization requirements. Algorithmic and generative AI regulations impose disclosure and safety obligations. Expect regulators to demand detailed evidence of data governance, content safety, and risk assessments. Insurance requirements may be mandated for certain applications. Contracts are subject to state policy priorities; non-compliance can lead to severe penalties that influence civil liability. Prioritize compliance with national standards and maintain meticulous records.
Japan and Korea
Strict liability for defects exists, with negligence claims also available. Courts look for adherence to safety standards and ethical guidelines. Insurance markets are developing specialized AI coverage; early engagement with insurers is advisable. Contracts can allocate risk, but statutory liability cannot be fully avoided. Emphasize validation aligned with national and international standards, robust logging, and post-market monitoring. In biotech, compliance with PMDA (Japan) or MFDS (Korea) requirements is essential.
Practical Evidence Checklist for Legal Readiness
To be prepared for potential claims, organizations should maintain a living evidence package that includes:
- Technical documentation and risk management files aligned to regulatory classification.
- Comprehensive, tamper-evident logs of system behavior, inputs, outputs, and environmental conditions.
- Version-controlled change records with risk assessments, test results, and approvals.
- Validation and testing evidence, including scenario-based and bias testing results.
- Training data provenance, governance, and labeling records.
- Operator training, human oversight, and warning/instruction documentation.
- Incident reports, root cause analyses, and corrective action records.
- Insurance policies and contractual risk allocation terms mapped to the deployment’s risk profile.