Liability When Biotech Software Fails: Diagnostics, Decision Support, and Harm
When a software system designed to assist in medical diagnostics or therapeutic decision-making fails, the consequences can be immediate and severe. Unlike a mechanical failure in a surgical robot or a defective implant, the failure of a diagnostic algorithm or a clinical decision support system (CDSS) often manifests as an erroneous recommendation, a missed pattern, or a biased output that leads to incorrect treatment, delayed diagnosis, or unnecessary medical intervention. The central question for professionals in biotechnology, healthcare, and regulatory affairs is not merely what went wrong technically, but who bears legal responsibility for the resulting harm. The European legal landscape for such scenarios is a complex tapestry woven from product liability principles, medical device regulations, data protection laws, and national civil codes. Understanding this landscape requires dissecting the roles of various actors, the nature of the “product” in question, and the specific obligations imposed by the EU’s evolving regulatory frameworks.
The Regulatory Ecosystem: Defining the “Product” and the “Provider”
The first step in any liability analysis is to determine the applicable legal regime. For biotech software, the primary framework is the Medical Devices Regulation (MDR) (EU) 2017/745 and the In Vitro Diagnostic Medical Devices Regulation (IVDR) (EU) 2017/746. These regulations define the scope of what constitutes a medical device and, crucially, include software within that definition.
Software as a Medical Device (SaMD)
Under the MDR and IVDR, software intended to be used for a “medical purpose” as defined in Article 2(1) of the MDR is a medical device. This includes software that analyzes medical data, such as images from an MRI, genomic sequences, or patient vitals, to support diagnosis or treatment decisions. The critical distinction is between software that is an integral part of a hardware device (e.g., the software controlling an infusion pump) and standalone software that performs a medical function on its own. The latter, often termed Software as a Medical Device (SaMD), falls squarely under the MDR/IVDR.
The regulations classify devices based on risk, from Class I (low risk) to Class III (high risk). Most diagnostic and decision-support software will be classified as Class IIa, IIb, or III, depending on the severity of the condition it addresses and the impact of its decisions. Under Rule 11 of the MDR, software that provides information used to take diagnostic or therapeutic decisions is classified as at least Class IIa, rising to Class IIb where a wrong decision could cause a serious deterioration of health or a surgical intervention, and to Class III where it could cause death or an irreversible deterioration of health. This classification is not merely administrative; it dictates the conformity assessment procedure (involving a Notified Body for higher classes), the quality management system requirements, and the post-market surveillance obligations. The manufacturer’s obligations under the MDR/IVDR are stringent and non-negotiable. They must demonstrate the safety and performance of the device, conduct a clinical evaluation, establish a quality management system, and monitor the device throughout its lifecycle.
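As an illustration only, the decision logic of Rule 11 can be written out as a small function. The function and enum names below are invented for this sketch; real classification must rest on the regulation text and guidance such as MDCG 2019-11.

```python
from enum import Enum

class Impact(Enum):
    """Worst-case impact of a decision informed by the software (MDR Rule 11)."""
    OTHER = 1
    SERIOUS_OR_SURGICAL = 2       # serious deterioration of health or surgical intervention
    DEATH_OR_IRREVERSIBLE = 3     # death or irreversible deterioration of health

def classify_rule_11(informs_diagnosis_or_therapy: bool,
                     monitors_vital_parameters_with_immediate_danger: bool,
                     worst_case_impact: Impact) -> str:
    """Rough sketch of MDR Annex VIII, Rule 11 for standalone software.

    This is a reading aid, not a compliance tool: real classification depends
    on all of the classification rules, applicable guidance, and, for higher
    classes, Notified Body review.
    """
    if informs_diagnosis_or_therapy:
        if worst_case_impact is Impact.DEATH_OR_IRREVERSIBLE:
            return "Class III"
        if worst_case_impact is Impact.SERIOUS_OR_SURGICAL:
            return "Class IIb"
        return "Class IIa"
    if monitors_vital_parameters_with_immediate_danger:
        return "Class IIb"
    # Simplified: other physiological-process monitoring would be Class IIa;
    # all remaining software falls into Class I.
    return "Class I"

# Example: decision support whose wrong output could cause irreversible
# deterioration of health lands in Class III.
print(classify_rule_11(True, False, Impact.DEATH_OR_IRREVERSIBLE))
```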
Provider vs. Deployer: A Critical Distinction
The MDR/IVDR primarily regulate the “manufacturer” (the entity developing and placing the device on the market) and the “authorized representative.” However, in the operational context of a hospital or clinic, other actors emerge: the “deployer” (the healthcare institution integrating the software into its clinical workflows) and the “operator” (the clinician using the software to make decisions). The liability chain connects these roles.
Provider (Manufacturer): This is the entity that designs, develops, and commercializes the software. Under the MDR, the manufacturer is responsible for ensuring the device is safe and performs as intended. This includes addressing risks associated with reasonably foreseeable misuse and ensuring the software is robust against known cybersecurity threats. If a diagnostic algorithm is trained on biased data, leading to lower accuracy for certain demographic groups, the manufacturer is likely the primary target for liability, as the defect lies in the fundamental design or validation of the device.
Deployer (Healthcare Institution): The hospital or clinic that procures and implements the software is the deployer. While not the manufacturer, the deployer has its own set of obligations. Under the MDR, they must use the device in accordance with its intended purpose and the manufacturer’s instructions. They are also responsible for ensuring that the device is properly maintained and that their staff is adequately trained. If a hospital uses a diagnostic tool for a purpose not indicated by the manufacturer, or fails to update the software, leading to a failure, the deployer may share liability. The EU’s AI Act introduces specific obligations for “deployers” of high-risk AI systems, including, in certain cases, conducting a fundamental rights impact assessment and ensuring human oversight.
Operator (Clinician): The individual healthcare professional using the software is the operator. Their liability is generally governed by national tort law and professional standards of care. A clinician is not expected to understand the algorithmic intricacies of the software, but they are expected to exercise professional judgment. If a clinician blindly follows a clearly erroneous software recommendation without applying their own expertise, they may be found negligent. Conversely, if the software provided a misleading output that would deceive a reasonably competent professional, the liability shifts back to the provider or deployer.
Grounds for Liability: Defects, Negligence, and Breach of Duty
Liability for harm caused by biotech software can arise from several legal grounds, often overlapping. The most relevant are product liability based on a “defect,” liability for negligence, and breach of regulatory duties.
Product Liability and the “Defect”
The EU’s Product Liability Directive (PLD) 85/374/EEC (in the process of being replaced by the revised Product Liability Directive (EU) 2024/2853) establishes a strict liability regime for defective products. This means a victim does not need to prove negligence by the manufacturer, only that the product was defective and that the defect caused the harm. For software, defining a “defect” is nuanced. The PLD considers a product defective when it does not provide the safety which a person is entitled to expect, taking all circumstances into account. These circumstances include the presentation of the product, the use reasonably expected of it, and the time it was put into circulation.
For AI-driven software, a defect could manifest in several ways:
- Design Defect: The core algorithm is flawed, or the training data is biased, leading to systematic errors. For instance, a diagnostic model trained predominantly on data from one ethnic group may be defective for others (a minimal subgroup audit of this kind of disparity is sketched after this list).
- Manufacturing Defect: This is less common for pure software but could occur if a specific version of the software deployed at a hospital is corrupted or differs from the validated version.
- Information Defect (Failure to Warn): The software is accompanied by inadequate instructions for use, insufficient warnings about its limitations, or unclear information about the level of confidence in its output. The “intended purpose” defined by the manufacturer is critical here. If the manufacturer markets a tool for “research use only” but it is used for diagnosis, the liability may shift.
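To make the design-defect point concrete, a provider (or a deployer performing due diligence) might audit model performance per demographic subgroup. The sketch below is a minimal, hypothetical example: the record format, the 0.85 sensitivity threshold, and the group labels are assumptions, not figures drawn from any regulation or product.

```python
from collections import defaultdict

def sensitivity_by_group(records, min_sensitivity=0.85):
    """Compute per-subgroup sensitivity (true-positive rate) and flag groups
    that fall below a chosen acceptance threshold.

    `records` is an iterable of (group, y_true, y_pred) tuples with binary
    labels; the 0.85 threshold is purely illustrative, since real acceptance
    criteria come from the clinical evaluation plan.
    """
    tp = defaultdict(int)
    fn = defaultdict(int)
    for group, y_true, y_pred in records:
        if y_true == 1:
            if y_pred == 1:
                tp[group] += 1
            else:
                fn[group] += 1

    report = {}
    for group in set(tp) | set(fn):
        positives = tp[group] + fn[group]
        sens = tp[group] / positives
        report[group] = {"sensitivity": round(sens, 3),
                         "n_positives": positives,
                         "below_threshold": sens < min_sensitivity}
    return report

# Illustrative data: the model misses far more positives in group "B" --
# the kind of disparity that could ground a design-defect argument.
data = [("A", 1, 1)] * 90 + [("A", 1, 0)] * 10 + \
       [("B", 1, 1)] * 60 + [("B", 1, 0)] * 40
print(sensitivity_by_group(data))
```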
A key challenge with AI is the “black box” problem. If the cause of a failure cannot be pinpointed due to the complexity of the model, does this preclude a finding of defect? The legal consensus is no. The focus remains on the output and the safety expectations. If the system consistently produces unsafe outputs, it is likely defective, regardless of whether the exact internal reasoning is understood.
Negligence and Professional Duty of Care
Beyond strict product liability, liability can arise from negligence. This requires a breach of a duty of care. For the manufacturer, this aligns with the obligations under the MDR/IVDR. Failure to conduct a proper clinical evaluation, to implement adequate cybersecurity measures, or to provide clear instructions can constitute negligence. The MDR explicitly requires manufacturers to establish, document, implement, and maintain a post-market surveillance system. Failure to act on post-market data indicating a performance issue can be a powerful basis for liability.
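As a purely illustrative sketch of what acting on post-market data can look like in practice, the function below compares field performance against a claimed sensitivity over a rolling window; the claimed figure, window size, and tolerance are placeholder assumptions that a real post-market surveillance plan would define.

```python
def check_field_sensitivity(confirmed_cases, claimed_sensitivity=0.92,
                            window=200, tolerance=0.05):
    """Flag a potential performance issue from post-market data.

    `confirmed_cases` is a chronologically ordered list of booleans: True if
    the device flagged a later-confirmed positive case, False if it missed it.
    The claimed sensitivity, window size and tolerance are placeholders; in a
    real system they would come from the clinical evaluation and the
    post-market surveillance plan.
    """
    recent = confirmed_cases[-window:]
    if len(recent) < window:
        return {"status": "insufficient data", "n": len(recent)}

    observed = sum(recent) / len(recent)
    breach = observed < claimed_sensitivity - tolerance
    return {
        "status": "investigate" if breach else "ok",
        "observed_sensitivity": round(observed, 3),
        "claimed_sensitivity": claimed_sensitivity,
    }

# A field signal like this should feed the vigilance and corrective-action
# process rather than being silently logged.
print(check_field_sensitivity([True] * 160 + [False] * 40))
```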
For the deployer (hospital), negligence can arise from a failure in their organizational processes. This includes:
- Inadequate validation of the software before procurement.
- Failure to train staff on the software’s capabilities and limitations.
- Integrating the software into workflows in a way that circumvents necessary human checks.
For the operator (clinician), negligence is the failure to meet the standard of a reasonably competent peer. This is a high bar. A clinician is not an IT expert. However, they are expected to recognize “red flags” or outputs that are clinically implausible. The legal tension lies in balancing the trust placed in advanced technology with the professional duty to remain the ultimate decision-maker.
Breach of Regulatory Duties as a Basis for Liability
Increasingly, a breach of specific regulatory duties under the MDR, IVDR, or the AI Act can serve as evidence of fault or even give rise to a presumption of defectiveness. The AI Act introduces harmonized rules for AI systems, including those used in healthcare. For high-risk AI systems (which most diagnostic and decision-support systems will be), the Act imposes strict requirements regarding data quality, transparency, human oversight, robustness, and accuracy. A failure to meet these requirements could be used in court to establish liability. For example, if a provider fails to ensure that training data is relevant, sufficiently representative and, to the best extent possible, “free of errors and complete”, as the AI Act requires, and this leads to a diagnostic failure, that breach is strong evidence of a design defect.
Practical Scenarios and Allocation of Liability
To illustrate how these principles work in practice, consider two common scenarios in biotech software deployment.
Scenario 1: The AI-Powered Radiology Assistant
A hospital deploys an AI software tool (Class IIb medical device) that flags potential nodules on chest X-rays for further review by a radiologist. The software is designed to increase efficiency and reduce missed diagnoses. A false negative occurs: the AI fails to flag a malignant nodule, and the radiologist, reviewing a high volume of scans, also misses it. The patient’s cancer progresses.
Analysis:
- Provider (AI Company): The primary question is whether the AI’s performance met the claims made in its clinical evaluation and whether it was safe. If the AI’s sensitivity was below the state of the art or if the training data was not representative of the hospital’s patient population, the provider could be liable for a design defect (a minimal representativeness check is sketched after this analysis). The provider must also be able to show that it gave adequate instructions on the need for human oversight.
- Deployer (Hospital): The hospital must demonstrate it procured a validated device, provided adequate training to radiologists on its use and limitations, and established a workflow that allows for effective human-AI collaboration. If the hospital’s workflow pressured radiologists to spend only a few seconds on each scan, relying heavily on the AI, the hospital’s organizational choices could be a contributing cause of the harm.
- Operator (Radiologist): The radiologist is expected to apply their professional judgment. If the nodule was clearly visible and the radiologist simply failed to look properly, they may be negligent. However, if the AI’s false negative created an “automation bias” that made the radiologist less critical, the situation is more complex. The legal outcome would likely depend on expert testimony on the standard of care and the influence of the AI tool.
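One hypothetical way to probe the representativeness question raised above is to compare the demographic mix of the training or validation cohort against the deploying hospital’s patient population. Everything in the sketch below, including the ten-percentage-point threshold and the age bands, is an assumption for illustration.

```python
def representativeness_gaps(training_mix, local_mix, max_gap=0.10):
    """Compare the demographic composition of the training/validation cohort
    with the deploying hospital's patient population.

    Both arguments map subgroup labels to proportions summing to roughly 1.0;
    the 10-percentage-point gap threshold is an illustrative assumption, not
    a regulatory figure.
    """
    gaps = {}
    for group in set(training_mix) | set(local_mix):
        gap = abs(training_mix.get(group, 0.0) - local_mix.get(group, 0.0))
        gaps[group] = {"gap": round(gap, 3), "concern": gap > max_gap}
    return gaps

# Made-up proportions: the local population is older than the cohort the
# model was validated on, which should prompt further local evaluation.
training = {"under_40": 0.10, "40_to_65": 0.55, "over_65": 0.35}
local    = {"under_40": 0.05, "40_to_65": 0.35, "over_65": 0.60}
print(representativeness_gaps(training, local))
```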
Scenario 2: Genomic Software for Personalized Medicine
A software platform analyzes a patient’s genomic data to recommend specific cancer therapies. The software recommends a therapy based on a genetic marker. However, the software’s knowledge base is outdated and fails to account for new research indicating that the marker is no longer a reliable predictor for that specific cancer type. The patient receives an ineffective therapy with severe side effects.
Analysis:
- Provider: This is a clear case of a post-market surveillance failure. The provider has a continuous obligation to update the software and its underlying knowledge base to reflect the state of the art; failure to do so renders the device defective. The case for liability is strong here, as the “information defect” is evident: the instructions and output were outdated (a minimal staleness guard is sketched after this analysis).
- Deployer: The hospital may be liable if it was aware of the software’s limitations or if it failed to have a process for verifying therapeutic recommendations against current clinical guidelines. However, in the fast-moving field of genomics, it is reasonable for a hospital to rely on the software provider to maintain the system’s accuracy.
- Operator (Oncologist): The oncologist has a duty to stay informed. If the outdated recommendation contradicted the oncologist’s own knowledge, they should have questioned it. However, if the software presented the recommendation with a high degree of confidence and the oncologist had no reason to doubt it, the primary fault lies with the provider.
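A simple technical guard that could have limited the harm in this scenario is a staleness check on each biomarker-to-therapy rule. The data structure, marker names, and one-year review interval below are hypothetical; they illustrate the idea of refusing to present unreviewed evidence as current rather than describing any real product.

```python
from datetime import date, timedelta

# Hypothetical knowledge-base entries: biomarker -> recommended therapy plus
# the date the supporting evidence was last reviewed.
KNOWLEDGE_BASE = {
    "MARKER_X": {"therapy": "therapy_a", "last_reviewed": date(2022, 3, 1)},
    "MARKER_Y": {"therapy": "therapy_b", "last_reviewed": date(2025, 1, 15)},
}

MAX_AGE = timedelta(days=365)  # illustrative review interval, not a standard

def recommend(marker: str, today: date = None):
    """Return a recommendation, refusing to present stale evidence as current."""
    today = today or date.today()
    entry = KNOWLEDGE_BASE.get(marker)
    if entry is None:
        return {"status": "no recommendation", "marker": marker}
    if today - entry["last_reviewed"] > MAX_AGE:
        # Surfacing staleness to the clinician is part of the "information"
        # the device provides; silently returning the old rule is not.
        return {"status": "stale evidence - manual review required",
                "marker": marker, "last_reviewed": str(entry["last_reviewed"])}
    return {"status": "ok", "therapy": entry["therapy"]}

print(recommend("MARKER_X", today=date(2025, 6, 1)))
```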
Risk-Reducing Design and Operational Choices
For professionals developing or deploying these systems, mitigating liability risk is intertwined with engineering for safety and compliance. The regulatory frameworks provide a roadmap for what constitutes responsible practice.
For Providers: Building a Defensible Product
The most effective defense against liability is a robust and well-documented development and validation process. This is not just a technical exercise but a legal one.
- Transparency and Explainability: While not always legally mandated, providing users with an understanding of the AI’s reasoning (Explainable AI – XAI) can mitigate risk. If a clinician understands why the software is making a recommendation, they are better equipped to validate it. The AI Act explicitly requires that high-risk AI systems be sufficiently transparent for deployers to interpret their output and use it appropriately.
- Rigorous Clinical Evaluation: The MDR/IVDR require clinical evidence to demonstrate safety and performance. This must be based on a clinical development plan that is methodologically sound. The data used for training, validation, and testing must be representative and of high quality. Documenting these steps meticulously is crucial for a defense.
- Human-in-the-Loop Design: Design the software to be a tool that supports, not replaces, human judgment. This means providing clear confidence scores, highlighting uncertainty, and designing user interfaces that encourage critical review rather than passive acceptance (one possible pattern is sketched after this list). The AI Act mandates that high-risk AI systems allow for effective human oversight.
- Cybersecurity by Design: A failure caused by a cyberattack can be considered a product defect if foreseeable and preventable. Security must be integrated from the outset, not bolted on as an afterthought.
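As one possible pattern for the human-in-the-loop point above, a system can attach a confidence score to each output and force low-confidence cases into mandatory human review. The field names and the 0.80 threshold in this sketch are assumptions; a real threshold would need calibrated probabilities and clinical validation.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    patient_id: str
    label: str         # e.g. "nodule suspected"
    confidence: float  # assumed to be a calibrated probability in [0, 1]

REVIEW_THRESHOLD = 0.80  # illustrative; a real threshold needs clinical validation

def present_finding(finding: Finding) -> dict:
    """Shape the output so uncertainty is visible and low-confidence results
    cannot be accepted without an explicit human sign-off."""
    needs_review = finding.confidence < REVIEW_THRESHOLD
    return {
        "patient_id": finding.patient_id,
        "label": finding.label,
        "confidence": finding.confidence,
        "display": f"{finding.label} (confidence {finding.confidence:.0%})",
        "requires_mandatory_review": needs_review,
        "auto_accept_allowed": False,  # the clinician always confirms
    }

print(present_finding(Finding("P-0042", "nodule suspected", 0.64)))
```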
For Deployers: Implementing a Safe Environment
Deployers must treat the introduction of advanced software as a significant organizational change, not just a procurement exercise.
- Due Diligence in Procurement: Hospitals must scrutinize the technical documentation, clinical evidence, and regulatory compliance of any software before purchase. They should ask for evidence of the manufacturer’s quality management system and post-market surveillance plans.
- Validation and Governance: Before going live, the software should be validated in the specific local environment. A governance framework should be established to monitor the software’s performance, manage incidents, and decide when to update or decommission the tool (a minimal acceptance-gate sketch follows this list).
- Continuous Training: Training should not be a one-time event. It must be ongoing, covering not just the “how-to” but also the “what-if”—the known limitations, failure modes, and the importance of human oversight. This is a core requirement of the AI Act for deployers of high-risk systems.
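A deployer’s go-live decision can be captured as an explicit, documented acceptance gate evaluated on a locally collected test set. The metric names and figures below are placeholders that a governance committee would have to define for itself; the sketch only shows the shape of such a gate.

```python
# Hypothetical acceptance criteria agreed before go-live; none of these
# figures come from a regulation or standard.
ACCEPTANCE_CRITERIA = {
    "sensitivity": 0.90,
    "specificity": 0.80,
    "cases_evaluated": 300,
}

def go_live_decision(local_results: dict) -> dict:
    """Compare locally measured performance against the agreed criteria and
    record which criteria failed, so the decision is documented and auditable."""
    failures = [name for name, minimum in ACCEPTANCE_CRITERIA.items()
                if local_results.get(name, 0) < minimum]
    return {
        "approved": not failures,
        "failed_criteria": failures,
        "results": local_results,
    }

print(go_live_decision({"sensitivity": 0.93,
                        "specificity": 0.77,
                        "cases_evaluated": 320}))
```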
The Evolving Landscape: The AI Act and the New Product Liability Directive
The European regulatory landscape is in a significant state of transition, which will further clarify and potentially expand liability avenues.
The AI Act’s Impact
The AI Act harmonizes rules for AI systems across the EU. For biotech software, its impact is profound. By classifying most diagnostic and CDSS as high-risk, the Act imposes a suite of pre-market and post-market obligations. A violation of the AI Act’s requirements can be used as evidence of non-compliance with the “state of the art” in a liability claim. The Act also explicitly addresses the “black box” issue by requiring that high-risk AI systems be designed to enable oversight and ensure transparency. This will make it harder for providers to hide behind algorithmic complexity.
The New Product Liability Directive
The EU has adopted a revised Product Liability Directive (Directive (EU) 2024/2853) to replace the 1985 Directive. This update is crucial for software and AI. It explicitly brings software, including AI systems, within the definition of “product.” It also introduces presumptions of defectiveness or causation under certain conditions, such as when a defendant fails to comply with mandatory safety requirements (like those in the AI Act or MDR) or when the claimant faces excessive difficulty in proving defectiveness or causation due to technical or scientific complexity. This significantly lowers the burden of proof for victims of AI-related harm. The new rules also make clear that compensable “damage” includes medically recognised harm to psychological health and the destruction or corruption of data, broadening the scope of compensable losses.
National Nuances and Cross-Border Considerations
While EU regulations provide a harmonized framework, the actual litigation and liability allocation are determined at the national level. The PLD is a directive, meaning it was transposed into national law by each Member State, leading to some variation in procedural rules and optional features such as liability caps. The revised Directive removes several of these national options and harmonizes the regime more closely, but it still requires national transposition, and national tort law principles will continue to govern the specifics of fault and causation.
For example, in Germany, the concept of Verkehrspflichten (duties of care) is highly developed and will be applied to assess the responsibilities of deployers and operators. In France, the concept of responsabilité du fait des produits défectueux has its own jurisprudential nuances. In the UK (post-Brexit), the approach is diverging, although the Consumer Protection Act 1987, which implemented the original PLD, remains in force and the UK operates its own medical devices regime. Professionals operating across Europe must be aware that while the core product safety principles are harmonized, the path to defending or pursuing a claim still runs through national courts, national procedural rules, and national concepts of fault and causation.
