
Shared Responsibility: When Multiple Parties Contribute to Harm

Allocating responsibility for harm caused by artificial intelligence systems is one of the most complex challenges facing European legal and technical frameworks. Unlike traditional software, AI systems are often composed of a heterogeneous stack of models, data, infrastructure, and interfaces, assembled and operated by multiple independent actors. When a harmful outcome occurs—whether a discriminatory decision, a physical accident, or a financial loss—the chain of causality is rarely linear. It typically involves a model provider, a system integrator, a downstream deployer, and sometimes an end-operator. The European Union’s approach to this problem, particularly under the AI Act, introduces a layered liability regime that attempts to untangle these dependencies while ensuring victims are not left without recourse. This article examines how responsibility is allocated across these roles, what documentation mechanisms are necessary to evidence compliance, and how national courts may interpret these obligations in practice.

The Multi-Actor Architecture of AI Systems

Understanding responsibility begins with mapping the actors and their functions. In the European regulatory context, the AI Act (Regulation (EU) 2024/1689) defines several key roles, but these do not always align neatly with commercial or operational realities. A typical high-risk AI system in, for example, recruitment or credit scoring may involve: a provider who develops the model or the system; an integrator who combines the model with other components to create a functional application; a deployer (often the user) who puts the system into operation for a specific purpose; and an operator who interacts with it daily. In some cases, a distributor or importer may also be involved.

The AI Act defines the provider as the entity that develops an AI system or a general-purpose AI (GPAI) model with a view to placing it on the market or putting it into service under its own name or trademark. This is a critical definition because it triggers the majority of compliance obligations, including conformity assessments, risk management systems, and post-market monitoring. However, many real-world systems are built on top of third-party models. A company using a licensed large language model (LLM) to build a customer service chatbot is, in most cases, the provider of that specific AI system, even though it did not train the underlying model. The model provider remains a provider of the GPAI model, but the system provider bears the burden of ensuring the composite system meets the requirements for its intended use.

The deployer is defined as any natural or legal person using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity. This is a broad category that includes public authorities, banks, insurers, and employers. The deployer’s obligations are generally lighter than the provider’s, focusing on human oversight, input data quality, and monitoring. However, the deployer can become a de facto responsible party if it significantly modifies the system’s intended purpose or fails to use it in accordance with the provider’s instructions.

Crucially, the AI Act does not explicitly define an integrator as a separate legal category. In practice, an integrator is often a provider of a high-risk system who has combined a third-party model with other software, hardware, and business logic. This lack of a distinct legal term can create ambiguity. For example, if a hospital integrates a third-party diagnostic model into its patient management system, is it a provider or a deployer? The answer depends on whether the hospital is placing the resulting system on the market or using it internally. If the hospital customizes the model and offers it to other clinics, it likely becomes a provider. If it only uses it for its own patients, it is a deployer. This distinction is not merely semantic; it determines the allocation of liability under the AI Act and related liability directives.
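The role-assignment logic described above can be summarized in a short decision sketch. The following Python fragment is a minimal illustration, not a legal test: the class, function, and input flags are hypothetical simplifications of the provider/deployer criteria discussed here (Articles 3 and 25 of the AI Act), and real classification always depends on the full facts of the case.

```python
from dataclasses import dataclass

@dataclass
class Organisation:
    """Facts relevant to role assignment under the AI Act (simplified, illustrative)."""
    develops_system: bool          # develops the AI system or has it developed
    places_on_market: bool         # offers it under its own name or trademark
    substantially_modifies: bool   # re-purposes or substantially modifies a system
    uses_internally: bool          # operates the system under its own authority
    personal_use_only: bool        # personal, non-professional activity

def classify_role(org: Organisation) -> str:
    """Hypothetical, simplified mapping of facts to AI Act roles."""
    if org.personal_use_only:
        return "out of scope (personal, non-professional use)"
    # Developing a system and supplying it under one's own name -> provider.
    if org.develops_system and org.places_on_market:
        return "provider"
    # Substantially modifying a third-party system and offering the result
    # to others can turn a deployer into the provider of the modified system.
    if org.substantially_modifies and org.places_on_market:
        return "provider (of the modified system)"
    if org.uses_internally:
        return "deployer"
    return "unclear - obtain legal advice"

# Hospital example from the text: fine-tunes a third-party diagnostic model
# and offers it to other clinics -> likely a provider.
hospital = Organisation(develops_system=False, places_on_market=True,
                        substantially_modifies=True, uses_internally=True,
                        personal_use_only=False)
print(classify_role(hospital))  # -> "provider (of the modified system)"
```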

The EU Regulatory Framework: A Dual-Track Approach

European regulation of AI liability operates on two parallel tracks: the AI Act (a horizontal regulation setting safety requirements) and the Liability Directive (a proposal for harmonized rules on compensation for damage involving AI systems). These two instruments are designed to complement each other, but they address different stages of the legal process. The AI Act is primarily a preventive framework, establishing obligations before a system is placed on the market. The Liability Directive (and the revised Product Liability Directive) is a remedial framework, governing how victims can claim compensation after harm has occurred.

The AI Act’s Obligations and the Chain of Responsibility

Under the AI Act, the provider of a high-risk AI system must implement a risk management system, establish data governance practices, draw up technical documentation, and ensure the system undergoes a conformity assessment. The provider must also establish a quality management system and report serious incidents to national authorities. These obligations are intended to create a verifiable trail of due diligence. Full compliance reduces the provider's exposure to civil liability, but it does not eliminate it. The AI Act is a regulatory framework, not a shield against civil liability.

For deployers, the obligations are focused on ensuring human oversight, monitoring the system for risks, and using it in accordance with the instructions. A deployer who fails to do so may be held liable for negligence. For example, if a bank uses a credit scoring AI and ignores the provider’s warnings about data bias, and this results in discriminatory outcomes, the bank could be held responsible for failing to exercise proper oversight.

The AI Act also introduces specific rules for GPAI models. Providers of such models must provide documentation, including a summary of the content used for training, and ensure compliance with copyright law. If a GPAI model is considered to present a systemic risk, additional obligations apply, such as conducting model evaluations and reporting incidents. This is relevant for the responsibility chain because a downstream provider building a high-risk system on a GPAI model may rely on the model provider’s documentation to meet their own obligations. If that documentation is inadequate, both parties may share responsibility for any resulting harm.

The Liability Directive and the Presumption of Causality

The proposed Liability Directive addresses a key challenge in AI-related litigation: proving causality. In traditional tort law, a victim must prove that a specific action (or omission) by a defendant caused their damage. With AI, this can be extremely difficult, especially with complex, non-deterministic systems. The Liability Directive introduces a presumption of causality under certain conditions. If a victim can show that a defendant failed to comply with a relevant obligation under the AI Act (e.g., failed to conduct a conformity assessment) and that this failure likely contributed to the harm, the burden of proof shifts to the defendant to demonstrate that their failure was not the cause.

This presumption is not automatic. The victim must first demonstrate that the AI system behaved in a way that caused the damage and that the provider or deployer failed to meet their regulatory duties. It is a procedural tool designed to level the playing field, not a strict liability regime.

The revised Product Liability Directive (PLD) further expands the scope of “product” to include AI systems and digital manufacturing files. It also introduces rules on the liability of providers for updates and modifications. If a deployer modifies an AI system in a way that creates a new risk, the original provider may be released from liability, and the deployer may become the responsible party. This is particularly relevant for integrators who fine-tune or adapt third-party models.

Allocating Responsibility in Practice: Scenarios and Interpretations

To understand how these rules work in practice, it is useful to examine common scenarios involving multiple actors. The following examples illustrate how responsibility may be allocated under the AI Act and liability directives.

Scenario 1: Third-Party Model in a High-Risk System

A company develops a high-risk AI system for triaging emergency room patients. It uses a third-party foundation model as its core engine. The company fine-tunes the model on its own medical data and integrates it into a user interface designed for doctors.

Provider obligations: The company that places the triage system on the market is the provider of the high-risk AI system. It must ensure that the entire system, including the third-party model, complies with the AI Act. This includes verifying that the model provider has provided adequate documentation, assessing the model’s robustness in the context of the medical use case, and conducting a conformity assessment. The company cannot simply rely on the third-party model provider’s compliance; it is responsible for the final system.

Model provider obligations: The foundation model provider is a provider of a GPAI model. If the model is used in high-risk systems, it must provide technical documentation and cooperate with the downstream provider. If the model is deemed to pose systemic risks, it must conduct evaluations and report incidents. If the model provider fails to do this, and this failure contributes to harm, the Liability Directive’s presumption of causality could apply.

Deployer obligations: The hospital using the triage system is a deployer. It must ensure that medical staff use the system with appropriate human oversight and that input data is of sufficient quality. If staff use the system outside the provider's instructions, or override its recommendations without clinical justification, and harm results, the hospital may be liable for failing to ensure adequate oversight.

Scenario 2: Open-Source Model Adapted by a System Integrator

A research institution releases an open-source computer vision model under a permissive license. A startup adapts this model for use in an autonomous delivery robot. The robot is later involved in an accident due to a failure to recognize a pedestrian in unusual clothing.

Provider obligations: The startup is the provider of the high-risk AI system (the robot). It must ensure that the adapted model is robust and safe for the intended environment. The fact that the model is open-source does not reduce these obligations. The startup must document how it adapted the model, what data it used for fine-tuning, and how it validated the system’s safety.

Open-source provider obligations: The original research institution is generally not considered a provider under the AI Act if it does not place the system on the market. However, if it actively promotes the model for safety-critical use without adequate warnings, it could face liability under national tort law. The AI Act’s obligations are triggered by commercial placement, not by the act of publishing code.

Operator obligations: The delivery service operator must follow the operational guidelines provided by the startup. If the operator disables safety features or uses the robot in unapproved conditions, it could be held liable for any resulting harm.

Scenario 3: AI-as-a-Service and the Deployer as Provider

A cloud provider offers a facial recognition API. A retail company uses this API to build a system for detecting shoplifters. The retail company configures the API, sets the confidence thresholds, and integrates it with its CCTV network.

Provider obligations: The cloud provider is the provider of the AI model (GPAI) and possibly the high-risk system if it offers a fully configured solution. If it only provides the API, the retail company may be considered the provider of the final system if it places it on the market (e.g., sells the shoplifting detection service to other retailers). If the retail company only uses it internally, it is a deployer.

Deployer obligations: As a deployer, the retail company must ensure that the system is used in compliance with GDPR and national law enforcement regulations. It must also ensure that staff are trained to interpret the system’s outputs and that false positives do not lead to unlawful detentions. Failure to do so could result in liability for discrimination or false imprisonment.

Documentation as Evidence of Due Diligence

Documentation is the primary mechanism for clarifying the chain of responsibility. The AI Act requires extensive documentation at every stage, and these documents become critical evidence in liability proceedings. The following are key documents that allocate and evidence responsibility:

Technical Documentation

Technical documentation must be kept up to date and include details on the system’s capabilities, limitations, intended purpose, data sources, risk management measures, and conformity assessment. For composite systems, it should clearly identify all third-party components and the provider’s due diligence in selecting and integrating them. If a provider can demonstrate that it conducted thorough due diligence on a third-party model, this may shift responsibility toward the model provider if the harm stems from a defect in that model.
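As an illustration of how a composite system's documentation might record third-party components and the due diligence performed on them, the sketch below uses a plain Python structure. The field names, company names, and dates are hypothetical; Annex IV of the AI Act prescribes the content of technical documentation, not a machine-readable format.

```python
# Hypothetical, minimal manifest for the technical documentation of a
# composite high-risk AI system. Field names are illustrative; Annex IV of
# the AI Act defines the required content, not a data format.
technical_documentation = {
    "system": {
        "name": "ER triage assistant",
        "intended_purpose": "Prioritise emergency-room patients for clinical review",
        "provider": "ExampleMed GmbH",          # entity placing the system on the market
        "known_limitations": ["not validated for paediatric patients"],
    },
    "third_party_components": [
        {
            "component": "foundation language model",
            "supplier": "ExampleModelCo",
            "documentation_received": ["model card", "training-data summary"],
            "due_diligence": "robustness reviewed against medical-domain test set, 2025-03",
        }
    ],
    "risk_management": {
        "identified_risks": ["missed high-acuity case", "demographic performance gaps"],
        "mitigations": ["mandatory clinician confirmation", "quarterly bias audit"],
    },
    "conformity_assessment": {
        "route": "internal control",   # or via a notified body, depending on the system
        "date": "2025-06-01",
    },
}
```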

Instructions for Use and Transparency Information

Providers must provide clear instructions for use, including information about the system’s limitations and the need for human oversight. Deployers who fail to follow these instructions may be considered negligent. Conversely, if the instructions are inadequate or misleading, the provider may bear responsibility. For example, if a provider fails to warn that a model performs poorly on certain demographic groups, and a deployer uses it in a context where that group is prevalent, the provider’s liability increases.

Conformity Assessments and Quality Management

High-risk AI systems must undergo conformity assessments before placement on the market. These assessments, conducted by the provider or a notified body, are formal evidence of compliance. In liability proceedings, a valid conformity certificate can serve as a strong defense, but it is not conclusive. If new evidence emerges that the system was unsafe, the conformity assessment can be challenged.

Post-Market Monitoring and Incident Reporting

Providers must operate a post-market monitoring system to collect experience from the deployed system and identify emerging risks. They must also report serious incidents to national authorities. This creates a continuous feedback loop. If a provider fails to act on reported incidents, they may be held liable for subsequent harm. Deployers play a key role here by reporting incidents to the provider, and their cooperation is essential for maintaining system safety.
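The feedback loop between deployer and provider can be made concrete with a small sketch. The classes, fields, and the notification placeholder below are hypothetical: the AI Act requires providers to report serious incidents to the competent authorities, but it does not prescribe this data structure or any automated reporting interface.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Incident:
    """Hypothetical record a deployer might send to the provider."""
    description: str
    occurred_on: date
    harm_to_persons: bool        # physical harm or serious fundamental-rights impact
    system_version: str

def notify_authority(incident: Incident) -> None:
    # Placeholder: in practice this is a formal report to the national
    # market surveillance authority within strict deadlines, not an API call.
    print(f"Serious incident reported: {incident.description}")

@dataclass
class PostMarketLog:
    """Illustrative provider-side log feeding post-market monitoring."""
    incidents: list = field(default_factory=list)

    def record(self, incident: Incident) -> None:
        self.incidents.append(incident)
        if incident.harm_to_persons:
            # Serious incidents trigger the reporting duty sketched above.
            notify_authority(incident)

log = PostMarketLog()
log.record(Incident("Triage score missed a high-acuity patient",
                    date(2025, 7, 3), harm_to_persons=True,
                    system_version="2.1.0"))
```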

Contracts and Service Level Agreements (SLAs)

While the AI Act and liability directives set statutory obligations, contracts between providers, integrators, and deployers are critical for allocating risk in practice. These agreements should specify the following points; a minimal sketch of a responsibility matrix capturing them appears at the end of this subsection:

  • Who is the provider of the final system?
  • What documentation will be provided by model suppliers?
  • Who is responsible for updates, patches, and monitoring?
  • What are the limitations on use, and who bears liability for misuse?
  • How will incidents be reported and investigated?

In the EU, parties have some freedom to allocate liability contractually, but they cannot contract out of mandatory legal obligations under the AI Act or evade liability for gross negligence or intentional harm.
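One way to make the contractual allocation explicit is a responsibility matrix agreed between the parties. The sketch below is a hypothetical example, not a legal template: the obligation labels paraphrase AI Act duties, the parties are fictitious, and the allocation shown would be fixed in the actual contract.

```python
# Hypothetical responsibility matrix for the triage-system example.
# The allocation is purely illustrative.
responsibility_matrix = {
    "conformity_assessment":        "ExampleMed GmbH (system provider)",
    "gpai_model_documentation":     "ExampleModelCo (model provider)",
    "security_patches_and_updates": "ExampleModelCo, applied by ExampleMed",
    "human_oversight_procedures":   "City Hospital (deployer)",
    "input_data_quality":           "City Hospital (deployer)",
    "serious_incident_reporting":   "ExampleMed GmbH, with deployer notification within 48h",
    "post_market_monitoring":       "ExampleMed GmbH",
}

def responsible_party(obligation: str) -> str:
    """Look up which party the contract assigns a given obligation to."""
    return responsibility_matrix.get(obligation, "unallocated - renegotiate")

print(responsible_party("serious_incident_reporting"))
```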

National Implementations and Cross-Border Considerations

While the AI Act is directly applicable across the EU and the liability rules take the form of directives requiring national transposition, enforcement and interpretation will largely occur at the national level. Member States must designate notifying authorities and market surveillance authorities, and national courts will hear liability claims, while supervision of GPAI model providers is centralized at the European Commission's AI Office. This leads to potential divergence in how the rules are applied.

Designation of Authorities and Procedural Rules

Each Member State must designate national competent authorities to oversee the AI Act. Institutional choices differ: some Member States are building on existing market surveillance bodies, others on cybersecurity agencies or data protection authorities, and several designations are still being finalized. These authorities will have different enforcement priorities and may interpret differently, for example, what falls within the high-risk categories. This can create uncertainty for cross-border providers who must comply with multiple interpretations.

Procedural Rules for Liability Claims

The Liability Directive sets minimum standards for liability claims, but Member States can introduce stricter rules. For example, some countries may have stronger consumer protection laws or collective redress mechanisms. In the Netherlands, the Dutch Civil Code already includes provisions on strict liability for defective products, which may be applied to AI systems. In contrast, other jurisdictions may rely more on fault-based liability. This means that the outcome of a similar case could differ depending on where the victim files the claim.

Interaction with GDPR and Sector-Specific Laws

AI liability does not exist in a vacuum. It intersects with GDPR, which imposes strict rules on automated decision-making and data protection. A harm caused by an AI system may also constitute a GDPR violation, leading to parallel proceedings. Similarly, sector-specific regulations in finance (e.g., CRD, MiFID), healthcare (e.g., Medical Device Regulation), and transport (e.g., vehicle type approval) impose additional obligations. Compliance with these sectoral rules can be evidence of due diligence, but non-compliance can also be used to establish negligence in liability claims.

Practical Steps for Managing Shared Responsibility

For professionals working with AI in Europe, managing shared responsibility requires a proactive, documentation-driven approach. The following practices are recommended:

1. Map the Actor Chain

Clearly identify all parties involved in the development, integration, deployment, and operation of the AI system. Document their roles and responsibilities in contracts and internal records.

2. Conduct Due Diligence on Third-Party Components

Before using a third-party model or software component, verify that the supplier has complied with relevant regulations. Request technical documentation, conformity assessments, and evidence of a risk management system. Document this due diligence.
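A lightweight way to document this due diligence is a structured checklist stored with the procurement record. The items below are an illustrative, non-exhaustive selection chosen for this article; they do not reproduce any official checklist.

```python
# Illustrative due-diligence checklist for a third-party model or component.
# The questions are a non-exhaustive selection, not an official list.
supplier_due_diligence = {
    "technical_documentation_received": True,
    "training_data_summary_received": True,
    "known_limitations_disclosed": True,
    "performance_evidence_for_intended_use": False,   # e.g. no domain benchmark yet
    "update_and_support_commitments_in_contract": True,
    "incident_cooperation_clause_in_contract": True,
}

open_items = [item for item, done in supplier_due_diligence.items() if not done]
if open_items:
    print("Outstanding before integration:", ", ".join(open_items))
```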

3. Maintain Comprehensive Documentation

Keep technical documentation, instructions for use, risk management records, and post-market monitoring reports up to date. Ensure that documentation is accessible to deployers and, where necessary, to authorities.

4. Implement Robust Incident Reporting

Establish clear procedures for reporting incidents both internally and to the provider. Ensure that deployers understand their reporting obligations and are not discouraged from reporting by fear of liability.

5. Review and Update Contracts

Work with legal counsel to ensure that contracts reflect the AI Act’s obligations and allocate risk appropriately. Pay special attention to clauses on updates, modifications, and limitations of use.

6. Monitor Regulatory Developments

Stay informed about national implementing laws, guidance from authorities, and court decisions. The interpretation of the AI Act and liability directives will evolve, and early adopters of best practices will be better positioned to manage liability.

Shared responsibility in AI is not a problem to be solved but a reality to be managed. The European regulatory framework provides tools for this management, but their effectiveness depends on rigorous documentation, clear communication, and a culture of safety. For providers, integrators, deployers, and operators, the path forward is to treat compliance not as a checkbox but as an ongoing process of evidence-based risk management.
