How Legal Precedent Shapes AI Guidance and Practice
European institutions frequently debate how to govern artificial intelligence, yet the most influential rules often emerge not from new statutes but from the steady accumulation of legal precedent and administrative guidance. In the European Union’s legal order, precedent does not operate as it does in common law systems, where judicial decisions create binding rules for lower courts. Instead, a complex interplay of case law from the Court of Justice of the European Union (CJEU), rulings by national adjudicatory bodies, and decisions and guidance from data protection authorities and sector-specific regulators shapes how AI systems are treated under existing frameworks. This process influences procurement specifications, institutional governance policies, and product conformity assessments long before a new regulation enters into force. Understanding this dynamic is essential for any professional deploying AI in the public sector, healthcare, finance, or critical infrastructure.
How EU Case Law Creates De Facto AI Rules
The CJEU interprets directives and regulations that were not written with modern AI in mind. Its rulings on data protection, automated decision-making, and liability create interpretive constraints that directly affect AI systems. The General Data Protection Regulation (GDPR) is a prime example. Although it does not mention “AI” explicitly, its provisions on automated individual decisions (Article 22) and the much-debated “right to explanation” (which appears in recital 71 rather than in the operative articles) have been the subject of significant judicial and scholarly attention. National courts have referred questions to the CJEU about what constitutes “automated decision-making” when machine learning models influence outcomes such as credit scoring or social benefit eligibility.
When a national court asks whether a credit model that outputs a risk score is an “automated decision” under Article 22, the CJEU’s answer defines the scope of human oversight and the meaningful information that must be provided to data subjects; in SCHUFA (C-634/21), for example, the Court held that the automated generation of a probability value can itself fall within Article 22 where it plays a determining role in the final decision. Even before such rulings, regulators and public buyers adjust their practices. For instance, the European Data Protection Board (EDPB) has endorsed guidance interpreting “decision based solely on automated processing” to include situations where a human review is a mere formality. This interpretation pushes public authorities to design review workflows that are genuinely substantive, not rubber-stamp approvals. Procurement teams incorporate these expectations into tender documents, requiring vendors to demonstrate how human operators can intervene and override model outputs.
Another critical area is the concept of joint controllership under GDPR. In the context of AI, this arises when a public hospital uses a third-party diagnostic tool that processes patient images. The CJEU’s expansive view of joint controllership, as articulated in cases like Fashion ID and Jehovah’s Witnesses, suggests that both the provider of the AI system and the deploying institution may share responsibility for data processing. This interpretation influences data processing agreements and liability clauses. Hospitals now require AI vendors to accept contractual responsibility for data protection impacts, and they document governance measures to demonstrate compliance with the principle of “data protection by design and by default.”
Automated Decision-Making and the Right to a Human Review
The right to obtain human intervention and to contest a decision is not merely a procedural formality; it is a substantive safeguard. National courts have clarified that “human intervention” must be capable of examining the merits of the decision and reversing it. This has practical consequences for system design. An AI tool that provides a risk score to a caseworker, accompanied only by a brief summary of factors, may not meet the standard if the caseworker cannot access the underlying logic or challenge the model’s output effectively.
Administrative guidance in several Member States now requires that the human reviewer be trained, have access to the model’s documentation, and be empowered to deviate from the recommendation. In the Netherlands, for example, the Dutch Data Protection Authority (AP) has scrutinized automated benefit fraud detection systems and found that the human review was insufficient because the caseworker lacked the technical means to interrogate the model. This precedent has led to procurement language that mandates explainable AI features, audit logs, and the ability to simulate alternative outcomes. It also informs institutional governance: agencies must establish internal appeals procedures and record-keeping practices that capture the rationale for overriding or following an AI recommendation.
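What “the ability to simulate alternative outcomes” can mean in practice is easier to see with a small sketch. The example below is purely illustrative: the scoring function, feature names, and weights are assumptions invented for this walkthrough, not features of any real benefit-fraud or credit system; a real deployment would call the vendor’s model behind a similar interface.

```python
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class Applicant:
    # Hypothetical features for an illustrative risk model.
    income: float
    open_debts: int
    years_at_address: int


def risk_score(a: Applicant) -> float:
    """Stand-in scoring function; a real deployment would call the vendor's model."""
    score = 0.4 * min(a.open_debts / 5, 1.0)
    score += 0.4 * max(0.0, 1.0 - a.income / 50_000)
    score += 0.2 * max(0.0, 1.0 - a.years_at_address / 10)
    return round(score, 3)


def simulate_alternative(a: Applicant, **changes) -> dict:
    """Let a reviewer ask 'what if this input were different?' and compare outcomes."""
    counterfactual = replace(a, **changes)
    return {
        "original_score": risk_score(a),
        "alternative_score": risk_score(counterfactual),
        "changed_fields": changes,
    }


if __name__ == "__main__":
    applicant = Applicant(income=28_000, open_debts=3, years_at_address=1)
    # The reviewer checks whether the flag survives a plausible correction to the data.
    print(simulate_alternative(applicant, open_debts=1))
```

The point is that the reviewer can vary an input they believe is wrong and see whether the recommendation survives, which is the kind of substantive interrogation regulators have asked for.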
Profiling and Discrimination: Indirect Evidence and Burden Shifting
Discrimination law in the EU relies on the principle of indirect discrimination, where a seemingly neutral practice disproportionately affects a protected group. Proving such discrimination in AI systems is challenging because the model’s internal logic is often opaque. Courts and equality bodies have developed evidentiary approaches that shift the burden of proof to the data controller once a prima facie case is established. This approach is informed by CJEU case law on burden shifting in discrimination cases.
As a result, public authorities using AI for recruitment or policing must be prepared to provide statistical evidence that their models do not produce discriminatory outcomes. They must also document measures taken to mitigate bias, such as dataset audits, fairness metrics, and post-deployment monitoring. Procurement contracts increasingly require vendors to supply model cards and datasheets for datasets, enabling the contracting authority to assess potential discrimination risks before deployment. This is not merely good practice; it is a way to manage legal exposure in the event of a challenge.
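The statistical evidence mentioned above often begins with something as simple as comparing selection rates across groups. The sketch below, using fabricated records, computes per-group selection rates and the ratio of each rate to the most favoured group; the group labels and the idea of flagging low ratios for further review are illustrative assumptions, not a legal test.

```python
from collections import defaultdict


def selection_rates(outcomes):
    """outcomes: iterable of (group_label, selected: bool) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, picked in outcomes:
        totals[group] += 1
        selected[group] += int(picked)
    return {g: selected[g] / totals[g] for g in totals}


def impact_ratios(rates):
    """Ratio of each group's rate to the highest rate; low values flag a closer look."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}


if __name__ == "__main__":
    # Fabricated illustrative records: (group, shortlisted?)
    records = ([("A", True)] * 48 + [("A", False)] * 52
               + [("B", True)] * 30 + [("B", False)] * 70)
    rates = selection_rates(records)
    print(rates)                 # group A selected at 48%, group B at 30%
    print(impact_ratios(rates))  # group B shortlisted at roughly 62.5% of group A's rate
```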
Guidance Documents as Soft Law with Hard Consequences
Guidance from EU bodies and national regulators is not legally binding, but it carries significant weight. Courts refer to guidance when interpreting obligations, and public buyers use it to structure tenders. The European Commission’s guidance on procuring AI systems in the public sector sets out expectations for transparency, human oversight, data governance, and technical robustness. It emphasizes that public authorities should avoid “black box” systems and ensure that they retain meaningful control over the technology.
This guidance has been operationalized in national frameworks. In France, Etalab, the data and algorithms team within the interministerial digital directorate (DINUM), and the public procurement authorities have issued recommendations that echo the Commission’s guidance, adding specific requirements for algorithmic transparency in public services. In Germany, the Federal Office for Information Security (BSI) has published standards for AI security that reference existing IT security laws and guidance, shaping how public bodies assess the cybersecurity risks of AI systems. These documents do not create new legal obligations, but they crystallize interpretations of existing law and provide a benchmark for compliance.
EDPB and National DPAs: Shaping AI through Enforcement and Opinions
The EDPB and national data protection authorities have been particularly active in shaping AI practice. The EDPB has adopted opinions on the interplay between GDPR and AI, focusing on data minimization, purpose limitation, and the lawfulness of processing. National DPAs have issued decisions on specific AI deployments, such as facial recognition in public spaces or automated credit scoring. These decisions serve as practical precedents for other authorities and private entities.
For instance, the Swedish DPA fined a municipality after a school trialled facial recognition to track pupil attendance, finding that the processing was disproportionate because attendance could be monitored by less intrusive means. The decision sharpened the necessity and proportionality analysis for biometric processing by public bodies and influenced how other authorities evaluate such systems. Similarly, the UK Information Commissioner’s Office (before Brexit) and subsequently the Irish Data Protection Commission have examined ad-tech and real-time bidding systems, concluding that certain AI-driven profiling practices do not comply with GDPR. These enforcement actions guide procurement teams to avoid technologies that rely on non-compliant data processing.
Algorithmic Impact Assessments and the ALTAI Framework
The “Assessment List for Trustworthy AI” (ALTAI), developed by the High-Level Expert Group on AI and published by the European Commission, is a non-binding tool that helps organizations evaluate AI systems against the seven requirements of the group’s Ethics Guidelines for Trustworthy AI. While not a legal instrument, ALTAI has become a de facto standard in public procurement and institutional governance. Many contracting authorities require vendors to complete an ALTAI-based assessment as part of the tender process. This practice embeds ethical and legal considerations, such as robustness, fairness, and transparency, into procurement criteria.
Some Member States have gone further by mandating algorithmic impact assessments in specific domains. In the Netherlands, the Dutch Algorithm Register encourages public bodies to document the purpose, data sources, and risks of algorithms used in public decision-making. Although participation is voluntary, the register increases transparency and accountability. It also creates a repository of practices that courts and regulators can reference when assessing whether a public authority has met its obligations under GDPR and national administrative law.
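A register entry is, at its core, structured documentation that can be published and versioned. The sketch below shows one way to capture the kinds of fields such registers typically ask for; the field names and example values are assumptions modelled loosely on public registers, not the official Dutch schema.

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class RegisterEntry:
    # Field names are illustrative; official registers define their own schemas.
    system_name: str
    purpose: str
    legal_basis: str
    data_sources: list[str]
    identified_risks: list[str]
    human_oversight: str
    contact: str


entry = RegisterEntry(
    system_name="Benefit application triage (example)",
    purpose="Prioritise manual review of benefit applications",
    legal_basis="National social security legislation; GDPR Art. 6(1)(e)",
    data_sources=["application forms", "municipal registry extracts"],
    identified_risks=["indirect discrimination", "automation bias in review"],
    human_oversight="Caseworker decides; model output is advisory only",
    contact="algorithm-register@example.org",
)

print(json.dumps(asdict(entry), indent=2, ensure_ascii=False))
```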
Liability and Product Safety: Anticipating the AI Liability Directive
Liability for AI-related harm is governed by existing frameworks such as the Product Liability Directive (PLD), the Unfair Commercial Practices Directive, and national tort law. The European Commission has proposed an AI Liability Directive to complement the PLD and to address specific challenges posed by AI, such as fault attribution and the difficulty of proving causality. Until this directive is adopted and implemented, courts and regulators rely on current law and interpret it in light of AI characteristics.
Under the current PLD, a product is defective if it does not provide the safety that a person is entitled to expect. Courts consider factors such as the presentation of the product, the reasonably expected use, and the state of the art at the time of placing on the market. In the context of AI, the “state of the art” is dynamic. A model that was considered safe when deployed may become inadequate as new adversarial techniques emerge. This evolving standard influences how public authorities manage ongoing conformity assessments and incident reporting. Procurement contracts often include clauses requiring vendors to update models to address new threats and to notify the contracting authority of any safety issues.
The proposed AI Liability Directive introduces a presumption of causality and a right to disclosure of evidence, making it easier for claimants to seek redress. Even before adoption, the direction of travel is clear: providers and deployers will need to maintain detailed technical documentation and risk management records. Public buyers are already incorporating these expectations into governance frameworks, requiring vendors to provide access to logs, model versions, and risk assessments in the event of an incident.
High-Risk AI and Conformity Obligations under the AI Act
The AI Act introduces harmonised rules for high-risk AI systems, including conformity assessments, risk management systems, and post-market monitoring. While the AI Act is a regulation and directly applicable, its implementation will rely on guidance and case law. The high-risk category covers safety components of products regulated under Annex I as well as the use cases listed in Annex III, and the Annex III list is subject to review and possible amendment. National market surveillance authorities will interpret the boundaries, and their decisions will set precedents for what constitutes high-risk in specific sectors.
For example, AI systems used in critical infrastructure, education, employment, and law enforcement are presumed high-risk. A system that supports recruitment by filtering CVs falls within the employment category precisely because it influences access to employment. How far phrases such as “influences access to employment” extend will be shaped by guidance from the European Commission and enforcement actions by national authorities. Public procurement teams must therefore assess whether an AI tool falls within the high-risk category and ensure that the vendor complies with the relevant obligations, including the use of high-quality training data, human oversight, and robustness testing.
Comparative Approaches across Europe
While EU-level regulations provide a harmonised baseline, national implementations vary. In Germany, national implementing legislation accompanying the AI Act (often discussed as a “KI-Gesetz”) is expected to align with the regulation but may introduce additional requirements for public sector deployments, particularly around transparency and citizen rights. The German approach emphasizes the duty to give reasons in administrative law, requiring public bodies to provide meaningful information about algorithmic decisions.
In France, the “Loi pour une République numérique” includes provisions on algorithmic transparency and the right to access algorithmic logic used in public decision-making. This national law predates the AI Act and has already influenced how public authorities document and explain AI systems. In Spain, the Catalan region has introduced an algorithmic registry for public sector AI, creating a model for transparency that other regions may adopt. These national and regional precedents inform how the AI Act will be applied in practice and provide a rich source of guidance for public buyers.
Procurement Language: Translating Precedent into Contractual Obligations
Public procurement is a key mechanism for embedding legal and ethical standards into AI deployments. Contracting authorities use technical specifications, award criteria, and contract performance clauses to require compliance with GDPR, the AI Act, and guidance from regulators. Precedent shapes these documents by highlighting risks and best practices.
For instance, following enforcement actions on discriminatory algorithms in benefits administration, many public bodies now include specific clauses on bias testing and mitigation. A typical specification may require the vendor to provide evidence of dataset representativeness, perform fairness audits across protected characteristics, and implement continuous monitoring for drift. Award criteria may allocate points for explainability features and the ability to provide counterfactual explanations to affected individuals.
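Continuous monitoring for drift is often specified as a distribution comparison between the scores a model produced at acceptance testing and the scores it produces in production. One common choice is the population stability index; the sketch below is a minimal version with assumed binning and an illustrative alert threshold rather than any contractual standard.

```python
import math


def population_stability_index(baseline, current, bins=10):
    """Compare two score samples (values in [0, 1]) with a simple PSI.

    Values above roughly 0.2 are commonly treated as a signal to investigate;
    the threshold and binning here are illustrative assumptions.
    """
    def histogram(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(int(x * bins), bins - 1)
            counts[idx] += 1
        total = len(sample)
        # A small floor avoids division by zero and log(0) for empty bins.
        return [max(c / total, 1e-6) for c in counts]

    b, c = histogram(baseline), histogram(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))


if __name__ == "__main__":
    baseline_scores = [i / 1000 for i in range(1000)]                     # uniform baseline
    drifted_scores = [min(1.0, (i / 1000) ** 0.5) for i in range(1000)]   # skewed upwards
    psi = population_stability_index(baseline_scores, drifted_scores)
    print(round(psi, 3), "investigate" if psi > 0.2 else "stable")
```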
Another area influenced by precedent is data governance. The CJEU’s rulings on data minimization and purpose limitation have led to requirements that AI systems use only the data necessary for the intended purpose and that data retention periods be strictly defined. Procurement contracts often include data processing agreements that mirror GDPR obligations, with explicit provisions for security measures, breach notification, and cooperation with supervisory authorities.
Human oversight is also a central theme. Drawing on DPA decisions that found human review to be insufficient, procurement language now specifies the qualifications of human reviewers, the level of access they must have to model outputs, and the documentation required for their decisions. Some contracts require that the AI system provide an “override” function that logs when a human deviates from the model’s recommendation and captures the rationale.
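An override log of the kind described above can be very simple as long as it is append-only and captures who deviated, from what recommendation, and why. The sketch below writes one JSON record per review to a local file; the field names, file location, and example values are hypothetical.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("override_log.jsonl")  # illustrative location


def log_review(case_id, model_version, recommendation, decision, reviewer, rationale):
    """Append one review record; an override is any decision that differs from the recommendation."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,
        "model_version": model_version,
        "model_recommendation": recommendation,
        "human_decision": decision,
        "override": decision != recommendation,
        "reviewer": reviewer,
        "rationale": rationale,
    }
    with LOG_PATH.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record, ensure_ascii=False) + "\n")
    return record


if __name__ == "__main__":
    print(log_review(
        case_id="2024-0113",
        model_version="fraud-screen-1.4.2",
        recommendation="flag_for_investigation",
        decision="no_action",
        reviewer="caseworker-07",
        rationale="Supporting documents explain the anomaly; the model lacked this context.",
    ))
```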
Open-Source Models and Vendor Lock-In
Precedent also influences decisions around open-source versus proprietary models. Public authorities are increasingly interested in open-source AI to ensure transparency and avoid vendor lock-in. However, using open-source models raises questions about liability and support. Procurement teams must assess whether the model is maintained, whether security patches are available, and whether the license permits commercial use and modification.
Guidance from national digital agencies often recommends that public bodies prefer open standards and open-source solutions where feasible. In practice, this means that vendors may need to provide model weights, training code, or at least detailed documentation to enable independent auditing. The precedent set by earlier projects that failed due to opaque vendor offerings has led to more stringent requirements for technical transparency and exit strategies.
Incident Reporting and Post-Market Monitoring
The AI Act introduces obligations for post-market monitoring, but public buyers have already begun to incorporate incident reporting clauses into contracts. Drawing on experience from product safety and cybersecurity domains, these clauses define what constitutes an incident, the timelines for reporting, and the responsibilities for remediation. For example, a vendor may be required to notify the contracting authority within 24 hours of discovering a vulnerability that could affect the system’s safety or performance.
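Contractual reporting windows are easier to operationalise when the deadline is computed rather than remembered. The sketch below derives the notification deadline from the discovery time using the 24-hour window in the example clause above; the incident fields and reference format are invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

NOTIFICATION_WINDOW = timedelta(hours=24)  # mirrors the example clause above


@dataclass
class Incident:
    reference: str
    description: str
    discovered_at: datetime

    @property
    def notify_by(self) -> datetime:
        return self.discovered_at + NOTIFICATION_WINDOW

    def is_overdue(self, now: datetime | None = None) -> bool:
        return (now or datetime.now(timezone.utc)) > self.notify_by


incident = Incident(
    reference="INC-2024-007",
    description="Prompt injection allows retrieval of other users' case notes",
    discovered_at=datetime(2024, 3, 1, 9, 30, tzinfo=timezone.utc),
)
print(incident.notify_by.isoformat())  # 2024-03-02T09:30:00+00:00
print(incident.is_overdue(datetime(2024, 3, 2, 12, 0, tzinfo=timezone.utc)))  # True
```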
Precedent from data breach notifications under GDPR has shaped these clauses, emphasizing the importance of timely communication and cooperation. Public authorities also require vendors to participate in joint incident response exercises and to maintain a dedicated security contact. These practices ensure that AI systems are managed with the same rigor as other critical IT systems.
Institutional Governance: Building Internal Capacity and Oversight
Public authorities are not passive recipients of AI technology; they must establish governance structures that ensure lawful and ethical use. Precedent from enforcement actions and guidance documents informs the design of these structures. Many institutions have created AI ethics committees, data protection officers with technical expertise, and internal audit functions that review algorithmic systems.
Internal governance policies often require an “AI impact assessment” before deployment. This assessment considers data protection, fundamental rights, security, and operational risks. It is analogous to a Data Protection Impact Assessment (DPIA) but broader. The process involves consultation with legal, technical, and operational stakeholders and may include external expert review. The outcome is a documented rationale for deployment, including risk mitigation measures and a plan for ongoing monitoring.
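The documented rationale produced by such an assessment can itself be captured as structured data so that it can be reviewed, versioned, and escalated consistently. The sketch below is one minimal way to do that; the questions, risk scale, and escalation rule are illustrative assumptions, not a prescribed methodology.

```python
from dataclasses import dataclass


@dataclass
class AssessmentItem:
    question: str
    answer: str
    residual_risk: str  # e.g. "low", "medium", "high" (illustrative scale)


def requires_escalation(items, blocking=frozenset({"high"})):
    """Flag the assessment for senior sign-off if any residual risk remains high."""
    return [i for i in items if i.residual_risk in blocking]


assessment = [
    AssessmentItem("Is a DPIA required and completed?",
                   "Yes, completed 2024-02-12", "low"),
    AssessmentItem("Which fundamental rights could be affected?",
                   "Non-discrimination; mitigations documented", "medium"),
    AssessmentItem("Can outputs be explained to affected persons?",
                   "Only partially for edge cases", "high"),
]

for item in requires_escalation(assessment):
    print("Escalate:", item.question)
```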
Training is another critical element. Courts and regulators have indicated that lack of training can be a factor in determining liability. Public bodies now invest in training programs for staff who manage or use AI systems, covering topics such as data protection, bias detection, and the limitations of AI. This helps ensure that human oversight is meaningful and that decisions are defensible.
Algorithmic Registers and Transparency
Transparency is a recurring theme in precedent. French obligations to publish the rules of public-sector algorithms and the Dutch Algorithm Register provide models for documenting AI systems used in public decision-making. Such registers typically include information about the purpose of the system, the data used, the logic involved, and the risks identified. They enable public scrutiny and facilitate compliance with transparency obligations under national law and GDPR.
Transparency registers also serve as a governance tool within the institution. They encourage teams to think carefully about the justification for using AI and to document their decisions. In the event of a legal challenge, the register can demonstrate that the authority considered the relevant factors and followed a structured process.
Human Rights and Proportionality Assessments
Public authorities using AI in areas such as policing or migration control must assess proportionality under the European Convention on Human Rights (ECHR) and the EU Charter of Fundamental Rights. Case law from the European Court of Human Rights (ECtHR) and the CJEU sets out the requirements for necessity and proportionality. For example, the use of facial recognition in public spaces has been found to be a serious interference with privacy and must be strictly necessary and proportionate to a legitimate aim.
These precedents guide institutional governance by requiring a documented justification for any AI system that affects fundamental rights. The assessment must consider less intrusive alternatives, the scope of data collection, and safeguards against abuse. Procurement specifications often reflect this by requiring vendors to provide evidence that their solution is the least intrusive option that meets the operational need.
Interplay between EU Regulations and National Law
Understanding the relationship between EU-level regulations and national law is crucial for effective governance. The AI Act and GDPR are directly applicable, but Member States have discretion in certain areas, such as specifying competent authorities, setting penalties, and implementing complementary national rules. National courts interpret EU law in the context of domestic legal traditions, creating a layer of national precedent that influences how EU rules are applied.
For example, GDPR allows Member States to set specific rules for processing in the public interest. Germany has used this flexibility to adopt provisions on data processing by public bodies that complement GDPR. These national rules affect how AI systems are designed and deployed, particularly in areas like social security and law enforcement. Public authorities must therefore be familiar with both EU-level obligations and the relevant national legislation.
Similarly, the AI Act leaves certain details to implementing acts and harmonised standards. National market surveillance authorities will play a key role in enforcement and interpretation. Their decisions will create a body of practice that informs how the regulation is applied in different sectors. Public buyers should monitor guidance from national authorities and adapt their procurement and governance practices accordingly.
Regulatory Sandboxes and Innovation Support
The AI Act provides for regulatory sandboxes, controlled environments in which AI systems can be developed and tested under the supervision of competent authorities before they are placed on the market or put into service.
