Transparency in Public AI Systems: Notices and Contestability
Transparency is not a single feature to be bolted onto an artificial intelligence system; it is a design and governance discipline that shapes how a system is conceived, deployed, and contested. In public sector contexts—where automated decisions can affect access to benefits, housing, policing, healthcare, and education—transparency becomes the operational backbone of due process and democratic accountability. European legal frameworks have moved from aspirational principles to enforceable obligations, and the practical task for institutions is to translate those obligations into verifiable engineering and procedural practices. This article explains what transparency looks like in public AI systems, focusing on notices, explanations, contestability, and record-keeping, while distinguishing between EU-level regulation and national implementations.
At the core of the European approach lies a layered concept of transparency that includes information duties before processing, meaningful information at the point of decision, the right to obtain explanations after automated decisions, and the ability to contest outcomes effectively. The General Data Protection Regulation (GDPR) provides the baseline for data protection, while the Law Enforcement Directive (LED) sets specific rules for policing and criminal justice. The Artificial Intelligence Act (AI Act) adds horizontal obligations for high-risk AI systems, including transparency requirements that intersect with the GDPR's information and explanation rights. Sectoral rules and national public law principles add further constraints, particularly when public bodies use AI to allocate services or enforce regulations.
Legal Foundations: GDPR, LED, and the AI Act
The GDPR anchors transparency in Articles 12 to 15, which require clear, concise, and easily accessible notices about processing purposes, data categories, recipients, retention periods, and the existence of automated decision-making. Article 22 protects individuals from being subject to a decision based solely on automated processing, including profiling, that produces legal effects or similarly significantly affects them. Crucially, Article 22(3) grants the data subject the right to human intervention, to express their point of view, and to contest the decision. Articles 13(2)(f) and 14(2)(g) require that controllers inform data subjects about the existence of automated decision-making, including meaningful information about the logic involved and the envisaged consequences. Recital 71 adds the right to obtain an explanation of the decision reached, which is the seed of explainability obligations.
In the law enforcement context, the LED mirrors these concepts but adapts them to the realities of policing and criminal justice. Articles 24 and 25 impose documentation and logging duties; Articles 12 and 13 impose information duties; Article 14 provides a right of access and Article 16 rights to rectification and erasure; and Article 11 establishes safeguards against solely automated decisions with adverse legal or similarly significant effects, including the right to obtain human intervention. Member States may introduce specific limitations or conditions, but the core of human oversight and contestability remains.
The AI Act, adopted in 2024 and applicable in stages from 2025 to 2027, adds a horizontal transparency layer. High-risk AI systems used by public authorities must undergo conformity assessment and comply with risk management, data governance, logging, and transparency obligations. Article 13 requires that high-risk systems be designed so that deployers can interpret their outputs and use them appropriately, while Article 50 requires that people be informed when they are interacting with an AI system. The Act's documentation obligations (Article 11) and logging requirements (Article 12) are designed to enable post-hoc audits and to support the exercise of rights under the GDPR and LED. The AI Act does not replace the GDPR; it complements it. A public sector AI system must satisfy both regimes simultaneously: the AI Act's technical governance and the GDPR's individual rights.
What “Transparency” Means in Practice
Transparency in public AI systems is not simply about publishing a privacy notice. It is a continuum that spans pre-deployment communication, real-time notifications, post-decision explanations, and ongoing auditability. In practice, it involves:
- Pre-deployment notices that describe the system’s purpose, data sources, performance metrics, known limitations, and oversight arrangements.
- Point-of-decision notifications that inform individuals when an AI system is being used, that a decision may be automated, and how to seek human review.
- Post-decision explanations that convey the principal reasons for a decision in a manner that is comprehensible and useful for contesting the outcome.
- Record-keeping and logging that preserve the decision pathway, inputs, model version, and human oversight actions for audit and redress.
These elements must be tailored to the context. A social benefits eligibility algorithm requires different explainability techniques than a predictive policing tool or a biometric identification system. The GDPR’s standard of “meaningful information about the logic involved” does not require disclosure of proprietary algorithms or trade secrets; it requires a clear articulation of the factors that drove the decision, their relative importance, and the range of possible outcomes. The AI Act reinforces this by requiring that high-risk systems enable traceability and human oversight, which in turn supports the provision of explanations.
Distinguishing EU-Level and National Implementation
While the GDPR and AI Act set EU-wide baselines, national implementations matter. Member States have discretion in several areas:
- Article 22 carve-outs: Some countries allow purely automated decisions in specific contexts (e.g., tax assessments) subject to safeguards, while others impose stricter limits.
- Public law principles: National administrative codes often require reasoned decisions, public consultation for automated tools, and impact assessments before deployment.
- Supervisory authorities: Data Protection Authorities (DPAs) and sectoral regulators interpret obligations differently. For example, the French CNIL has issued detailed guidance on explainability and profiling, while the UK ICO (now operating under the UK GDPR rather than the EU framework) has focused on fairness and transparency in automated decision-making.
- Law enforcement frameworks: LED implementations vary, with some Member States imposing additional oversight for AI used in policing and others integrating it into existing judicial review mechanisms.
Practically, this means that a public body operating across regions must map its AI systems to both EU-level obligations and local administrative and data protection rules. It also means that transparency artifacts (notices, explanations, audit logs) must be designed to satisfy the strictest applicable standard to ensure cross-border compliance.
Notices: What to Communicate and When
Effective notices are not generic boilerplate; they are context-specific communications that prepare individuals for the possibility of automated decision-making and inform them of their rights. The GDPR requires privacy notices at the point of data collection, but public AI systems often involve data collected for one purpose and reused for another. In such cases, re-noticing may be necessary when the AI system is introduced, especially if the new processing is likely to produce legal or similarly significant effects.
Pre-Deployment Transparency
Before deploying a high-risk AI system, public bodies should publish a system card or public impact statement that includes:
- The system’s objective and scope, including the specific administrative decision it supports.
- Data provenance: sources, categories, quality controls, and steps taken to address bias or representativeness.
- Model characteristics: type of model, training approach, performance metrics, known limitations, and uncertainty levels.
- Human oversight: roles, responsibilities, and intervention thresholds.
- Redress mechanisms: how to request human review, timelines, and escalation paths.
- Audit and compliance: references to DPIAs, AI risk management documentation, and logs.
Such disclosures build public trust and enable civil society oversight. They also align with the AI Act’s obligations on user information and with public procurement requirements that increasingly mandate transparency and accountability clauses.
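As an illustration, a pre-deployment system card can be maintained as structured data so that the same record feeds a public register, the procurement file, and the notice templates. The following is a minimal sketch assuming a hypothetical benefits-triage system; all field names and values are illustrative, not a prescribed legal schema.

```python
from dataclasses import dataclass, field, asdict
import json


@dataclass
class SystemCard:
    """Illustrative public system card for a high-risk AI system.

    Field names are hypothetical; they mirror the disclosure elements
    listed above rather than any mandated format.
    """
    system_name: str
    objective: str                       # administrative decision supported
    data_sources: list[str]              # provenance and categories
    bias_controls: str                   # steps taken on representativeness
    model_type: str
    performance_summary: dict[str, float]
    known_limitations: list[str]
    human_oversight: str                 # roles and intervention thresholds
    redress_contact: str                 # how to request human review
    audit_references: list[str] = field(default_factory=list)  # DPIA, risk file, logs

    def publish(self) -> str:
        """Serialise the card for a public register or website."""
        return json.dumps(asdict(self), indent=2, ensure_ascii=False)


card = SystemCard(
    system_name="Housing Benefit Triage (example)",
    objective="Prioritise applications for manual eligibility checks",
    data_sources=["tax authority income data", "civil registry household data"],
    bias_controls="Annual representativeness review across age and region",
    model_type="rule-based scoring with a logistic regression tie-breaker",
    performance_summary={"accuracy": 0.91, "false_positive_rate": 0.06},
    known_limitations=["Self-employed income is often reported with a lag"],
    human_oversight="Caseworkers review every adverse recommendation",
    redress_contact="review@agency.example",
    audit_references=["DPIA-2024-017", "AI-RM-004"],
)
print(card.publish())
```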
Point-of-Decision Notices
When an automated or AI-assisted decision is made, the individual must be informed clearly and in plain language. Best practices include:
- Explicit statement that an automated system was used and that a human may review the decision upon request.
- Summary of the main factors considered by the system, in non-technical terms.
- Clear instructions on how to contest the decision, including contact points and deadlines.
- Accessible formats, including multilingual and accessible versions for persons with disabilities.
For high-risk systems, the AI Act requires that deployers understand the system's intended purpose and capabilities; this includes ensuring that recipients of decisions are not left in the dark about the role of AI in shaping outcomes.
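One way to keep point-of-decision notices consistent across channels is to render them from the same decision record used internally. The sketch below shows the idea; the field names and wording are assumptions, and any real notice text would need legal and accessibility review before use.

```python
def render_decision_notice(decision: dict, contact: str, deadline_days: int) -> str:
    """Render a plain-language notice for an AI-assisted decision.

    `decision` is assumed to contain the outcome and the main factors
    considered; the wording below is illustrative only.
    """
    factors = "\n".join(f"  - {f}" for f in decision["main_factors"])
    return (
        f"Outcome: {decision['outcome']}\n"
        f"This decision was prepared with the help of an automated system.\n"
        f"The main factors considered were:\n{factors}\n"
        f"You may ask for a human official to review this decision.\n"
        f"Contact {contact} within {deadline_days} days of receiving this notice."
    )


notice = render_decision_notice(
    {"outcome": "Application declined",
     "main_factors": ["Reported income above the programme threshold",
                      "Household size does not qualify for an exception"]},
    contact="review@agency.example",
    deadline_days=30,
)
print(notice)
```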
Special Cases: Policing and Social Services
In law enforcement, full disclosure may be limited to avoid compromising investigations. However, the LED requires that individuals be informed of the existence of processing and their rights, unless a Member State law provides an exception. In social services, where decisions affect livelihoods, transparency must be maximal. Some countries require public registers of automated decision systems used in welfare administration, while others rely on case-by-case notifications. The common denominator is that individuals must know when AI is used and how to seek human review.
Explainability: From “Logic Involved” to Operational Practice
Explainability is the technical and procedural capability to provide reasons for a decision that are understandable to the data subject and useful for redress. The GDPR does not mandate a specific technique; it mandates a result: meaningful information about the logic involved. The AI Act complements this by requiring that high-risk systems enable traceability and that outputs are interpretable.
What Constitutes a “Meaningful Explanation”
A meaningful explanation typically answers three questions:
- Why this outcome? The principal reasons, expressed in terms of the decision’s criteria (e.g., income threshold, risk score, eligibility rule).
- What data mattered? The key inputs that influenced the decision, with sensitivity to personal circumstances.
- What could change? The range of alternative outcomes and the conditions under which the decision would differ.
Importantly, explanations need not reveal proprietary model details. They should avoid jargon and provide actionable information. For example, in a benefits eligibility decision, an explanation might state: “Your application was declined because your reported income exceeded the threshold for this program and your household size did not qualify for an exception. The decision was based on income data from the tax authority and household composition from the civil registry. If your income has changed or your household composition is incorrect, you may submit updated documentation for human review.”
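For rule-based eligibility checks, the principal reasons can be captured at the moment the decision is computed rather than reconstructed afterwards. The sketch below assumes a hypothetical two-rule benefits check; the thresholds and rule texts are invented for illustration.

```python
from dataclasses import dataclass


@dataclass
class RuleResult:
    rule: str      # human-readable statement of the criterion
    passed: bool
    detail: str    # how the applicant's data compared to the criterion


def assess_eligibility(income: float, household_size: int,
                       income_threshold: float = 30_000.0) -> tuple[bool, list[RuleResult]]:
    """Evaluate illustrative eligibility rules and record why each passed or failed."""
    results = [
        RuleResult(
            rule="Reported income must not exceed the programme threshold",
            passed=income <= income_threshold,
            detail=f"reported income {income:,.0f} vs threshold {income_threshold:,.0f}",
        ),
        RuleResult(
            rule="Households of three or more qualify for an exception",
            passed=household_size >= 3,
            detail=f"household size {household_size}",
        ),
    ]
    eligible = results[0].passed or results[1].passed
    return eligible, results


eligible, reasons = assess_eligibility(income=34_500, household_size=2)
print("Eligible:", eligible)
for r in reasons:
    print(("met: " if r.passed else "not met: ") + r.rule + f" ({r.detail})")
```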
Technical Approaches and Their Limits
Organizations use a mix of techniques to generate explanations:
- Transparent models: Decision trees, linear models, or rule-based systems where the logic is inherently interpretable.
- Post-hoc methods: Feature importance scores (e.g., SHAP values), counterfactuals (“If income were X, the outcome would be Y”), and local approximations.
- Hybrid designs: AI systems that flag cases for human review based on thresholds or uncertainty, with explanations prepared for review.
Each approach has trade-offs. Transparent models may be less accurate for complex patterns. Post-hoc explanations can be misleading if not carefully validated. Counterfactuals are powerful for contestability but require accurate causal modeling. The AI Act’s emphasis on risk management and data governance encourages selecting the simplest model that meets performance needs and ensuring that explanations are tested with real users.
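Counterfactual statements of the form "if income were X, the outcome would be Y" can be generated directly against the decision logic when it is available as code. The sketch below searches for the smallest income change that flips an illustrative stand-in decision function; for learned models, the same idea requires validated causal assumptions, as noted above.

```python
from typing import Optional


def decision(income: float, household_size: int) -> str:
    """Illustrative stand-in for the deployed decision logic."""
    if income <= 30_000.0 or household_size >= 3:
        return "approved"
    return "declined"


def income_counterfactual(income: float, household_size: int,
                          step: float = 100.0, max_change: float = 20_000.0) -> Optional[float]:
    """Find the smallest income reduction that would change the outcome.

    A brute-force search is enough for a single numeric feature; it makes
    no causal claims, it only probes the decision function as implemented.
    """
    baseline = decision(income, household_size)
    change = step
    while change <= max_change:
        if decision(income - change, household_size) != baseline:
            return income - change
        change += step
    return None


flip_at = income_counterfactual(income=34_500, household_size=2)
if flip_at is not None:
    print(f"If reported income were {flip_at:,.0f} or lower, the outcome would change.")
```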
Human Intervention as Explanation
In many public sector contexts, the most practical form of explanation is human review. Article 22(3) GDPR and Article 11 LED grant the right to human intervention. This means that a qualified official must be able to understand the system’s output, access the underlying data and logs, and make an independent decision. The explanation, in this sense, is the process: the official’s ability to articulate reasons informed by the system’s outputs. Public bodies should therefore design workflows where human reviewers receive structured summaries of the AI’s reasoning and are trained to interrogate model outputs.
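Human reviewers are better placed to articulate such reasons when the system hands them a structured summary rather than a bare score. The sketch below shows one possible shape for such a review packet and the reviewer's recorded outcome; the fields are assumptions, not a mandated format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ReviewPacket:
    """Structured summary handed to the human reviewer of an AI-assisted decision."""
    case_id: str
    model_version: str
    recommendation: str              # the system's proposed outcome
    confidence: float
    top_factors: list[str]           # main drivers, in plain language
    data_snapshot: dict[str, str]    # inputs as seen by the model
    known_limitations: list[str]     # caveats relevant to this case
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


@dataclass
class ReviewOutcome:
    """The reviewer's independent decision and reasoning, kept for the audit trail."""
    case_id: str
    reviewer_id: str
    decision: str                    # may differ from the recommendation
    reasons: str
    followed_recommendation: bool
```

Keeping the reviewer's reasons in the same record structure as the packet makes it straightforward to show, during an audit or appeal, whether and why the official departed from the system's recommendation.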
Contestability: Designing Effective Redress
Contestability is the ability of an individual to challenge an automated decision and obtain a timely, meaningful remedy. It is the practical expression of the right to object and the right to human review. Contestability requires more than a complaint form; it requires a pathway that integrates technical and procedural elements.
Procedural Requirements
Effective contestability includes:
- Clear access points: Dedicated channels for contesting AI-driven decisions, with simple forms and accessible guidance.
- Reasonable timelines: Deadlines for submitting evidence and for receiving a review outcome, aligned with administrative law.
- Information rights: The ability to obtain the explanation and the underlying data used in the decision, subject to lawful limitations.
- Escalation paths: Options to escalate to an independent authority or ombudsperson if the initial review is unsatisfactory.
From a technical standpoint, contestability depends on logging and versioning. If a decision is contested, the institution must be able to reconstruct which model version was used, which inputs were considered, and what human actions were taken. The AI Act’s logging requirement (Article 12) is designed precisely for this purpose.
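When a decision is contested, the review team needs to pull together the model version, the inputs, and any human actions for that decision identifier. The following is a minimal sketch of that reconstruction step, assuming an in-memory list of log entries with illustrative event names; in practice the entries would come from an append-only store.

```python
from typing import Any


def reconstruct_decision(decision_id: str,
                         log_entries: list[dict[str, Any]]) -> dict[str, Any]:
    """Assemble a review bundle for a contested decision from raw log entries.

    Each entry is assumed to carry a `decision_id`, a `timestamp`, an `event`
    type, and a `payload`; the event names used here are illustrative.
    """
    relevant = [e for e in log_entries if e.get("decision_id") == decision_id]
    bundle: dict[str, Any] = {
        "decision_id": decision_id,
        "model_version": None,
        "inputs": [],
        "output": None,
        "human_actions": [],
    }
    for entry in sorted(relevant, key=lambda e: e["timestamp"]):
        if entry["event"] == "inference":
            bundle["model_version"] = entry["payload"]["model_version"]
            bundle["inputs"] = entry["payload"]["inputs"]
            bundle["output"] = entry["payload"]["output"]
        elif entry["event"] == "human_review":
            bundle["human_actions"].append(entry["payload"])
    return bundle
```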
Interplay with Other Rights
Contestability often intersects with the right to rectification (Article 16 GDPR) and the right to erasure (Article 17 GDPR). If an individual shows that the data used by the AI system was inaccurate, the controller must rectify it and, where appropriate, reassess the decision. In policing contexts, rectification rights may be limited by the integrity of investigations, but the LED still provides avenues for review. Public bodies should establish clear policies on how data quality disputes are handled and how they affect automated outcomes.
Comparative Approaches in Europe
Practices vary. Some countries have introduced algorithm registers that list automated systems used by public bodies, their purposes, and oversight mechanisms. Others rely on sectoral regulators to audit AI systems and provide complaint mechanisms. In some jurisdictions, administrative courts have developed specialized procedures for challenging automated decisions, including the ability to request disclosure of system documentation. The common thread is that contestability must be practical, not theoretical. Individuals must be able to understand how to challenge a decision and see that their challenge has been considered by a competent human.
Record-Keeping and Auditability
Record-keeping is the backbone of transparency. Without reliable logs, explanations cannot be verified, and contestability becomes a formality. The GDPR requires controllers to maintain records of processing activities (Article 30). The LED has similar documentation duties. The AI Act adds specific requirements for high-risk systems: logging of events throughout the lifecycle to enable traceability and post-market surveillance.
What to Log
For public AI systems, logs should capture:
- System identity: Model version, configuration, and release date.
- Inputs: Data categories used for the decision, with timestamps and sources (without storing excessive personal data).
- Outputs: The decision or recommendation, confidence scores, and any thresholds applied.
- Human actions: Who reviewed the decision, what changes were made, and the reasons for those changes.
- Incidents: Errors, overrides, and anomalies detected during operation.
Logs must be protected against tampering and retained for the periods required by administrative law and sectoral rules. They should be accessible to internal audit, DPAs, and, where appropriate, to data subjects exercising their rights.
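The categories above can be turned into a concrete record type that also respects data minimization, for example by storing input categories and a hash reference instead of raw personal data that is held elsewhere. The sketch below is one possible shape, not a prescribed format; the field names are assumptions.

```python
import hashlib
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


def fingerprint(values: dict) -> str:
    """Stable hash of the inputs so exact values can later be verified against
    the source system without duplicating personal data in the log."""
    canonical = json.dumps(values, sort_keys=True, ensure_ascii=False)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


@dataclass
class DecisionLogRecord:
    decision_id: str
    model_version: str                       # system identity
    input_categories: list[str]              # e.g. ["income", "household_size"]
    input_fingerprint: str                   # hash instead of raw values
    output: str                              # decision or recommendation
    confidence: float
    thresholds: dict[str, float]
    human_reviewer: Optional[str] = None     # who intervened, if anyone
    human_action: Optional[str] = None       # override, confirmation, escalation
    incident: Optional[str] = None           # errors or anomalies, if any
    retention_until: str = ""                # set from the applicable retention rule
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


record = DecisionLogRecord(
    decision_id="2025-000123",
    model_version="triage-model 1.4.2",
    input_categories=["income", "household_size"],
    input_fingerprint=fingerprint({"income": 34500, "household_size": 2}),
    output="declined",
    confidence=0.87,
    thresholds={"income_threshold": 30000.0},
    retention_until="2031-12-31",
)
```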
From Logs to Audits
Auditability requires more than storage; it requires structure. Public bodies should implement:
- Version control: Clear procedures for model updates, with rollback capabilities and change logs.
- Access controls: Role-based access to logs and system documentation, with immutable audit trails.
- Testing and validation: Pre-deployment and periodic performance testing, including bias and robustness assessments.
- Post-market surveillance: Ongoing monitoring for drift, performance degradation, and unintended effects, as required by the AI Act.
These practices support both internal governance and external oversight. They also enable public bodies to demonstrate compliance with the principle of accountability.
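Immutability of audit trails is usually approximated in practice with append-only storage plus tamper evidence, for example by chaining each entry's hash to the previous one so that any later edit becomes detectable. The following is a minimal sketch of that idea, not a substitute for write-once storage or cryptographically signed logs.

```python
import hashlib
import json


class HashChainedLog:
    """Append-only log where each entry commits to the hash of the previous entry,
    so retroactive edits break the chain and can be detected during audit."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def append(self, payload: dict) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = json.dumps(payload, sort_keys=True, ensure_ascii=False)
        entry_hash = hashlib.sha256((prev_hash + body).encode("utf-8")).hexdigest()
        self.entries.append({"payload": payload, "prev_hash": prev_hash, "hash": entry_hash})

    def verify(self) -> bool:
        prev_hash = "0" * 64
        for entry in self.entries:
            body = json.dumps(entry["payload"], sort_keys=True, ensure_ascii=False)
            expected = hashlib.sha256((prev_hash + body).encode("utf-8")).hexdigest()
            if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
                return False
            prev_hash = entry["hash"]
        return True


log = HashChainedLog()
log.append({"event": "model_release", "version": "1.4.2"})
log.append({"event": "inference", "decision_id": "2025-000123", "output": "declined"})
assert log.verify()
```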
Technical and Organizational Measures
Transparency is implemented through a combination of technical design and organizational governance. The following measures are particularly relevant for public sector AI.
Data Governance
High-quality, representative data is a prerequisite for fair and explainable decisions. Public bodies should:
- Document data sources, collection methods, and cleaning procedures.
- Assess representativeness and potential biases, especially for vulnerable groups.
- Implement data minimization and purpose limitation, ensuring that only necessary data is used.
- Establish procedures for data subject access and rectification.
Model Selection and Validation
Choosing the right model is a transparency decision. Whenever possible, prefer interpretable models. If complex models are necessary, use techniques that provide faithful explanations and validate them with end users. Conduct fairness assessments and stress tests to understand how the model behaves under different conditions.
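Fairness assessments of this kind can start with simple disaggregated metrics: compute the same selection and error rates per group and compare them. A minimal sketch without external libraries follows; the group labels and sample records are illustrative, and deciding which groups are comparable is a legal and domain question, not a coding one.

```python
from collections import defaultdict


def disaggregated_rates(records: list[dict]) -> dict[str, dict]:
    """Per-group selection rate and error rate for a binary decision.

    Each record is assumed to have `group`, `predicted` (0/1) and `actual` (0/1).
    """
    by_group: dict[str, list[dict]] = defaultdict(list)
    for r in records:
        by_group[r["group"]].append(r)

    summary = {}
    for group, rows in by_group.items():
        n = len(rows)
        selected = sum(r["predicted"] for r in rows)
        errors = sum(r["predicted"] != r["actual"] for r in rows)
        summary[group] = {
            "selection_rate": selected / n,
            "error_rate": errors / n,
            "n": n,
        }
    return summary


sample = [
    {"group": "A", "predicted": 1, "actual": 1},
    {"group": "A", "predicted": 0, "actual": 1},
    {"group": "B", "predicted": 1, "actual": 1},
    {"group": "B", "predicted": 1, "actual": 0},
]
print(disaggregated_rates(sample))
```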
User Interface and Communication
Transparency is only effective if users understand it. Notices and explanations should be:
- Written in plain language, avoiding technical jargon.
- Accessible, following WCAG guidelines and offering alternative formats.
- Contextual, delivered at the right moment in the user journey.
- Actionable, clearly indicating how to seek review or correct data.
Training and Accountability
Staff must understand the systems they oversee. Training should cover:
- The legal obligations under GDPR, LED, and the AI Act.
- How to interpret model outputs and explanations.