Transparency Duties: Explaining AI Without Misleading Users
Transparency is not a courtesy; it is a legal duty and a core governance principle that determines whether an AI system is lawful, trustworthy, and deployable at scale. In the European legal order, transparency duties arise from multiple sources and interact in nuanced ways. The General Data Protection Regulation (GDPR) sets a baseline for explaining personal data processing, while the AI Act adds specific disclosure obligations for certain AI systems regardless of whether personal data is involved. Good governance practice, informed by ethics and risk management, goes beyond the minimum to prevent misleading or incomplete explanations that can create legal exposure and reputational harm. This article dissects these duties in practice, distinguishing EU-level requirements from national implementations, and offers concrete examples of notices that are clear, honest, and compliant.
At its core, transparency in AI is about enabling individuals to understand what is happening, why it is happening, and how to exercise their rights. It is also about enabling regulators and counterparties to audit and verify that systems operate as claimed. The concept is not limited to a single document or interface notice; it encompasses the entire lifecycle of information provision, from pre-contractual disclosures to real-time explanations during use, and post-use explanations such as meaningful information about the logic involved in automated decision-making. The following sections unpack the legal foundations, practical obligations, and common pitfalls, with a focus on personal data processing and automated decisions, and extend the lens to the AI Act’s transparency requirements for high-risk and prohibited practices.
Legal Foundations of Transparency in European Law
Transparency duties in AI-related processing are anchored primarily in the GDPR, but they are complemented by the AI Act, the Digital Services Act (DSA), the Digital Markets Act (DMA), and national public law principles such as good administration. The GDPR’s Articles 12 to 14 set the baseline for providing information to data subjects in a concise, transparent, intelligible, and easily accessible form, using clear and plain language. Article 13 requires information to be provided when personal data are collected directly from the data subject, while Article 14 covers personal data obtained from other sources, such as third parties or publicly available records. Article 15 grants the right of access, enabling individuals to obtain confirmation of processing and meaningful information about the logic involved, as well as the significance and envisaged consequences of such processing.
Article 22 GDPR establishes a qualified right not to be subject to solely automated decisions that produce legal effects or similarly significantly affect a person. In effect, Article 22(1) prohibits such decision-making unless one of the exceptions in Article 22(2) applies (explicit consent, necessity for a contract, or authorization under Union or Member State law). Where such processing lawfully occurs, the controller must implement suitable measures to safeguard rights and freedoms, including the right to obtain human intervention, to express one’s point of view, and to contest the decision. The duty to explain is not a standalone “right to an explanation” in the GDPR text itself, but it emerges from the interplay of Articles 13–15 and 22, and from the accountability principle in Article 5(2). The European Data Protection Board (EDPB) has clarified that meaningful information about the logic involved should be provided, though not necessarily in a way that reveals proprietary algorithms or trade secrets.
At the EU level, the AI Act harmonizes specific transparency obligations for certain AI systems, independent of whether personal data is processed. For high-risk AI systems, it mandates documentation, conformity assessments, and user information duties. For limited-risk systems such as emotion recognition or biometric categorization, and for systems interacting with individuals (like chatbots), the AI Act imposes disclosure obligations so that individuals know they are interacting with an AI. For prohibited practices (e.g., subliminal techniques, untargeted scraping for facial recognition, emotion recognition in workplaces and educational institutions, social scoring), transparency is not enough; the practice itself is banned. The DSA and DMA also impose transparency duties on platforms and gatekeepers, which can intersect with AI-driven content curation and recommender systems. National implementations may add procedural specifics, such as the role of data protection authorities (DPAs) and sectoral regulators, and the remedies available to individuals.
Transparency Under GDPR: From Notices to Meaningful Information
GDPR transparency is not a one-off exercise. It begins with pre-contractual information where AI is used to evaluate individuals (e.g., recruitment screening, credit scoring, health risk assessment). Controllers must provide the information listed in Article 13 at the time of collection, and the information listed in Article 14 within a reasonable period after obtaining the data, at the latest within one month or at first communication with the data subject. In practice, this means that if an AI system processes personal data to produce an output that affects an individual, the individual must be informed about the categories of personal data used, the purposes, the recipients, retention periods, the existence of profiling or automated decision-making, and the logic involved, as well as how to exercise rights. The information must be concise and intelligible; burying key details in long privacy policies is not compliant if the relevant information is not prominent or understandable at the point of use.
When an AI system makes or supports decisions that are solely automated and legally or significantly impactful, the GDPR requires specific safeguards. The controller must inform the data subject about the existence of such processing and the means to exercise rights under Article 22. In addition, upon request, the controller must provide meaningful information about the logic involved (Article 15(1)(h)). This is where many organizations struggle: how to explain a model without revealing IP, and how to do so in a way that is accurate and not misleading. The EDPB guidance suggests a layered approach: provide a clear high-level explanation of the main factors and decision drivers, indicate the role of automation, and explain the consequences for the individual. Avoid implying certainty or infallibility where the system is probabilistic. Avoid overstating the system’s capabilities or the independence of human review if it is limited or rubber-stamped.
Transparency is also linked to the principle of fairness. Misleading explanations can be unfair because they distort the individual’s understanding and ability to challenge outcomes. For example, stating that “the decision was made by a neutral algorithm” when the model relies on proxies for protected characteristics, or claiming that “human review is always applied” when it is cursory, can mislead and violate fairness. Controllers should be prepared to explain, in non-technical terms, what data categories were used, which factors were most influential, and whether the decision was based on probabilistic classification rather than deterministic facts. They should also inform individuals about the possibility and process to contest decisions and seek human intervention.
Information Requirements at the Point of Collection
Articles 13 and 14 require specific items, including the identity of the controller, purposes and legal basis, legitimate interests (if applicable), recipients or categories of recipients, retention periods, and the existence of profiling or automated decision-making. When AI is used to infer sensitive data (e.g., health status, political opinions) from non-sensitive data, controllers must be careful. Article 9 GDPR prohibits processing of special categories of personal data unless an exception applies. Inference of sensitive data using AI can be considered processing of special categories, triggering strict requirements. Transparency notices should therefore disclose when such inferences occur and the legal basis relied upon, if any.
Practical tip: if an AI system uses third-party models or APIs (e.g., cloud-based language models or biometric analysis), the notice must identify the categories of recipients, including the third-party provider, and explain cross-border data transfers and safeguards. If the provider acts as a processor, the controller must ensure a contract under Article 28 and inform the data subject accordingly. If the AI system is embedded in a broader service, the notice should be contextual and appear before the user engages the feature, not hidden in a separate menu.
Meaningful Information About the Logic Involved
Article 15(1)(h) grants the right to obtain “meaningful information about the logic involved” in automated decisions. This is not a right to receive source code or detailed model parameters. Rather, it is a right to an explanation that allows the individual to understand the decision’s rationale and to challenge it effectively. In practice, this includes:
- Inputs and categories: what types of data were used (e.g., employment history, credit repayment records, health indicators).
- Key decision drivers: which factors were most influential (e.g., number of late payments, distance to workplace, self-reported symptoms).
- Decision type: classification (e.g., “high risk” vs “low risk”), regression (e.g., predicted premium), or ranking (e.g., priority order).
- Uncertainty and thresholds: whether the decision is probabilistic and what confidence thresholds were used.
- Human involvement: the nature and extent of human review and how to request it.
- Consequences: what the decision means for the individual and what actions they can take.
Organizations should prepare standardized explanation templates that can be customized per use case. These templates should be validated by legal, compliance, and technical teams to ensure accuracy and avoid misleading statements. It is prudent to document the explanation process itself, including how explanations are generated and reviewed, to demonstrate accountability.
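To make this concrete, the following is a minimal sketch of what such a standardized explanation template could look like in code. It assumes a Python-based decision service; all class, field, and method names are illustrative and not drawn from any particular framework or provider.

```python
# Illustrative sketch only: a standardized explanation record covering the
# elements listed above. All class, field, and method names are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class DecisionExplanation:
    decision_id: str
    decision_type: str          # e.g., "classification: high risk / low risk"
    data_categories: List[str]  # e.g., ["credit repayment records", "income"]
    key_drivers: List[str]      # plain-language factors, most influential first
    is_probabilistic: bool
    reliability_note: str       # e.g., "correct in about 85% of recent test cases"
    human_review: str           # nature and extent of human involvement
    consequences: str           # what the decision means for the individual
    how_to_contest: str         # contact point, deadline, and process

    def to_notice(self) -> str:
        """Render the record as a plain-language notice for the data subject."""
        certainty = "probabilistic, not certain" if self.is_probabilistic else "deterministic"
        return (
            f"Decision type: {self.decision_type} ({certainty}).\n"
            f"Data used: {', '.join(self.data_categories)}.\n"
            f"Most influential factors: {', '.join(self.key_drivers)}.\n"
            f"Reliability: {self.reliability_note}.\n"
            f"Human review: {self.human_review}.\n"
            f"What this means for you: {self.consequences}.\n"
            f"How to contest: {self.how_to_contest}."
        )
```

Keeping the explanation in a single structured record of this kind makes it easier for legal, compliance, and technical teams to review one source of truth and to audit what was actually shown to the individual.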
Prohibition and Exceptions for Solely Automated Decisions
As noted, Article 22(1) prohibits solely automated decisions with legal or similarly significant effects unless an exception applies. The concept of “similarly significant effects” is context-dependent. A rejection of a job application may be significant; a minor discount in a loyalty program likely is not. DPAs provide sector-specific guidance: the EDPB (building on the Article 29 Working Party guidelines) and national authorities such as the UK ICO have indicated that decisions affecting access to essential services, employment opportunities, or financial standing are likely to be covered. Controllers must carefully map AI use cases to this threshold. If an exception applies (consent, contract, or law), the controller must still implement safeguards and provide transparency. If no exception applies, the decision cannot be solely automated; meaningful human intervention must be built into the workflow.
AI Act Transparency Obligations: Practical Implications
The AI Act introduces harmonized transparency duties that apply irrespective of personal data processing. It distinguishes between prohibited practices, high-risk AI systems, and limited-risk systems with specific transparency obligations. The regulation is directly applicable but will be complemented by national implementation measures and guidance. Professionals should anticipate divergent enforcement practices across Member States initially, with harmonization over time through the European Artificial Intelligence Board (EAIB).
Prohibited Practices and Transparency
The AI Act bans certain practices, including:
- Subliminal techniques that materially distort behavior in a manner that causes or is reasonably likely to cause significant harm.
- Exploitation of vulnerabilities of specific groups.
- Untargeted scraping of facial images from the internet or CCTV to build or expand facial recognition databases.
- Emotion recognition in workplaces and educational institutions (with narrow exceptions for safety or medical purposes).
- Biometric categorization to infer sensitive attributes such as race, political opinions, or sexual orientation (with a narrow carve-out for labeling or filtering lawfully acquired biometric datasets, including in law enforcement).
- Individual predictive policing.
- Real-time remote biometric identification in publicly accessible spaces for law enforcement (banned except in narrowly defined situations, subject to strict safeguards and prior authorization).
Transparency is not a cure for prohibited practices. Organizations must conduct a legality assessment before deploying any AI that might fall within these categories. Even if a system is not prohibited, using emotion recognition or biometric categorization in sensitive contexts will attract heightened scrutiny and specific transparency duties.
High-Risk AI Systems: User Information and Documentation
High-risk AI systems (listed in Annex III and subject to conformity assessment) must meet stringent requirements, including risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity. The transparency duty in Article 13 AI Act requires that the system is designed so that deployers can interpret its output and use it appropriately, and that the instructions for use contain the information needed for safe and informed use. This includes:
- The capabilities and limitations of the system, including foreseeable misuse.
- Expected level of accuracy, including metrics and test results.
- Known or foreseeable circumstances that may lead to risks or errors.
- Human oversight measures, including how to intervene or override.
- Information required to interpret the system’s output and to assess its reliability.
For providers, this means technical documentation must contain detailed explanations of the system’s design, training data characteristics, validation methods, and performance metrics. For deployers, it means ensuring that staff are informed and trained, and that end-users receive appropriate notices. In practice, deployers should integrate the provider’s instructions into their own user-facing disclosures, especially where the AI system is embedded in a broader service. If the deployer modifies the intended use or re-trains the model, transparency obligations must be updated accordingly.
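As a concrete illustration, a deployer might run a simple completeness check against the provider’s instructions for use before go-live. The sketch below is illustrative only; the field names are hypothetical and do not reproduce the AI Act’s wording or any provider’s documentation.

```python
# Illustrative sketch only: a deployer-side completeness check on the
# provider's instructions for use. Field names are hypothetical.
REQUIRED_ITEMS = [
    "intended_purpose",
    "capabilities_and_limitations",
    "accuracy_metrics",
    "known_risk_conditions",
    "human_oversight_measures",
    "output_interpretation_guidance",
]

def missing_items(instructions: dict) -> list:
    """Return required items that are absent or empty in the instructions."""
    return [item for item in REQUIRED_ITEMS if not instructions.get(item)]

provider_instructions = {
    "intended_purpose": "Triage of incoming insurance claims",
    "capabilities_and_limitations": "Not validated for commercial policies",
    "accuracy_metrics": {"f1_score": 0.87, "test_set": "2023 hold-out sample"},
    "human_oversight_measures": "",  # empty: flagged below
}

gaps = missing_items(provider_instructions)
if gaps:
    print("Do not deploy until the provider supplies:", ", ".join(gaps))
```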
Limited-Risk Systems: Chatbots, Emotion Recognition, and Biometric Categorization
The AI Act mandates transparency for systems that interact with humans or classify emotions or biometric attributes. For example, when a user interacts with a chatbot, they must be informed that they are communicating with an AI system. Similarly, where emotion recognition or biometric categorization systems are used, individuals must be informed unless the context is clearly safety-related (e.g., cockpit monitoring). These duties are straightforward but easy to get wrong. Stating “you are chatting with an assistant” is insufficient if the assistant is generative AI that can fabricate information; the notice should make the AI nature explicit and, where appropriate, caution that answers may be inaccurate. In biometric contexts, notices should specify which attributes are inferred, for what purpose, and for how long data is retained, and provide contact details for exercising rights.
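A minimal sketch of how such a disclosure could be surfaced in practice is shown below, assuming a Python-based chat service; the wording and function names are illustrative, not a prescribed formula.

```python
# Illustrative sketch only: surfacing the AI disclosure before the first
# exchange in a chat interface. The wording is an example, not a legal template.
AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human. "
    "Answers are generated automatically and may be inaccurate; please verify "
    "important information. You can ask to be connected to a person at any time."
)

def start_chat_session(user_name: str) -> list:
    """Open a transcript with the disclosure shown before any assistant message."""
    return [
        {"role": "notice", "content": AI_DISCLOSURE},
        {"role": "assistant", "content": f"Hello {user_name}, how can I help you today?"},
    ]

print(start_chat_session("Maria")[0]["content"])
```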
Good Governance Practice: Making Explanations Honest and Useful
Legal compliance sets the floor; good governance sets the ceiling. In AI, honest explanations are a risk management tool. Misleading users about how a system works can lead to DPA enforcement, civil claims, contract disputes, and loss of trust. Governance practices that support transparency include:
- Layered notices: provide immediate, concise information at the point of interaction, with links to more detailed explanations for those who need them.
- Plain language: avoid jargon; use concrete examples to illustrate how decisions are made.
- Calibrated claims: describe uncertainty and limitations; do not present probabilistic outputs as certainties.
- Human oversight disclosure: explain the nature and depth of human review, and how to trigger it.
- Explainability by design: build logging and explanation features into the system, so that explanations can be generated consistently and audited (see the sketch after this list).
- Trade secret protection: document what cannot be disclosed and the legal basis; provide alternative meaningful information that preserves IP while enabling understanding and contestation.
- Periodic review: update explanations as models evolve, data shifts, or new risks emerge.
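The sketch below illustrates the “explainability by design” point: each automated decision is logged together with the explanation actually shown to the individual, so explanations can be reproduced and audited later. It assumes a Python-based service; all names and fields are hypothetical.

```python
# Illustrative sketch only: logging each automated decision together with the
# explanation shown to the individual, so it can be reproduced and audited.
import json
import logging
from datetime import datetime, timezone
from typing import Optional

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("decision_audit")

def log_decision(decision_id: str, outcome: str, explanation: dict,
                 model_version: str, reviewer: Optional[str] = None) -> None:
    """Write an auditable record of the decision and the explanation given."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision_id": decision_id,
        "outcome": outcome,
        "model_version": model_version,
        "explanation_shown_to_user": explanation,
        "human_reviewer": reviewer,  # None means no human was involved
    }
    audit_log.info(json.dumps(record))

log_decision(
    decision_id="APP-2024-0042",
    outcome="declined",
    explanation={"key_drivers": ["repayment history", "current debt level"]},
    model_version="credit-risk-3.2",
)
```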
Organizations should also consider the interplay with competition law and consumer protection. Overstating the capabilities of an AI system can be considered a misleading commercial practice under the Unfair Commercial Practices Directive, enforced by national consumer authorities. In the digital services context, platforms must disclose information about recommender systems and content moderation under the DSA. These obligations are complementary to GDPR and AI Act duties and should be harmonized in a single, coherent information strategy.
Managing the Tension Between Transparency and IP
Providers often worry that explaining the logic will reveal trade secrets. European law recognizes this tension. The GDPR does not require disclosure of source code or detailed model parameters. The AI Act protects confidential information during conformity assessments. The key is to provide a functional explanation that enables understanding and contestation without exposing proprietary design. Techniques include:
- Feature importance summaries (e.g., “the top three factors were X, Y, Z”), without revealing model weights.
- Counterfactual explanations (e.g., “if Z had been lower, the outcome would have been different”), framed in non-technical terms.
- Generalized descriptions of training data characteristics (e.g., “trained on historical claims data from 2015–2022”), avoiding disclosure of specific datasets.
- Clear statements about limitations and known failure modes.
When refusing to disclose certain details on IP grounds, controllers should explain why the information cannot be provided and offer alternative meaningful information. This approach should be documented and consistently applied to demonstrate fairness and accountability.
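To illustrate how feature importance summaries and counterfactual explanations can be produced without disclosing model internals, the sketch below uses a deliberately simple linear scoring model; the weights, factors, and threshold are invented for illustration and are not drawn from any real system.

```python
# Illustrative sketch only: IP-preserving explanations for a toy linear model.
# Only ranked factor names and a counterfactual statement are shown to the
# individual; the weights themselves never leave the system.
WEIGHTS = {"late_payments": -0.8, "income_stability": 0.5, "debt_ratio": -0.6}
THRESHOLD = 0.0  # scores above this threshold are approved

def score(applicant: dict) -> float:
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def top_factors(applicant: dict, n: int = 3) -> list:
    """Rank factors by the size of their contribution; return names only."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sorted(contributions, key=lambda f: abs(contributions[f]), reverse=True)[:n]

def counterfactual(applicant: dict, factor: str, new_value: float) -> str:
    """Plain-language statement of whether a change would alter the outcome."""
    changed = {**applicant, factor: new_value}
    before = "approved" if score(applicant) > THRESHOLD else "declined"
    after = "approved" if score(changed) > THRESHOLD else "declined"
    if before != after:
        return f"If {factor} had been {new_value}, the outcome would have been {after}."
    return f"Changing {factor} to {new_value} would not have changed the outcome."

applicant = {"late_payments": 2.0, "income_stability": 1.0, "debt_ratio": 0.5}
print("Most influential factors:", ", ".join(top_factors(applicant)))
print(counterfactual(applicant, "late_payments", 0.0))
```

The design choice here is that the individual receives the ranked factor names and the counterfactual statement in plain language, while the model parameters remain confidential, which is the balance the techniques above aim to strike.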
Examples of Clear, Honest Notices
Below are practical examples of notices that meet GDPR and AI Act transparency duties. They are designed to be concise, intelligible, and honest. They avoid misleading claims and provide actionable information.
Example 1: Automated Credit Decision (GDPR Article 22)
Context: An online lender uses an automated system to evaluate loan applications. The decision is solely automated and has legal effects (approval or denial).
Automated Decision Notice
We use an automated system to evaluate your loan application. This means your application is decided by a computer without human review, unless you request human intervention or contest the decision.
What we use: Your application data, credit history, income information, and publicly available data.
Key factors: The most important factors are your repayment history, current debt level, and income stability.
Accuracy and limits: Our system is accurate in about 85% of cases based on recent tests. It may not capture unusual circumstances. The decision is probabilistic, not certain.
Your rights: You have the right to obtain human review, express your point of view, and contest the decision. You also have the right to access the information we used and to correct inaccurate data.
How to proceed: Contact us at [link/email/phone] within 15 days to request human intervention or to provide additional information.
Example 2: AI-Assisted Recruitment Screening (GDPR Articles 13 and 22)
Context: An employer uses an AI tool to
