Responsible AI Communication: Avoiding Overclaims and Underwarnings
Communicating the capabilities and limitations of artificial intelligence systems within the European Union has evolved from a marketing exercise into a core compliance function. The era of aspirational branding and abstract promises is closing, replaced by a legal framework that demands precision, transparency, and evidence. For professionals deploying AI in robotics, biotechnology, data systems, and public services, the language used in technical documentation, user interfaces, privacy policies, and public statements is now a direct subject of regulatory scrutiny. Mischaracterizing a system’s proficiency or failing to adequately warn of its risks is no longer merely a reputational liability; it constitutes a breach of the AI Act, potentially triggering significant financial penalties and operational restrictions. This article analyzes the mechanisms for achieving responsible AI communication, focusing on the interplay between the EU’s harmonized rules and the practical realities of system design and deployment.
The Regulatory Architecture of AI Communication
The European Union’s approach to regulating AI communication is anchored in the Artificial Intelligence Act (AI Act). This regulation establishes a legal framework that directly governs how providers and deployers present their systems to end-users. The core objective is to prevent a “responsibility gap” where users are encouraged to place trust in systems that are not designed to handle the tasks they are purported to perform, or where the risks of doing so are obscured. The AI Act does not regulate speech in a vacuum; it ties communication obligations to the risk classification of the AI system itself. High-risk AI systems face the most stringent duties, but even systems classified as limited or minimal risk are subject to transparency requirements designed to protect the autonomy and decision-making of natural persons.
It is crucial to understand that while the AI Act provides the overarching European standard, its implementation relies on national authorities. The regulatory landscape is therefore a composite of EU-level harmonization and national enforcement mechanisms. The European Commission has established a European AI Office to coordinate oversight, but market surveillance authorities in individual Member States—such as the Federal Office for Information Security (BSI) in Germany, the Commission Nationale de l’Informatique et des Libertés (CNIL) in France, or the Data Protection Commission (DPC) in Ireland—will be the primary entities investigating non-compliant communication. Consequently, a statement made in a public press release or a user manual may be judged under the same legal framework across the EU, but the investigation and penalty assessment will occur within the specific legal culture of the Member State where the harm or violation was reported.
Communication as a Component of Conformity
In the context of high-risk AI systems, communication is not an optional add-on; it is a prerequisite for market access. Under the AI Act, a provider cannot place a high-risk system on the market or put it into service unless it has undergone a conformity assessment and ensures compliance with a long list of obligations. Many of these obligations, particularly those regarding risk management and data governance, are invisible to the end-user. However, the obligations regarding instructions for use and transparency are the visible interface of compliance. The law effectively treats the documentation and user-facing communication as a safety component. If the instructions are unclear, incomplete, or misleading, the system is considered non-compliant, regardless of its technical performance.
This creates a distinct challenge for engineering teams. The technical documentation required by Annex IV of the AI Act must be precise, but the instructions for use must be understandable to the intended user. The regulatory expectation is that the provider has analyzed the reasonably foreseeable misuse of the system and communicated that risk clearly. For example, if a biometric identification system is marketed as “real-time” but requires specific lighting conditions or database latency that makes true real-time operation impossible in 20% of scenarios, the marketing and technical documentation must reflect that limitation. To omit it is to mislead the market surveillance authority and the user, creating a dual legal exposure under the AI Act and potentially unfair commercial practices legislation.
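Where such operating conditions are documented, they can also be encoded so that the deployed system itself warns when it is running outside its validated envelope. The sketch below is a minimal illustration in Python; the condition names, ranges, and units are hypothetical and not drawn from any regulatory text.

```python
from dataclasses import dataclass

@dataclass
class OperatingCondition:
    """One validated operating condition taken from the instructions for use."""
    name: str
    minimum: float
    maximum: float
    unit: str

# Hypothetical validated envelope for the biometric example above.
VALIDATED_CONDITIONS = [
    OperatingCondition("scene_illuminance", 300.0, 10_000.0, "lux"),
    OperatingCondition("database_latency", 0.0, 250.0, "ms"),
]

def check_operating_conditions(measured: dict[str, float]) -> list[str]:
    """Return user-facing warnings for conditions outside the validated range."""
    warnings = []
    for cond in VALIDATED_CONDITIONS:
        value = measured.get(cond.name)
        if value is None or not (cond.minimum <= value <= cond.maximum):
            warnings.append(
                f"'{cond.name}' is outside the validated range "
                f"({cond.minimum}-{cond.maximum} {cond.unit}); "
                "identification accuracy is not guaranteed."
            )
    return warnings

if __name__ == "__main__":
    for w in check_operating_conditions({"scene_illuminance": 40.0,
                                         "database_latency": 600.0}):
        print("WARNING:", w)
```

Surfacing these checks at run time keeps the documented limitation and the user's actual experience of the system aligned, rather than leaving the caveat buried in a manual.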
Distinction Between EU Regulation and National Implementation
While the AI Act harmonizes the definition of obligations, national laws govern the procedural aspects of enforcement and redress. A provider operating across Europe must be aware that the “look and feel” of compliance may differ. For instance, in Germany, the Verwaltungsverfahrensgesetz (the federal administrative procedure act) will govern how the BSI investigates a violation of the AI Act. In contrast, in Spain, the Agencia Española de Protección de Datos (AEPD) may integrate these checks into its existing robust data protection enforcement framework.
Furthermore, the AI Act explicitly states that it is without prejudice to existing EU legislation on consumer protection and data protection. This means that a communication strategy must be compliant with the AI Act, the GDPR, and the Unfair Commercial Practices Directive (UCPD). A claim that an AI system “guarantees” a medical diagnosis, for example, violates the UCPD’s prohibition on misleading commercial practices, violates the AI Act’s requirement to communicate limitations accurately, and potentially violates the GDPR if it implies a level of data processing accuracy that does not exist. The interplay requires a holistic view of communication law, not a siloed focus on AI regulation alone.
Defining Overclaims: The Prohibition of Illusory Capabilities
Overclaiming occurs when the description of an AI system’s capabilities exceeds its actual technical and functional reality. In the regulatory context, this is not viewed as “puffery” (exaggerated praise not meant to be taken literally) but as a material misrepresentation. The AI Act and associated consumer laws treat the capabilities of a system as a material fact upon which a user bases their decision to use or deploy it. When an AI provider claims that a system is “autonomous,” “unbiased,” or “error-proof” without qualification, they are creating a legal expectation of performance that the system likely cannot meet.
The risk of overclaiming is particularly acute in high-stakes domains such as healthcare, finance, and critical infrastructure. A provider of an AI tool for recruitment might claim that the tool “identifies the best candidates.” This is a subjective claim, but if the underlying algorithm relies on historical data that reflects past discriminatory hiring practices, the tool is actually automating exclusion. To claim it identifies the “best” candidates is a factual misrepresentation of the system’s function. Under the AI Act, this could affect the classification of the system or the validity of the risk management system, which must account for such biases.
The “Human-in-the-Loop” Fallacy
A common area of overclaiming is the presentation of “human-in-the-loop” or “human oversight” as a mitigating factor for high-risk AI. Providers often argue that because a human is monitoring the system, the risks are reduced, allowing for less stringent documentation or marketing claims. However, the AI Act sets a high bar for what constitutes effective human oversight. The human operator must have the competence, training, and authority to intervene or override the system.
If marketing materials or operational manuals suggest that a human supervisor can effectively catch every error made by a high-speed algorithmic trading system or a real-time medical monitoring device, this is an overclaim. It creates a false sense of security. The regulatory reality is that human oversight is a risk-mitigation measure of last resort, not a guarantee of safety. Responsible communication requires describing the oversight mechanism realistically: “The system provides decision support; the human operator retains full responsibility for the final decision and must verify the output against [specific criteria].”
Comparative Approaches to Commercial Claims
Across Europe, national authorities are increasingly aggressive in policing AI claims. The UK’s Competition and Markets Authority (CMA), although the UK is no longer part of the EU, provides a useful benchmark for the direction of travel. The CMA has published guidance on the use of “AI” in consumer products, signaling that simply labeling a product as “AI-powered” when it uses basic automation can be misleading. In the EU, the French Directorate-General for Competition, Consumer Affairs and Fraud Control (DGCCRF) has historically been rigorous in checking technical claims against product reality.
For a pan-European operator, the safest route is to apply the strictest interpretation of claims across the jurisdictions in which it operates. Claims of “intelligence” must be qualified. Claims of “predictive power” must be accompanied by accuracy metrics and the context in which those metrics were derived. The burden of proof lies with the provider to demonstrate that the claim is accurate. In the event of a dispute, the provider cannot rely on the user’s lack of technical understanding to justify vague or inflated claims.
Underwarnings: The Liability of Omission
While overclaiming involves asserting what the system can do, underwarning involves failing to disclose what it cannot do—or what it might do unexpectedly. Under the AI Act, underwarning is a specific violation of transparency obligations. The legislature recognized that AI systems, particularly those based on machine learning, are inherently probabilistic and non-deterministic. They operate with a margin of error and are susceptible to “hallucinations” (confident but incorrect outputs) or “drift” (degradation of performance over time).
Underwarning creates a “false negative” scenario for the user. The user believes the environment is safe or the output is reliable because no warning to the contrary has been given. This is distinct from a technical failure; it is a failure of the provider’s duty to inform. For example, a generative AI system used to draft legal contracts might be perfectly capable of drafting standard clauses but should not be used to interpret complex jurisdictional disputes. If the provider fails to explicitly warn against this specific use case, they are underwarning.
Contextualizing Risks and Probabilities
The AI Act requires that the instructions for use set out the system’s intended purpose and, as applicable, the circumstances of use that may lead to risks. This implies that the provider must anticipate misuse. Underwarning often stems from a reluctance to highlight the system’s brittleness. For instance, in computer vision systems used for quality control in manufacturing, the system may perform well on standard defects but fail catastrophically on novel defects. A responsible communication strategy would explicitly state: “The system is trained on defect types A, B, and C. It has not been validated for defect types D or E, and may fail to detect them.”
Failure to provide this granularity is underwarning. It deprives the deployer of the information needed to design a safe workflow. In the context of biotech, where AI might assist in protein folding or drug interaction prediction, the system must clearly delineate the boundaries of its simulation capabilities. A statement such as “This tool predicts interactions” is insufficient. A better statement is: “This tool predicts interactions based on [specific dataset]; it has not been validated for [specific class of molecules] and should not be used as a sole source for clinical trial design.”
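One way to keep such statements consistent with the system they describe is to store the validated and non-validated scopes as structured data and generate the user-facing wording from that single source. The following sketch assumes a hypothetical record format; the field names and defect-type entries are illustrative only.

```python
# A minimal sketch of a machine-readable "limitations" record from which
# user-facing warnings in the instructions for use can be generated.
LIMITATIONS = {
    "validated_scope": ["defect type A", "defect type B", "defect type C"],
    "non_validated_scope": ["defect type D", "defect type E"],
    "prohibited_uses": ["sole source for release decisions without human review"],
}

def render_limitation_statement(limits: dict) -> str:
    """Turn the structured record into the explicit wording discussed above."""
    validated = ", ".join(limits["validated_scope"])
    not_validated = ", ".join(limits["non_validated_scope"])
    prohibited = "; ".join(limits["prohibited_uses"])
    return (
        f"The system is trained and validated on: {validated}. "
        f"It has not been validated for: {not_validated}, and may fail to detect them. "
        f"Do not use it as a {prohibited}."
    )

print(render_limitation_statement(LIMITATIONS))
```

Generating the wording from one record reduces the risk that marketing copy, interface text, and the instructions for use drift apart as the system evolves.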
The Intersection with GDPR and Right to Explanation
Underwarning often overlaps with violations of the GDPR, specifically regarding the “right to be informed” and the “right not to be subject to a decision based solely on automated processing.” If an AI system makes a decision that has legal or significant effects—for example, a system that triages loan applications—the user must be informed that they are interacting with an AI system and of the logic involved, as well as their right to challenge the decision. Underwarning here means failing to make these rights and the system’s nature prominent and accessible.
European data protection authorities have fined companies for opaque algorithmic decision-making. The AI Act reinforces these obligations. A deployer cannot hide an AI system behind a “human” interface if the decision is automated. The communication must be explicit: “This eligibility check is performed by an automated system. A human review is available upon request.” Underwarning by obscuring the automated nature of the process is a direct violation of the transparency principles shared by both the GDPR and the AI Act.
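Where the disclosure is rendered by software, for example in a web form or an API response, it helps to treat the required wording as data rather than ad hoc interface copy. The snippet below is a hedged sketch around a hypothetical eligibility-check response; it is not a template mandated by the GDPR or the AI Act.

```python
# Hypothetical disclosure attached to an automated eligibility decision,
# making the automated nature and the review route explicit to the user.
AUTOMATED_DECISION_NOTICE = (
    "This eligibility check is performed by an automated system. "
    "You have the right to request human review of the outcome and to "
    "contest the decision. Contact: {contact}."
)

def build_decision_response(outcome: str, contact: str) -> dict:
    """Bundle the decision with the transparency notice so it cannot be omitted."""
    return {
        "outcome": outcome,
        "automated": True,
        "notice": AUTOMATED_DECISION_NOTICE.format(contact=contact),
    }

print(build_decision_response("declined", "reviews@example.org"))
```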
Operationalizing Responsible Communication
To translate these legal principles into practice, organizations must move beyond legal review of marketing copy. Responsible communication requires an operational framework that integrates legal compliance with technical reality. This involves the entire lifecycle of the AI system, from the initial design of the user interface to the training of customer support staff.
The AI Act mandates that providers establish a risk management system that is a continuous iterative process. Communication strategies must be treated as part of this system. As the AI system is updated, or as new risks are identified through post-market monitoring, the communication materials (including the instructions for use and the information supplied to users) must be updated accordingly. This is not a static compliance task; it is a dynamic governance requirement.
Technical Documentation and the Instructions for Use
The instructions for use are the primary vehicle for compliance. Under Article 13 of the AI Act, these instructions must contain, at a minimum, the intended purpose; the level of accuracy, robustness, and cybersecurity; and the characteristics, capabilities, and limitations of performance. Translating this into user-friendly language is a significant challenge.
Consider the requirement to disclose limitations. A technical specification might list “False Positive Rate: 0.5%.” The instructions for use must translate this into operational reality: “The system may incorrectly flag 1 in 200 legitimate transactions as fraudulent. You must have a process to review these flagged transactions.” This translation from metric to operational instruction is the essence of responsible communication. It bridges the gap between the developer’s understanding and the user’s reality.
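This translation can itself be automated so that the operational wording stays aligned with the measured metric. The helper below is a minimal sketch assuming a single false positive rate; the rounding and phrasing would still need legal and usability review.

```python
def false_positive_statement(fpr: float, subject: str = "legitimate transactions") -> str:
    """Translate a false positive rate into a plain-language instruction.

    For example, 0.005 (0.5%) becomes roughly "1 in 200".
    """
    if not 0.0 < fpr < 1.0:
        raise ValueError("fpr must be a probability strictly between 0 and 1")
    one_in_n = round(1.0 / fpr)
    return (
        f"The system may incorrectly flag about 1 in {one_in_n} {subject} "
        "as fraudulent. You must have a process to review these flagged cases."
    )

print(false_positive_statement(0.005))
# The system may incorrectly flag about 1 in 200 legitimate transactions ...
```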
Furthermore, for high-risk systems, the instructions must describe the human oversight measures, including the competence and training required of the persons exercising that oversight. This is a direct instruction on communication: the provider must tell the user what kind of person is qualified to supervise the AI. If the provider suggests that a layperson can supervise a complex medical AI, they are underwarning regarding the competence required, potentially exposing the deployer to liability for improper use.
Managing Generative AI and “Hallucinations”
Generative AI, much of which is built on general-purpose AI (GPAI) models, introduces specific communication challenges due to its tendency to hallucinate. The AI Act imposes transparency obligations on GPAI providers and requires that AI-generated content be disclosed as such. For professional users (B2B), however, disclosure alone is insufficient.
When a law firm uses a Large Language Model (LLM) to summarize case law, the risk is not just that the user knows it is AI-generated; the risk is that the AI might invent case law (hallucinate). Responsible communication requires explicit warnings embedded in the workflow. For example, the interface should include persistent reminders: “Always verify citations against official legal databases.” This is not just good advice; it is a mitigation measure against the specific risk of the technology. Failure to embed such warnings in the user interface or API documentation constitutes underwarning.
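At the implementation level, such a reminder can be attached to every generated answer rather than left to the documentation. The wrapper below is a sketch under obvious assumptions: generate_summary stands in for whatever model call the deployer actually uses, and the reminder text is illustrative.

```python
VERIFICATION_REMINDER = (
    "AI-generated content. Citations may be inaccurate or invented; "
    "always verify them against official legal databases before relying on them."
)

def generate_summary(prompt: str) -> str:
    """Placeholder for the deployer's actual LLM call."""
    return f"[model output for: {prompt}]"

def summarize_with_warning(prompt: str) -> dict:
    """Return the model output together with the persistent risk reminder."""
    return {
        "summary": generate_summary(prompt),
        "warning": VERIFICATION_REMINDER,
        "ai_generated": True,
    }

result = summarize_with_warning("Summarize recent case law on data portability.")
print(result["summary"])
print(result["warning"])
```

Returning the warning as part of the payload, rather than as static interface text, makes it harder for downstream integrations to strip it out silently.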
Organizations deploying these models must also be careful not to overclaim the security of the data processed. Marketing materials often claim “enterprise-grade security,” but if the model is a public cloud instance where data is used for retraining (unless explicitly opted out), the claim may be misleading. The communication regarding data usage must be precise and aligned with the privacy policy.
Training and Internal Governance
Responsible communication extends to the internal training of staff. Sales and marketing teams must be educated on the specific limitations of the AI systems they are selling. They must understand that claiming “bias-free” is prohibited and that claiming “100% accuracy” is legally indefensible. Organizations should implement “guardrails” for sales claims, perhaps a checklist of prohibited terms and required qualifiers.
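A guardrail of this kind can be as simple as a linter that scans outgoing copy for prohibited or qualification-requiring phrases before publication. The term lists below are illustrative assumptions to be owned and maintained by the legal team, not a regulatory vocabulary.

```python
import re

# Hypothetical term lists; legal and compliance teams own these.
PROHIBITED = ["bias-free", "100% accuracy", "error-proof", "guarantees a diagnosis"]
NEEDS_QUALIFIER = ["autonomous", "real-time", "unbiased", "predictive"]

def review_copy(text: str) -> list[str]:
    """Flag prohibited claims and claims that require metrics or context."""
    findings = []
    lowered = text.lower()
    for term in PROHIBITED:
        if term in lowered:
            findings.append(f"PROHIBITED claim: '{term}'")
    for term in NEEDS_QUALIFIER:
        if re.search(rf"\b{re.escape(term)}\b", lowered):
            findings.append(f"Claim '{term}' requires accuracy metrics and context")
    return findings

draft = "Our bias-free, real-time screening tool guarantees a diagnosis."
for finding in review_copy(draft):
    print(finding)
```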
Similarly, customer support teams need to be trained on how to handle user reports of AI errors. If a user reports a hallucination or a biased output, the response must be standardized to acknowledge the issue and log it for the post-market monitoring system. A generic “the system is working as intended” response to a valid report of an AI error can be interpreted as a continued underwarning of the system’s limitations.
Monitoring and Redress
The AI Act requires a post-market monitoring system that is based on a plan for the continuous collection of performance data. This data is crucial for communication. If monitoring reveals that a system’s accuracy drops significantly in a specific geographic region or with a specific demographic, the provider must update the instructions for use and inform deployers. This closes the loop between reality and communication.
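As a sketch of how monitoring data can feed back into communication, the function below compares per-segment accuracy against the figure currently published in the instructions for use and flags segments that fall materially below it. The segment labels, tolerance, and data shape are assumptions for illustration.

```python
PUBLISHED_ACCURACY = 0.94   # figure currently stated in the instructions for use
TOLERANCE = 0.03            # hypothetical threshold for a "material" deviation

def segments_requiring_disclosure(per_segment_accuracy: dict[str, float]) -> list[str]:
    """Return segments whose observed accuracy warrants updating the documentation."""
    return [
        segment
        for segment, accuracy in per_segment_accuracy.items()
        if accuracy < PUBLISHED_ACCURACY - TOLERANCE
    ]

observed = {"region:north": 0.95, "region:south": 0.88, "age:65+": 0.90}
for segment in segments_requiring_disclosure(observed):
    print(f"Update instructions for use: observed accuracy for {segment} "
          f"is below the published {PUBLISHED_ACCURACY:.0%}.")
```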
Underwarning is often discovered through user feedback. Therefore, the mechanisms for users to report difficulties or unexpected outputs must be clear and accessible. The communication of these mechanisms is part of the regulatory obligation. Users of high-risk systems must be provided with a point of contact for issues related to the AI system’s performance. If this contact is buried in a generic support page, it may be deemed insufficient.
The Role of Standardization
To navigate these requirements, European standardization bodies (CEN-CENELEC) are developing Harmonised Standards. While these are voluntary, compliance with them provides a “presumption of conformity” with the AI Act. These standards are likely to include detailed specifications on how to present information to users, the readability of instructions, and the specific metrics that must be disclosed. Staying abreast of these standards is essential for any organization wishing to streamline its communication compliance. Relying solely on the text of the regulation without monitoring the evolving standardization landscape is a risky strategy.
Conclusion: The Shift to Evidentiary Communication
The regulatory environment in Europe is shifting the paradigm of AI communication from persuasive to evidentiary. The burden is now on the provider to prove, through documentation, testing, and clear language, that their claims are valid and their warnings are adequate. This requires a multidisciplinary approach where legal teams, AI developers, and user experience designers collaborate to create a communication ecosystem that respects the user’s intelligence and their legal rights.
For professionals in the European market, the message is clear: the era of “move fast and break things” is over. The legal framework now mandates that you move deliberately, explain clearly, and document everything. The way an AI system is described in a press release, a user manual, or an API response is now as critical as the code itself. It is a regulated artifact that must withstand the scrutiny of market surveillance authorities, courts, and the public. Ensuring that communication is neither overly optimistic nor dangerously silent is the new standard of care in the European AI ecosystem.
