
Inclusion in AI Systems: Accessibility as a Governance Topic

In the discourse surrounding artificial intelligence, the conversation often gravitates towards existential risk, model capabilities, or economic disruption. Yet, for the legal analyst and the systems architect alike, a more immediate and tangible challenge resides in the alignment of AI systems with the fundamental rights of individuals, specifically regarding inclusion and accessibility. When an AI system mediates access to public services, employment opportunities, or essential private goods, it ceases to be merely a technical artifact and becomes a vector of social participation. If the system is not designed with accessibility at its core, it risks automating exclusion on a scale previously impossible. This article explores the intersection of inclusion, accessibility, and AI governance, examining how European regulatory frameworks are evolving to address these risks not as afterthoughts, but as foundational requirements.

The Regulatory Landscape: A Convergence of Rights and Technical Standards

European governance of AI is not a monolith; it is a complex interplay between horizontal regulations establishing overarching principles and vertical directives addressing specific sectors or technologies. For the professional implementing AI systems, understanding this convergence is critical. The primary instruments at play are the Artificial Intelligence Act (AI Act), the General Data Protection Regulation (GDPR), the Web Accessibility Directive (WAD), and the European Accessibility Act (EAA). While the AI Act addresses the risks inherent in the functioning of AI models, the WAD and EAA focus on the accessibility of the interfaces and services that deliver the AI’s output.

The intersection of these frameworks creates a layered compliance environment. An AI system used by a public sector body must not only comply with the risk classifications of the AI Act but also ensure that the user interface (UI) and user experience (UX) of the resulting service meet the accessibility requirements of the WAD. Similarly, a private company using AI to provide customer service chatbots or self-service terminals falls under the scope of the EAA.

The Principle of Non-Discrimination in Automated Decision-Making

At the heart of inclusion governance lies the prohibition of discrimination. Under GDPR, Article 22 provides individuals with the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them. While this right is not absolute, it establishes a baseline of human oversight. However, inclusion risks often arise before a decision is made, embedded within the training data and the feature selection of the model.

From a governance perspective, the challenge is that discrimination in AI is often “proxy discrimination.” A model may not use a protected characteristic (like gender or ethnicity) as an input, but it may use correlated data points (like postal code or purchase history) to replicate the same exclusionary patterns. European regulators are increasingly looking at “disparate impact” assessments, requiring system operators to demonstrate that their models do not disproportionately disadvantage specific groups defined by disability, age, or other protected grounds.
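
To make the idea of a disparate impact assessment concrete, here is a minimal sketch in Python. It computes per-group rates of favourable outcomes from a decision log and flags groups whose rate falls below a chosen fraction of a reference group's rate. The group labels and the log are hypothetical, and the 0.8 cut-off echoes the informal "four-fifths rule" from employment-testing practice; it is not a threshold prescribed by the AI Act, the GDPR, or the accessibility directives.

    from collections import defaultdict

    def selection_rates(decisions):
        """Positive-outcome rate per group from (group, outcome) pairs."""
        totals, positives = defaultdict(int), defaultdict(int)
        for group, outcome in decisions:
            totals[group] += 1
            positives[group] += int(outcome)
        return {g: positives[g] / totals[g] for g in totals}

    def disparate_impact_check(decisions, reference_group, threshold=0.8):
        """Flag groups whose selection rate is below `threshold` times the
        reference group's rate (an illustrative four-fifths-style check)."""
        rates = selection_rates(decisions)
        reference_rate = rates[reference_group]
        return {g: round(rate / reference_rate, 2)
                for g, rate in rates.items()
                if rate < threshold * reference_rate}

    # Hypothetical decision log: (group label, outcome) where 1 = favourable
    log = [("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
           ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0)]
    print(disparate_impact_check(log, reference_group="group_a"))
    # {'group_b': 0.33}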

Defining Accessibility in the Age of AI

Traditionally, accessibility has been associated with physical environments or digital content (websites and software). The Web Content Accessibility Guidelines (WCAG) serve as the technical benchmark here. However, AI introduces new dimensions. Accessibility is no longer just about whether a screen reader can parse a webpage; it is about whether an AI-driven service can be understood by a person with cognitive disabilities, or whether a voice assistant can recognize a speech pattern affected by a motor impairment.

“Accessibility in AI is not a feature; it is a prerequisite for the legal operation of high-risk systems in public spaces.”

The EAA, which Member States have been transposing into national law ahead of its application date of 28 June 2025, extends accessibility requirements to a wide range of "products and services." This covers ATMs, ticketing machines, self-service terminals, computers, and operating systems, all of which are increasingly powered by AI. The governance requirement here is that these products must be accessible by design, allowing persons with disabilities to use them independently.

Identifying Inclusion Risks in AI Systems

To govern inclusion effectively, one must first understand the specific vectors through which exclusion manifests in AI systems. As an AI systems practitioner, I observe that these risks generally fall into three categories: data bias, interface incompatibility, and contextual opacity.

Data Bias and Representation Gaps

The most cited risk is algorithmic bias stemming from unrepresentative training data. If a facial recognition system is trained primarily on images of light-skinned men, it will perform markedly worse on dark-skinned women. This is not merely a technical error; it is an exclusionary failure that prevents individuals from accessing services secured by biometric authentication.

From a regulatory standpoint, the AI Act classifies biometric systems as high-risk (or, in certain contexts, prohibited). The governance obligation falls on the provider to perform data quality assessments. This involves checking for “representation gaps”—statistical voids where specific demographics are underrepresented. Mitigation requires not just collecting more data, but curating datasets that reflect the diversity of the population the AI will serve.
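
A minimal sketch of such a representation-gap check, assuming the provider already holds aggregate counts per demographic slice and a target share for each slice in the intended context of use (all figures below are hypothetical):

    def representation_gaps(dataset_counts, population_shares, tolerance=0.05):
        """Flag groups whose share of the training data falls short of their
        expected share of the population the system will serve."""
        total = sum(dataset_counts.values())
        gaps = {}
        for group, expected in population_shares.items():
            observed = dataset_counts.get(group, 0) / total if total else 0.0
            if observed + tolerance < expected:
                gaps[group] = {"observed": round(observed, 3), "expected": expected}
        return gaps

    # Hypothetical counts per slice in a biometric training set
    counts = {"slice_a": 5600, "slice_b": 1900, "slice_c": 1500}
    target = {"slice_a": 0.55, "slice_b": 0.30, "slice_c": 0.15}
    print(representation_gaps(counts, target))
    # {'slice_b': {'observed': 0.211, 'expected': 0.3}}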

Interface and Interaction Barriers

Even a perfectly fair algorithm becomes an instrument of exclusion if the interface to interact with it is inaccessible. Consider an AI-driven recruitment portal that requires candidates to complete a visual captcha to prove they are human. For a visually impaired candidate using a screen reader, this step can be an insurmountable barrier, effectively barring them from the application process.

Similarly, conversational AI (chatbots) often relies on complex, multi-turn interactions. For individuals with cognitive disabilities or those who speak a minority dialect, these interactions can be confusing or frustrating. Governance must extend to the usability of the AI, mandating that fallback options (such as switching to a human agent) are readily available and clearly signposted.
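
One illustration of what "readily available and clearly signposted" can mean in code: the hypothetical helper below escalates to a human agent either on an explicit request or after a small number of unresolved turns. The phrase list and the threshold are invented for the example.

    ESCALATION_PHRASES = ("human", "agent", "speak to a person", "help me another way")

    def should_escalate(user_message, unresolved_turns, max_unresolved=2):
        """Hand over to a human on explicit request or after repeated unresolved turns."""
        text = user_message.lower()
        if any(phrase in text for phrase in ESCALATION_PHRASES):
            return True
        return unresolved_turns >= max_unresolved

    def bot_reply(user_message, unresolved_turns):
        if should_escalate(user_message, unresolved_turns):
            # The signposted fallback: always offered, never buried in a menu
            return "I'm connecting you with a colleague who can help directly."
        return "Let me try to help with that."  # normal automated path

    print(bot_reply("I want to speak to a person", unresolved_turns=0))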

Opacity and the “Black Box” Problem

Inclusion requires trust, and trust requires understanding. When an AI system denies a loan, rejects a job application, or flags a tax return for audit, the affected individual often has no insight into the “why.” This opacity disproportionately affects vulnerable groups who may lack the resources or expertise to challenge the decision.

The much-debated "right to an explanation" associated with the GDPR's automated decision-making provisions is a legal attempt to pierce this veil. However, technical explainability remains a challenge. Governance frameworks are therefore pushing for "interpretability by design," where systems are built to generate audit trails and human-readable justifications for their outputs. This is particularly crucial in healthcare AI, where a diagnosis must be explainable to the patient to support informed consent.
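
As a minimal illustration of interpretability by design, the toy decision step below records how each factor moved the score and produces a plain-language justification together with a timestamped audit entry. The factors, weights, and threshold are invented and do not represent any real scoring model.

    from datetime import datetime, timezone

    def decide_with_audit_trail(applicant, threshold=0.6):
        """Toy rule-based decision that logs per-factor contributions so the
        outcome can be explained to the affected person and audited later."""
        contributions = {
            "income_vs_commitment": 0.4 if applicant["income"] >= 3 * applicant["monthly_commitment"] else 0.1,
            "payment_history": 0.3 if applicant["missed_payments"] == 0 else 0.05,
            "tenure_with_provider": 0.2 if applicant["years_as_customer"] >= 2 else 0.1,
        }
        score = sum(contributions.values())
        decision = "approved" if score >= threshold else "referred for human review"
        audit_entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "decision": decision,
            "score": round(score, 2),
            "factor_contributions": contributions,
        }
        explanation = f"Outcome: {decision}. Main factors: " + ", ".join(
            f"{name} (+{value})" for name, value in contributions.items())
        return explanation, audit_entry

    applicant = {"income": 2800, "monthly_commitment": 1100,
                 "missed_payments": 0, "years_as_customer": 4}
    explanation, entry = decide_with_audit_trail(applicant)
    print(explanation)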

Operationalizing Governance: From Theory to Practice

For professionals tasked with deploying AI, governance is not an abstract concept; it is a set of operational procedures. The AI Act introduces a risk-based approach that dictates the rigour of these procedures. High-risk AI systems (e.g., those used in critical infrastructure, education, employment, and law enforcement) face the strictest obligations.

Conformity Assessments and CE Marking

Under the AI Act, providers of high-risk AI systems must undergo a conformity assessment before placing the system on the market. This assessment is not merely a paperwork exercise; it requires technical documentation demonstrating that the system complies with “essential requirements.”

Regarding inclusion, this documentation must detail:

  • The measures taken to ensure the system is accessible to persons with disabilities.
  • The data governance measures used to prevent bias.
  • The level of accuracy and robustness against errors.

Once the assessment is complete, the provider issues an EU declaration of conformity and affixes the CE marking. For system integrators, the obligation is to verify these markings and ensure that the high-risk AI system is used in accordance with the instructions for use.

Testing and Evaluation Frameworks

Testing for inclusion requires moving beyond standard unit tests. It necessitates the use of “adversarial testing” or “red-teaming” specifically focused on bias and accessibility.

Red-Teaming for Bias: This involves deliberately attempting to provoke the AI into producing discriminatory outputs. For example, testing a hiring algorithm to see whether it downgrades resumes containing terms such as "maternity" or "disability."
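
Such a probe can be partially automated. The sketch below assumes access to the system's scoring function (a hypothetical score_resume callable) and compares paired resumes that differ only in one appended line; the phrases and the tolerance are illustrative.

    SENSITIVE_PHRASES = ("maternity leave", "disability support network", "carer for a relative")
    NEUTRAL_PHRASE = "member of a local book club"

    def paired_bias_probe(score_resume, base_resume, tolerance=0.05):
        """Compare scores for resumes that differ only in one appended line and
        report phrases that cause a drop larger than the tolerance."""
        baseline = score_resume(base_resume + "\nActivities: " + NEUTRAL_PHRASE)
        findings = []
        for phrase in SENSITIVE_PHRASES:
            probed = score_resume(base_resume + "\nActivities: " + phrase)
            if baseline - probed > tolerance:
                findings.append({"phrase": phrase, "score_drop": round(baseline - probed, 3)})
        return findings

    # Usage (hypothetical): findings = paired_bias_probe(model.score, candidate_cv_text)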

Accessibility Audits: These are technical assessments against WCAG standards. However, for AI, they must also include usability testing with diverse user groups. The governance requirement is to document the results of these tests and the remediation steps taken.

Timeline Note: The AI Act entered into force on 1 August 2024. The prohibitions on certain AI practices apply from 2 February 2025, the obligations for general-purpose AI models from 2 August 2025, and the bulk of the high-risk regime, including conformity assessments, from 2 August 2026. Organizations should use this transition period to establish robust testing regimes.

Procurement Criteria: The Lever of Public Buyers

Public procurement is a powerful tool for enforcing inclusion standards. The European Union’s Public Procurement Directive allows contracting authorities to include social criteria in their tenders. As AI is increasingly procured by the public sector, these criteria are shifting.

A public buyer can, and increasingly does, require that a prospective AI supplier demonstrates:

  1. Compliance with the Web Accessibility Directive for the user interface.
  2. Transparency regarding the data used to train the model.
  3. Commitments to provide reasonable accommodations for users with disabilities.

This creates a market incentive. Suppliers who ignore inclusion risk being locked out of lucrative public contracts. In countries like the Netherlands and Sweden, procurement guidelines are already very specific about requiring “responsible AI” statements from vendors.

National Implementations and Cross-Border Nuances

While the AI Act is a Regulation (directly applicable in all Member States), its implementation relies on national authorities. Furthermore, the transposition of directives like the EAA varies, creating a patchwork of enforcement.

The Role of National Competent Authorities (NCAs)

Each Member State must designate a market surveillance authority to oversee high-risk AI systems. In Germany, this might involve existing bodies like the Federal Office for Information Security (BSI) or the data protection authorities. In France, the Commission Nationale de l’Informatique et des Libertés (CNIL) plays a significant role.

For a multinational corporation, this means that an AI system deployed in a factory in Spain might be subject to different scrutiny than the same system deployed in Italy, depending on how the respective NCAs interpret “bias” or “accessibility.” It is crucial for governance teams to monitor national guidance papers and administrative rulings.

Comparative Approaches to Accessibility

Looking at the EAA transposition, we see interesting divergences. Finland has historically been proactive in digital accessibility, often exceeding minimum EU requirements. France enforces digital accessibility through its RGAA framework (Référentiel général d'amélioration de l'accessibilité), which shapes how AI interfaces must be designed. Spain has updated its accessibility legislation to align with the EAA, focusing heavily on public-facing digital services.

These differences matter for software vendors. A "one-size-fits-all" approach to accessibility may fail to meet the specific transposition requirements of a particularly rigorous member state. The governance best practice is to design to the most demanding applicable standard, typically WCAG 2.1 level AA (adding AAA criteria where feasible), to ensure pan-European compliance.

Future-Proofing: The EU Accessible Technology Act

It is worth noting that the European Commission has proposed a new European Accessibility Act for Technology (often referred to as the “Accessible Tech Act,” distinct from the EAA which covers products/services). This proposal aims to ensure that the underlying technology stacks—operating systems, file formats, and AI engines—are accessible by default.

For AI developers, this signals a shift. Currently, many developers rely on third-party libraries or cloud services to handle accessibility. If the Accessible Tech Act is adopted, the liability for inaccessible AI outputs may shift upstream to the model providers and infrastructure vendors, rather than just the UI developers. This is a critical evolution to watch.

Practical Steps for Governance Professionals

How does one operationalize these requirements in a corporate or institutional setting? The approach must be multidisciplinary, involving legal, technical, and ethical expertise.

1. Establish an AI Ethics Board

Many organizations are establishing internal ethics boards or review committees. To be effective regarding inclusion, this board must include representatives from disability advocacy groups or experts in accessibility. It should have the authority to block the deployment of systems that fail inclusion criteria.

2. Documentation as a Defense

In the event of a regulatory audit or a lawsuit, documentation is the primary defense. The AI Act mandates the keeping of technical documentation and logs throughout the lifecycle. For inclusion, this means documenting at least the following (a minimal record sketch follows the list):

  • Dataset Composition: Who is represented, and who is missing?
  • Pre-processing Steps: How were imbalances corrected?
  • Testing Results: What were the failure rates for different user groups?
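
A minimal sketch of how those items could be captured as a structured, machine-checkable record; the field names are illustrative and are not taken from the AI Act's annexes.

    from dataclasses import dataclass, field

    @dataclass
    class InclusionEvidenceRecord:
        """One lifecycle snapshot of the inclusion-relevant documentation."""
        dataset_composition: dict        # e.g. counts per demographic slice
        known_gaps: list                 # groups identified as under-represented
        preprocessing_steps: list        # rebalancing, relabelling, filtering applied
        per_group_error_rates: dict      # test results broken down by user group
        remediation_actions: list = field(default_factory=list)

        def is_complete(self):
            """Crude completeness check before the record enters the audit trail."""
            return bool(self.dataset_composition and self.per_group_error_rates)

    record = InclusionEvidenceRecord(
        dataset_composition={"slice_a": 5600, "slice_b": 1900},
        known_gaps=["slice_b"],
        preprocessing_steps=["oversampled slice_b by 2x"],
        per_group_error_rates={"slice_a": 0.04, "slice_b": 0.09},
    )
    print(record.is_complete())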

3. Human-in-the-Loop (HITL) as an Accessibility Feature

While HITL is often discussed in the context of high-risk decisions, it is also an accessibility accommodation. If an AI-driven service fails to interact correctly with a user with a disability, a seamless handover to a human agent is a governance requirement. This “circuit breaker” must be robust and monitored.
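
One way to read the circuit breaker in code: a hypothetical wrapper that routes a request to a human queue whenever the automated path errors out or reports low confidence, and logs every handover so the breaker itself can be monitored. The confidence threshold, the shape of ai_answer_fn, and the human_queue interface are assumptions for the example.

    import logging

    logger = logging.getLogger("hitl_circuit_breaker")

    def answer_with_fallback(ai_answer_fn, request, human_queue, min_confidence=0.7):
        """Return an AI answer only when it is confident and error-free;
        otherwise hand the request to a human and record why."""
        try:
            answer, confidence = ai_answer_fn(request)
        except Exception as exc:                      # any failure trips the breaker
            logger.warning("handover: ai_path_error %s", exc)
            return human_queue.submit(request, reason="ai_path_error")
        if confidence < min_confidence:
            logger.info("handover: low_confidence %.2f", confidence)
            return human_queue.submit(request, reason="low_confidence")
        return answer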

4. Continuous Monitoring

AI systems drift. A model that is inclusive today may become biased tomorrow as it ingests new data. Governance requires continuous monitoring of model performance across different demographic slices. If the accuracy for a specific group drops below a threshold, the system should trigger a retraining or review cycle.
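
A minimal monitoring sketch along these lines compares current per-slice accuracy against a stored baseline and flags slices that have drifted by more than a chosen margin; the slice labels and the margin are illustrative.

    from collections import defaultdict

    def per_slice_accuracy(outcomes):
        """Accuracy per demographic slice from (slice_label, was_correct) pairs."""
        totals, correct = defaultdict(int), defaultdict(int)
        for slice_label, was_correct in outcomes:
            totals[slice_label] += 1
            correct[slice_label] += int(was_correct)
        return {s: correct[s] / totals[s] for s in totals}

    def drift_alerts(current, baseline, max_drop=0.03):
        """Slices whose accuracy fell more than `max_drop` below the baseline,
        which should trigger a review or retraining cycle."""
        return {s: {"baseline": baseline[s], "current": round(acc, 3)}
                for s, acc in current.items()
                if s in baseline and baseline[s] - acc > max_drop}

    baseline = {"slice_a": 0.95, "slice_b": 0.93}
    recent = per_slice_accuracy([("slice_a", True)] * 95 + [("slice_a", False)] * 5
                                + [("slice_b", True)] * 86 + [("slice_b", False)] * 14)
    print(drift_alerts(recent, baseline))
    # {'slice_b': {'baseline': 0.93, 'current': 0.86}}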

The Intersection of Biometrics and Accessibility

Biometric AI systems present a unique set of inclusion challenges. Facial recognition is the most prominent example. While it offers convenience for authentication, it fails more often for individuals with darker skin tones, for women, and for the elderly. More critically, for individuals with facial differences, scarring, or conditions that affect facial appearance or movement, biometric systems can be fundamentally unusable.

Regulatory responses are bifurcating. The AI Act bans certain uses of biometrics (like real-time remote identification in public spaces) but allows others (like authentication). For the latter, the governance requirement is “liveness detection” and accuracy standards that must be met across demographic groups. If a biometric system cannot achieve a certain accuracy for a protected group, it may be deemed non-compliant, regardless of its overall average accuracy.

Alternative Authentication Methods

From an inclusion perspective, governance mandates the provision of “reasonable alternatives.” If a bank app relies solely on facial recognition for login, it excludes those who cannot use it. A compliant governance framework requires that equivalent security be provided via other means (e.g., hardware tokens, PINs, or voice recognition, provided voice recognition is also accessible).
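
A compact sketch of the reasonable-alternatives check: given the methods a service offers and the methods a particular user can actually complete, confirm that at least one workable path remains. The method names are hypothetical.

    OFFERED_METHODS = {"face_recognition", "hardware_token", "pin", "voice_recognition"}

    def accessible_login_methods(offered, usable_by_user):
        """Methods the user can actually complete, given declared constraints."""
        return offered & usable_by_user

    def meets_alternative_requirement(offered, usable_by_user):
        """True if at least one offered method remains usable for this user."""
        return bool(accessible_login_methods(offered, usable_by_user))

    # Hypothetical user who cannot use camera-based authentication
    usable = {"hardware_token", "pin"}
    print(meets_alternative_requirement(OFFERED_METHODS, usable))  # True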

Algorithmic Impact Assessments (AIA)

Borrowing from environmental impact assessments, the AIA is becoming a standard governance tool. Before deploying a high-risk AI system, an organization should conduct an AIA to evaluate potential risks to fundamental rights, including inclusion.

An effective AIA asks:

  1. Context: In what environment will the AI operate? (e.g., a hospital vs. a marketing department).
  2. Data: Does the data reflect the reality of the user base?
  3. Stakeholders: Have persons with disabilities been consulted?
  4. Impact: What is the worst-case scenario for an excluded user? (e.g., denial of healthcare).

Conducting an AIA is not just a best practice; the AI Act makes a closely related exercise, the fundamental rights impact assessment, mandatory for certain deployers of high-risk systems. It forces organizations to think about inclusion before code is written, rather than treating it as a bug-fixing exercise later.
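
A minimal sketch of how the four questions above might be tracked as a structured artefact, so that an assessment cannot be marked complete while any answer is missing; the question keys and answers are illustrative and not drawn from any official template.

    AIA_QUESTIONS = {
        "operating_context": "In what environment will the AI operate?",
        "data_representativeness": "Does the data reflect the reality of the user base?",
        "stakeholders_consulted": "Have persons with disabilities been consulted?",
        "worst_case_impact": "What is the worst-case scenario for an excluded user?",
    }

    def unanswered_questions(answers):
        """Questions still open; an AIA with open questions is not complete."""
        return {key: AIA_QUESTIONS[key] for key in AIA_QUESTIONS
                if not answers.get(key)}

    draft_answers = {
        "operating_context": "triage chatbot in a public hospital portal",
        "data_representativeness": "under review",
        "worst_case_impact": "delayed access to urgent care",
    }
    print(unanswered_questions(draft_answers))
    # {'stakeholders_consulted': 'Have persons with disabilities been consulted?'}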

Conclusion: The Path to Inclusive AI

The governance of inclusion in AI is not a static checklist. It is a dynamic process of alignment between technology, law, and human rights. For professionals in Europe, the regulatory compass is clear: the AI Act, GDPR, and accessibility directives form a unified front demanding that AI systems be safe, transparent, and accessible.

Failure to address inclusion risks carries not only ethical weight but significant legal and financial liability. The penalties under the AI Act for non-compliance are severe, and the reputational damage of deploying an exclusionary system can be devastating. Conversely, organizations that master the governance of inclusion will find themselves with more robust, reliable, and widely accepted AI systems.

Ultimately, the goal is to ensure that the AI revolution leaves no one behind. This requires a commitment to “accessibility by design” and a governance framework that treats inclusion as a non-negotiable requirement, woven into the very fabric of the systems we build.
