How National Authorities Enforce EU AI Rules

Understanding the enforcement of the European Union’s artificial intelligence rules requires a shift in perspective from the abstract text of the Regulation to the concrete machinery of national administration. While the Artificial Intelligence Act (AI Act) establishes a harmonized legal framework at the EU level, its practical application—detecting non-compliance, investigating providers, and imposing penalties—is largely the domain of Member State authorities. This decentralized enforcement model, coordinated through a new European governance structure, creates a complex ecosystem where legal obligations meet operational reality. For professionals developing, deploying, or sourcing AI systems within Europe, grasping the mechanics of this ecosystem is as critical as understanding the text of the law itself.

The National Enforcement Architecture

The AI Act does not create a single, centralized EU-wide enforcement body that directly polices every company. Instead, it relies on a network of Market Surveillance Authorities (MSAs) designated by each Member State. These authorities are the primary actors responsible for ensuring that AI systems placed on the market or put into service comply with the Regulation. The structure of these authorities varies significantly across the Union, reflecting different national administrative traditions.

In some countries, enforcement responsibilities are consolidated within a single, powerful digital regulator. For instance, in France, the Commission Nationale de l’Informatique et des Libertés (CNIL) is expected to take a leading role, leveraging its existing expertise in data protection and algorithmic oversight. In Germany, the landscape is more fragmented; the Federal Commissioner for Data Protection and Freedom of Information (BfDI) may handle aspects related to personal data, while specific technical market surveillance bodies, potentially under the Federal Ministry for Economic Affairs and Climate Action, could oversee product safety aspects of AI systems. Spain has established the Spanish Agency for the Supervision of Artificial Intelligence (AESIA), a dedicated body specifically created to oversee AI, signaling a move towards specialized enforcement.

These national authorities do not operate in isolation. The AI Act establishes a European Artificial Intelligence Board (EAIB), composed of one representative per Member State, with the European Data Protection Supervisor participating as an observer and the Commission’s AI Office attending its meetings. The EAIB’s role is to facilitate the consistent application of the Regulation across the EU: it issues opinions, recommendations, and guidance, and helps harmonize methodologies for assessing high-risk AI systems. Where a national authority intends to prohibit an AI system and other Member States or the Commission contest that measure, the matter is escalated to the Commission, which takes the binding decision under the Union safeguard procedure. This creates a two-tier system: national execution supported by EU-level coordination.

Division of Competences: Who Does What?

The allocation of enforcement tasks within a Member State often depends on the nature of the AI system and the sector in which it is deployed. The AI Act is a horizontal regulation, but it intersects with many sector-specific laws.

General-Purpose AI (GPAI) Models

For providers of General-Purpose AI models, enforcement is primarily centralized at the EU level. The European Commission, acting through its AI Office, is the lead regulator for GPAI providers. National authorities nonetheless retain a role, particularly concerning the downstream impact of these models. If a national MSA finds that a GPAI model, even one compliant at the model level, is contributing to systemic risks or being used in a high-risk application without proper safeguards, it can trigger a coordinated investigation involving the Commission and the AI Board.

High-Risk AI Systems

For high-risk AI systems listed in Annex III (e.g., biometrics, critical infrastructure, employment), the national MSAs are the primary enforcers. Their jurisdiction covers the entire lifecycle of the system within their territory and extends to:

  • Providers established in the EU.
  • Providers established in third countries whose systems are placed on the EU market or whose output is used in the EU.
  • Deployers (users) established in the EU, particularly in high-risk contexts.

If an AI system is integrated into a product that already falls under other EU product safety legislation (e.g., medical devices, machinery, vehicles), the market surveillance authorities responsible for those specific sectors (e.g., medical device regulators) will typically take the lead, integrating AI-specific checks into their existing procedures. This “sectoral” approach leverages existing expertise but requires close coordination to ensure AI-specific risks are not overlooked.

Initiating an Investigation: Triggers and Intelligence Gathering

Market surveillance authorities do not randomly audit AI systems. Investigations are typically triggered by specific events or intelligence. Understanding these triggers allows organizations to anticipate scrutiny and prioritize compliance efforts.

Complaints and Incident Reporting

A significant source of investigative triggers is complaints. These can originate from individuals harmed by an AI system, consumer protection organizations, trade unions, or even competitors alleging unfair practices. The AI Act explicitly grants natural and legal persons the right to lodge complaints with the relevant MSA.

Another critical trigger is the serious incident reporting obligation. Providers of high-risk AI systems are legally required to report any serious incident to the MSA of the Member State where the incident occurred. A “serious incident” is an incident or malfunction that directly or indirectly leads to the death of a person or serious harm to a person’s health, a serious and irreversible disruption of the management or operation of critical infrastructure, an infringement of obligations under Union law intended to protect fundamental rights, or serious harm to property or the environment. A report of a serious incident almost invariably triggers a formal investigation to determine whether the provider complied with its design, documentation, and reporting obligations.
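
As a purely illustrative aid, the triage logic implied by that definition can be sketched as an internal classification helper. The field names and the example scenario below are hypothetical simplifications of the Regulation’s wording, not an official test.

```python
from dataclasses import dataclass

# Hypothetical internal triage record; field names loosely paraphrase the Act's criteria.
@dataclass
class IncidentReport:
    caused_death_or_serious_health_harm: bool
    irreversibly_disrupted_critical_infrastructure: bool
    infringed_fundamental_rights_obligations: bool
    caused_serious_property_or_environmental_harm: bool

def is_serious_incident(report: IncidentReport) -> bool:
    """Return True if any criterion is met, i.e. the reporting obligation is likely triggered."""
    return any([
        report.caused_death_or_serious_health_harm,
        report.irreversibly_disrupted_critical_infrastructure,
        report.infringed_fundamental_rights_obligations,
        report.caused_serious_property_or_environmental_harm,
    ])

# Example: a malfunction in an AI-driven access system locks operators out of a power plant.
incident = IncidentReport(False, True, False, False)
print(is_serious_incident(incident))  # True -> start the formal reporting and legal review process
```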

Routine Surveillance and Proactive Monitoring

Authorities are also moving towards proactive monitoring. This may involve:

  • Market monitoring activities: Systematically reviewing AI systems available on the market, particularly those advertised with high-risk capabilities.
  • Algorithmic Audits: Requesting access to algorithms or models for testing, especially in the public sector or for systems used in sensitive areas like social scoring or law enforcement.
  • Cooperation with Data Protection Authorities: DPAs are often the first to spot AI systems that process personal data in non-compliant ways. A DPA investigation can easily spill over into an AI Act investigation if the system is classified as high-risk.

Regulators in some jurisdictions, such as the UK, have long used “sandbox” environments to engage with innovators proactively. The EU AI Act requires Member States to establish AI regulatory sandboxes, which allow providers to test innovative technologies under regulatory supervision. While sandbox participation is voluntary, it can lead to the discovery of compliance gaps that trigger formal corrective actions before market placement.

Referrals from Other Authorities

Given the horizontal nature of the AI Act, referrals are common. A customs authority might stop an import of a product containing an AI system and refer it to the relevant MSA. A national competition authority investigating algorithmic collusion might refer technical findings to the AI regulator. A financial supervisory authority might flag a credit scoring AI system that fails to meet transparency requirements.

The Investigative Process: Evidence and Information Requests

Once an investigation is opened, the powers of the Market Surveillance Authorities are extensive. They are not limited to reviewing self-declared conformity documents. They can demand comprehensive evidence to verify compliance.

The “Request for Information”

The most common tool is the Request for Information (RFI). An MSA will formally require a provider (or deployer) to submit specific documentation within a set deadline (often 15 to 30 days, shorter in urgent cases). An RFI is not a casual inquiry; it is a formal legal step. Ignoring an RFI or providing false or misleading information is itself a violation of the AI Act and can lead to administrative fines.

What do they ask for? The list is extensive and mirrors the obligations in the Regulation; a sketch of how a compliance team might track the response internally follows the list:

  • Technical Documentation: This is the core evidence. It must demonstrate how the system was designed and tested. It includes descriptions of the system’s capabilities, limitations, intended purpose, the data sets used for training, validation, and testing, the cybersecurity measures, and the risk management system.
  • Design Documentation: System architecture, logic, and algorithms. For high-risk systems, authorities may request access to source code or model weights, although this is usually done under strict confidentiality agreements and often via a “supervised testing” environment rather than unrestricted access.
  • Conformity Assessment Documentation: If the system underwent a third-party conformity assessment by a Notified Body, the MSA will request the full assessment report and the EU declaration of conformity.
  • Quality Management System (QMS) Records: Providers of high-risk AI systems must have a QMS similar to those required for medical devices. Authorities will audit these records to ensure processes for design, verification, and post-market monitoring are robust.
  • Post-Market Monitoring (PMM) Data: Authorities will review the data collected by the provider on the performance of the AI system in the real world. They want to see that the provider is actively monitoring for drift, bias, or emerging risks and has a procedure for reporting serious incidents.
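
The sketch below is one hypothetical way to organize such a response, assuming illustrative category names, owners, and a 30-day deadline; none of these details are prescribed by the Regulation.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class RfiItem:
    category: str          # e.g. "Technical Documentation" (mirrors the categories above)
    owner: str             # internal team responsible for collating the evidence
    status: str = "open"   # open / in_review / submitted

@dataclass
class RfiResponse:
    received_on: date
    response_days: int = 30              # deadline stated by the authority in its request
    items: list = field(default_factory=list)

    @property
    def due_date(self) -> date:
        return self.received_on + timedelta(days=self.response_days)

    def outstanding(self):
        return [i.category for i in self.items if i.status != "submitted"]

rfi = RfiResponse(
    received_on=date(2026, 3, 2),
    items=[
        RfiItem("Technical Documentation", owner="Engineering"),
        RfiItem("Conformity Assessment Documentation", owner="Regulatory Affairs"),
        RfiItem("QMS Records", owner="Quality"),
        RfiItem("Post-Market Monitoring Data", owner="Product Operations"),
    ],
)
print(rfi.due_date)       # 2026-04-01
print(rfi.outstanding())  # all four categories still open
```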

Access to Premises and Data

If an RFI is insufficient, authorities can escalate. The AI Act grants MSAs the power to:

  • Enter any premises or land where AI systems are developed, manufactured, stored, or used.
  • Perform tests and checks on the AI system or its underlying data.
  • Seize or take samples of the AI system for analysis.

This is where the investigation becomes tangible and disruptive. For providers of GPAI models, the AI Office has similar powers, including requesting access to models and associated information to assess systemic risks. The Act also includes provisions for unannounced inspections (“dawn raids”), although these are typically reserved for cases where there is suspicion of serious non-compliance or obstruction.

Cooperation and the Burden of Proof

It is a common misconception that the regulator must prove non-compliance from scratch. In practice, the provider bears the burden of demonstrating that their AI system complies with the Act. The RFI process is designed to test this. If a provider fails to produce the required technical documentation, the MSA can infer that the system is not compliant. Documentation is not a mere formality; it is the primary legal defense.

Cooperation is expected. While providers have rights to due process and legal representation, obstructing an investigation (e.g., by delaying responses, providing incomplete data, or denying access where legally permitted) will be viewed negatively and can lead to higher penalties.

Typical Outcomes and Corrective Measures

Investigations do not always end in fines. The AI Act establishes a graduated scale of enforcement actions, prioritizing the removal of non-compliant systems from the market over punitive financial penalties, especially for first-time or less severe violations.

Non-Financial Sanctions

The most immediate actions are administrative orders:

  • Corrective Measures: The MSA may order the provider to bring the AI system into compliance within a specific timeframe. This could involve updating documentation, improving the risk management system, modifying the algorithm to reduce bias, or adding new transparency disclosures to the user.
  • Withdrawal: The MSA can order that the AI system be withdrawn from the market. This means it can no longer be made available.
  • Recall: If the system has already been supplied to users, the MSA can order a recall, requiring the provider to retrieve the system from deployers.
  • Prohibition: In the most severe cases, particularly for AI systems that present a risk to health, safety, or fundamental rights that cannot be mitigated, the MSA can prohibit the system entirely. This is a drastic measure that requires notification to the EAIB and the Commission.

For GPAI models, the Commission, acting through the AI Office, can impose financial penalties for non-compliance with transparency obligations or trigger a full investigation into systemic risks, potentially leading to the model being restricted, withdrawn, or recalled from the EU market if it is deemed to present an unacceptable risk.

Administrative Fines

Fines are a key deterrent, but they are not the first tool reached for. The AI Act sets out a tiered fine structure:

  • Up to €35 million or 7% of total worldwide annual turnover (whichever is higher) for violations of the prohibited AI practices (e.g., social scoring).
  • Up to €15 million or 3% of total worldwide annual turnover (whichever is higher) for violations of most other obligations, including those applying to high-risk AI systems and to GPAI providers.
  • Up to €7.5 million or 1% of total worldwide annual turnover (whichever is higher) for the supply of incorrect, incomplete, or misleading information to Notified Bodies or national authorities.

When deciding on the amount of a fine, national authorities will consider the nature, gravity, and duration of the infringement, any action taken to mitigate the damage, and the degree of cooperation. Fines under the AI Act are, in most cases, imposed by national authorities (the Commission fines GPAI providers directly), but the caps and criteria for calculating them are set at the EU level to ensure consistency.
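
As a purely illustrative aid, the “whichever is higher” logic behind these caps can be expressed as a short calculation. The tier labels and the turnover figure below are assumptions for the example; the actual fine is set by the authority within the cap, based on the factors just described.

```python
# Illustrative computation of the maximum fine cap only.
TIERS = {
    "prohibited_practices":   (35_000_000, 0.07),  # EUR cap, share of worldwide annual turnover
    "other_obligations":      (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine_cap(tier: str, annual_worldwide_turnover_eur: float) -> float:
    """Return the ceiling for a fine: the fixed amount or the turnover share, whichever is higher."""
    fixed_cap, turnover_share = TIERS[tier]
    return max(fixed_cap, turnover_share * annual_worldwide_turnover_eur)

# Example: a provider with EUR 2 billion in turnover facing a prohibited-practice finding.
print(max_fine_cap("prohibited_practices", 2_000_000_000))  # 140,000,000 -> the 7% limb exceeds EUR 35M
```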

Corrective Actions vs. Fines

In many cases, an investigation will conclude with the provider agreeing to implement corrective measures without a fine being imposed, particularly if the non-compliance was unintentional and the provider acted swiftly to rectify the situation. However, if the investigation reveals that the provider was aware of the risks or deliberately ignored obligations, a fine is highly likely in addition to corrective measures. The decision to fine is often linked to the concept of negligence or intent.

Practical Implications for Providers and Deployers

The enforcement landscape described above has concrete implications for how AI systems are developed, documented, and deployed in Europe.

Documentation as a Living Process

The era of treating technical documentation as a post-development chore is over. Under the AI Act, documentation is a living process that must be maintained throughout the system’s lifecycle. Providers must ensure that their technical documentation is not only complete at the time of market placement but is also updated to reflect changes in the system, data drift, or lessons learned from post-market monitoring. When an RFI arrives, the ability to produce this documentation quickly and accurately is the first sign of a mature compliance culture.

Internal Governance and AI Literacy

The AI Act requires providers and deployers to ensure a sufficient level of AI literacy among their staff. This is not just a training requirement; it is an enforcement point. If an investigation reveals that staff handling an AI system lacked the necessary understanding to identify risks or comply with procedures, this can be considered a violation. National authorities will likely look for evidence of training programs, role-specific competencies, and internal governance structures that oversee AI development and use.

Post-Market Monitoring as a Risk Signal

Authorities are increasingly sophisticated in using post-market data as a trigger. A provider that fails to detect and report serious incidents, or that ignores trends in performance data indicating bias or drift, is likely to face a more severe investigation. A robust PMM system is not just a compliance checkbox; it is an early warning system that allows a provider to self-correct before an authority intervenes. In an investigation, demonstrating a proactive approach to PMM can significantly mitigate the perceived severity of a non-compliance issue.
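
As a purely illustrative sketch of the kind of check a PMM system might run, the snippet below compares a recent window of a performance metric against a baseline and raises a flag once the deviation exceeds a provider-chosen tolerance. The metric, window sizes, and threshold are assumptions, not requirements of the Act.

```python
import statistics

def drift_alert(reference: list, recent: list, tolerance: float = 0.05) -> bool:
    """Return True when the recent mean deviates from the baseline mean by more than the tolerance."""
    return abs(statistics.mean(recent) - statistics.mean(reference)) > tolerance

baseline_accuracy = [0.91, 0.92, 0.90, 0.91]   # metric observed during validation / early deployment
last_four_weeks   = [0.88, 0.86, 0.84, 0.83]   # metric observed in recent post-market monitoring

if drift_alert(baseline_accuracy, last_four_weeks):
    # In a real PMM process this would open an internal incident, trigger root-cause analysis,
    # and feed the serious-incident assessment described earlier in this article.
    print("Performance drift detected: escalate for post-market monitoring review")
```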

The Role of Notified Bodies

For high-risk AI systems that require third-party conformity assessment, the Notified Body is a critical partner. However, they are also a source of information for MSAs. If a Notified Body identifies systemic issues during its assessments, it is obligated to report these to the national authority. Therefore, a poor relationship with a Notified Body, or attempts to “shop around” for a lenient one, can backfire and trigger a deeper investigation.

Conclusion: A System of Coordinated Vigilance

The enforcement of the EU AI Act is not a monolithic process driven by a single EU agency. It is a distributed, multi-layered system where national authorities, often with existing expertise in data protection, product safety, or sectoral regulation, take the lead. They are empowered with strong investigative tools and backed by a coordinating European Board that aims for consistency. For organizations operating in this space, compliance is not a one-time certification but a continuous state of readiness. The ability to respond to a formal Request for Information with comprehensive, accurate, and up-to-date technical documentation is the bedrock of defensible AI practice in Europe. The system is designed to be risk-based and proportionate, but its investigative powers are formidable, and the consequences of non-compliance, both financial and reputational, are significant.
