National Enforcement of EU AI Law
The European Union’s Artificial Intelligence Act (AI Act) represents a landmark effort to create a harmonized legal framework for AI systems, but its practical impact will be determined not in Brussels, but in the administrative offices, courtrooms, and technical laboratories of the Member States. Although the AI Act is a regulation that applies directly across the Union, it relies fundamentally on national authorities for its enforcement, supervision, and implementation. For professionals in AI, robotics, biotech, and data systems, understanding this decentralized enforcement landscape is not an academic exercise; it is a prerequisite for operational compliance and risk management. The combination of a single European rulebook with a complex network of national bodies creates a multi-layered regulatory environment where legal interpretation, technical standards, and enforcement priorities will diverge and converge across the continent.
The Institutional Architecture of Enforcement
The enforcement of the AI Act is built upon a new institutional body, the European AI Office (AI Office), and a network of national competent authorities (NCAs). This structure is designed to balance centralized oversight with localized expertise. The AI Office, established within the European Commission, is tasked with coordinating the implementation of the AI Act at the European level. It will develop codes of practice, issue guidelines, and oversee general-purpose AI (GPAI) models. However, for the vast majority of AI systems placed on the market or put into service, the primary point of contact for a provider or deployer will be the NCA in the Member State where they are established or, for third-country providers, where they first enter the EU market.
This dual structure means that while the AI Office sets the strategic direction, the NCAs are the boots on the ground. They are responsible for market surveillance, conducting investigations, imposing penalties, and taking corrective actions. Each Member State is required to designate one or more NCAs and ensure they have the necessary resources, independence, and technical expertise. This is where the first major point of divergence appears: the capacity and structure of these authorities will vary significantly between a large, well-resourced national regulator and a smaller one with a more limited budget and staff. A provider of medical AI devices in Germany will interact with a different regulatory ecosystem than a provider of agricultural AI in a smaller Member State, even though the underlying law is the same.
The Role of the European AI Office
The AI Office’s primary enforcement role is focused on GPAI models. It is the central authority for assessing whether a GPAI model presents a “systemic risk” and for ensuring that providers of such models comply with obligations related to data governance, technical documentation, and incident reporting. This centralized approach is a pragmatic recognition that GPAI models are developed by a small number of large, often non-European, companies and have a cross-border impact that would be inefficient to manage through 27 separate national authorities. The AI Office will work in close cooperation with the European AI Board, which is composed of representatives from all Member States, to ensure a consistent approach.
However, the AI Office’s influence extends beyond GPAI models. It is also tasked with supporting standardization work, assisting national authorities, and fostering international dialogue on AI governance. For businesses, this means that while their primary regulatory relationship may be with a national NCA, the rules of the game (the guidelines, the codes of practice, the interpretation of key concepts such as “high-risk” or “systemic risk”) will increasingly be shaped by the AI Office. Monitoring the output of the AI Office is therefore just as critical as monitoring national implementing laws.
National Competent Authorities (NCAs) and Market Surveillance
NCAs are the workhorses of the AI Act’s enforcement regime. Their powers are extensive and intrusive. They have the right to access the premises of a provider or deployer, to conduct tests and audits, to request documentation and evidence, and to take interim measures, such as ordering the withdrawal of an AI system from the market if it is suspected of non-compliance. This is a significant expansion of regulatory power into the technical core of AI development and deployment.
In practice, this means that an NCA can demand access to source code, model weights, training data logs, and risk management documentation. For companies accustomed to protecting their intellectual property as a core asset, this presents a new category of regulatory risk. The AI Act does provide safeguards, such as confidentiality obligations on the authorities, but the practical reality is that a regulatory investigation will involve deep technical scrutiny. The competence of NCAs to perform this scrutiny is a critical variable. Some countries are establishing dedicated AI testing and experimentation facilities and regulatory sandboxes to build this capacity, while others may rely on general product safety or data protection authorities that are still building their AI expertise.
Designating and Distinguishing Market Surveillance Authorities
The AI Act does not mandate a single model for national enforcement. Member States have the flexibility to designate one or more market surveillance authorities. This has led to a patchwork of institutional arrangements across the EU, reflecting different national administrative traditions and existing regulatory structures.
In some countries, the task has been given to a single, powerful, cross-sectoral authority. For example, a national telecommunications or media regulator that already oversees complex digital markets may be a natural choice. In other countries, enforcement is fragmented among several bodies, each responsible for a specific sector. For instance, the national data protection authority (DPA) might be responsible for enforcing provisions related to the processing of personal data within AI systems, while a separate product safety agency handles the conformity assessments for high-risk AI systems used in machinery or vehicles. This fragmentation can create compliance challenges for companies that operate across multiple sectors or whose AI systems are integrated into products subject to different regulatory regimes.
Comparative Models: Germany, France, and Spain
A look at three of the largest EU Member States illustrates the diversity of approaches.
Germany has traditionally favored a decentralized model. The supervision of high-risk AI systems is expected to be distributed among various sector-specific authorities. For example, the Federal Institute for Drugs and Medical Devices (BfArM) will likely oversee AI in medical devices, while the Federal Motor Transport Authority (KBA) will oversee AI in vehicles. This approach leverages existing domain expertise but requires strong coordination mechanisms to ensure a consistent interpretation of the AI Act across sectors. The German Federal Ministry for Economic Affairs and Climate Action (BMWK) plays a central role in coordinating these efforts at the federal level.
France is leveraging the existing structure of its data protection authority, the CNIL (Commission nationale de l’informatique et des libertés). The CNIL has a strong reputation and deep experience in regulating automated decision-making and data processing, which are central to many AI systems. It is likely to be a key market surveillance authority, particularly for AI systems used by public and private entities that process personal data. This approach centralizes expertise in a single, well-respected body, potentially offering a more predictable enforcement environment for providers whose systems are heavily reliant on personal data.
Spain has taken a proactive step by establishing a new, dedicated national agency: the Spanish Agency for the Supervision of Artificial Intelligence (AESIA). This is the first national agency in the EU created specifically for AI oversight. AESIA is designed to be a cross-sectoral authority, combining expertise in technology, ethics, and law. It aims to provide a single point of contact for AI providers and deployers in Spain, simplifying the regulatory landscape. The creation of AESIA signals a belief that AI regulation requires a specialized, forward-looking approach that cannot be fully accommodated within existing, legacy institutions.
These differing models mean that a company’s regulatory experience will be highly dependent on its geographic location and the sector it operates in. A provider of AI-powered HR software might face a very different set of questions and expectations from the French CNIL compared to a provider of the same software in Germany, where the relevant NCA might be a more specialized labor market authority.
The Critical Role of Notified Bodies
For high-risk AI systems listed in Annex III of the AI Act (e.g., in biometrics, critical infrastructure, employment, and law enforcement), conformity assessment is a key gatekeeper. For most Annex III systems, this assessment is conducted by the provider itself (the “internal control” procedure). However, for certain high-risk AI systems, notably some biometric systems and those used as safety components of products already covered by other EU harmonization legislation (such as medical devices, machinery, or aviation), the involvement of a third-party conformity assessment body, known as a Notified Body, is required.
Notified Bodies are designated by Member States and are the same institutions that already perform conformity assessments for CE-marked products. Their role under the AI Act is to assess whether the provider’s quality management system, risk management system, and technical documentation comply with the AI Act’s requirements. This is a crucial link between the new AI-specific rules and the existing “New Legislative Framework” for products in Europe. The capacity and availability of Notified Bodies with the necessary expertise in AI will be a significant bottleneck. Unlike market surveillance authorities, which are public bodies, Notified Bodies can be private organizations. Their availability, cost, and technical competence will become a competitive factor for providers seeking to place high-risk AI products on the EU market.
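To make the routing question concrete, the sketch below reduces the choice of conformity assessment route to a small decision function. It is a deliberate simplification under stated assumptions: the real rules turn on the exact Annex III category, the use of harmonized standards or common specifications, and the sectoral legislation involved, and the function name and parameters are purely illustrative.

```python
def conformity_route(is_safety_component_of_regulated_product: bool,
                     is_annex_iii_biometrics: bool,
                     harmonised_standards_fully_applied: bool) -> str:
    """Illustrative sketch of how a provider's conformity assessment route
    might be determined; a simplification of the AI Act's actual rules."""
    if is_safety_component_of_regulated_product:
        # Safety components of products under existing EU harmonisation
        # legislation follow that legislation's own procedure, which
        # typically involves a Notified Body.
        return "third-party assessment under the sectoral procedure (Notified Body)"
    if is_annex_iii_biometrics and not harmonised_standards_fully_applied:
        # Certain biometric systems need third-party assessment when
        # harmonised standards or common specifications are not fully applied.
        return "third-party assessment (Notified Body)"
    # Most other Annex III high-risk systems: provider self-assessment.
    return "internal control (provider self-assessment)"


print(conformity_route(False, True, False))  # -> third-party assessment (Notified Body)
```

The point of the sketch is the bottleneck it exposes: any path that ends at a Notified Body depends on that body’s availability, cost, and AI competence.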
The Enforcement Toolkit: From Sandboxes to Sanctions
The AI Act provides a graduated scale of enforcement tools, designed to encourage compliance rather than simply punish non-compliance. National authorities are expected to use these tools in a proportionate manner, but the threat of severe penalties is a powerful motivator.
Regulatory Sandboxes and Real-World Testing
One of the most innovative enforcement mechanisms is the regulatory sandbox. A sandbox is a controlled environment, established by a national authority, where an innovative AI system can be tested and developed under regulatory supervision before it is placed on the market. The purpose is to provide legal certainty for startups and SMEs, allowing them to experiment with new technologies without the immediate fear of non-compliance. Within a sandbox, providers can get guidance from the NCA on how to interpret the AI Act’s requirements for their specific product.
However, it is important to understand the limits of a sandbox. Entry into a sandbox is not a guarantee of future compliance, and any data used must be handled in accordance with data protection law (which is not suspended). Furthermore, sandboxes are managed at the national level. This means the application process, the conditions for entry, and the level of guidance provided will differ from one Member State to another. A provider in a country with a well-established, well-funded sandbox program may gain a significant competitive advantage over a provider in a country where the sandbox is more of a formality.
Investigations, Audits, and Corrective Measures
If an NCA suspects non-compliance, it can launch an investigation. This can be triggered by a complaint, by routine market surveillance, or by a “serious incident” report from the provider itself. The NCA has the power to conduct audits, which may include requesting the provider to run specific tests on their system and provide the results, or even conducting physical inspections of servers and facilities.
If non-compliance is confirmed, the NCA has a range of corrective measures it can take, applied in a graduated fashion. These include:
- Requiring the provider to bring the AI system into compliance within a specific timeframe.
- Requiring the provider to recall the AI system from end users.
- Requiring the provider to withdraw the AI system from the market.
- Imposing temporary or permanent prohibitions on the making available or use of the system.
- In the case of high-risk AI systems, requiring the immediate cessation of their use.
These measures can be devastating for a business, not only in terms of direct financial loss but also reputational damage. The decision to take such measures rests with the national authority, and their willingness to do so will be a key indicator of the enforcement culture in a given country.
Administrative Fines and Penalties
The AI Act’s penalty structure is designed to be a strong deterrent. The fines follow the GDPR’s turnover-based model, but the maximum amounts for the most serious infringements exceed the GDPR’s ceilings. The levels of fines are tiered based on the severity of the infringement:
- Up to €35 million or 7% of total worldwide annual turnover (whichever is higher) for non-compliance with the prohibition on certain AI practices.
- Up to €15 million or 3% of total worldwide annual turnover for violations of the obligations on high-risk AI systems and other specific requirements.
- Up to €7.5 million or 1% of total worldwide annual turnover for the supply of incorrect, incomplete, or misleading information to notified bodies or national authorities.
The decision to impose a fine, and its amount, is made by the national authorities. While the AI Act provides the legal framework, the actual calculation of a fine will involve national discretion, considering factors such as the nature, gravity, and duration of the infringement and the economic capacity of the offender. This introduces a significant element of national variability: a company might face a different fine for the same infringement in two different Member States, depending on the local authority’s fining policy and how it weighs these factors.
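As a rough illustration of how these ceilings combine with the “whichever is higher” rule, the sketch below computes the theoretical maximum fine for a given infringement tier and a hypothetical worldwide turnover figure. The tier values mirror the list above; the turnover figure and the function itself are illustrative and say nothing about how a national authority would actually exercise its discretion.

```python
# Illustrative only: maximum fine ceilings under the AI Act's tiered structure.
# Actual fines are set by national authorities and depend on the nature,
# gravity, and duration of the infringement, among other factors.

FINE_TIERS = {
    # tier: (fixed ceiling in EUR, share of total worldwide annual turnover)
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_obligations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(tier: str, worldwide_turnover_eur: float) -> float:
    """Return the theoretical ceiling: the higher of the fixed amount
    or the turnover-based amount for the given tier."""
    fixed, share = FINE_TIERS[tier]
    return max(fixed, share * worldwide_turnover_eur)

# Hypothetical example: a provider with EUR 2 billion worldwide turnover.
print(max_fine("prohibited_practice", 2_000_000_000))   # 140,000,000 (7%)
print(max_fine("incorrect_information", 2_000_000_000)) # 20,000,000 (1%)
```

For large undertakings, the turnover-based percentage rather than the fixed amount is usually the binding ceiling, which is why the same infringement can carry very different maximum exposure for a startup and a multinational.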
Procedural Rights and Cross-Border Cooperation
Enforcement is not just about power; it is also about procedure. The AI Act includes provisions on the procedural rights of providers and deployers, ensuring a degree of due process. Before an NCA takes a definitive measure (such as a fine or a withdrawal order), it must inform the provider and give it an opportunity to submit its views. This right to be heard is fundamental to a fair regulatory process.
When an AI system is supplied across multiple Member States, a single non-compliant product can trigger enforcement actions in several countries simultaneously. To manage this, the AI Act establishes a Union safeguard procedure. If an NCA in one country identifies a non-compliant AI system that is also available in other Member States, it must inform the Commission and the other NCAs, which then have a set period to raise objections. If no objections are raised, the initial NCA can proceed with its measures, which are then recognized across the entire EU. If objections are raised, the matter is escalated to the European Commission, which decides whether the national measure is justified. This mechanism is designed to prevent “regulatory shopping,” where a provider might try to argue that their system is compliant based on the interpretation of a single, lenient NCA.
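The procedure can be pictured as a short decision flow. The sketch below is schematic only: the class, its fields, and the outcome strings are placeholders for the Act’s detailed rules (including the precise objection periods, which are not reproduced here).

```python
from dataclasses import dataclass, field

@dataclass
class SafeguardCase:
    """Schematic model of the Union safeguard procedure described above."""
    initiating_nca: str
    measure: str  # e.g. "withdrawal from the market"
    objections: list = field(default_factory=list)

    def notify(self, other_ncas: list) -> None:
        # Step 1: the initiating NCA informs the Commission and the other NCAs.
        print(f"{self.initiating_nca} notifies the Commission and {', '.join(other_ncas)}")

    def resolve(self) -> str:
        # Step 2: objections may be raised within a set period (not modelled here).
        if not self.objections:
            # No objections: the national measure stands and applies EU-wide.
            return f"Measure '{self.measure}' recognised across the EU"
        # Objections raised: the Commission evaluates the national measure
        # and decides whether it is justified.
        return "Escalated to the European Commission for a decision"

case = SafeguardCase("NCA-A", "withdrawal from the market")
case.notify(["NCA-B", "NCA-C"])
case.objections.append("NCA-B disputes the non-compliance finding")
print(case.resolve())
```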
The Interface with Other Regulatory Frameworks
The AI Act does not exist in a vacuum. National enforcement must be coordinated with other EU and national laws. The most significant interface is with the General Data Protection Regulation (GDPR). Many high-risk AI systems process personal data, and their compliance with the AI Act is often contingent on their compliance with the GDPR. National DPAs, which are the primary enforcers of the GDPR, will therefore be key players in the AI enforcement ecosystem. A finding by a DPA that an AI system’s data processing is unlawful will almost certainly have implications for its compliance with the AI Act’s requirements for data governance and quality.
Similarly, for AI systems that are components of products (e.g., AI in a car or a medical device), the market surveillance authorities for those products will be the lead enforcers. This requires close cooperation between the AI-specific NCAs and the product-specific NCAs. In some Member States, these may be the same authority; in others, they will be different. For businesses, this means navigating a web of overlapping regulatory responsibilities, ensuring that their compliance efforts satisfy the requirements of all relevant authorities.
Practical Implications for Businesses: A National Compliance Strategy
For a company developing or deploying AI in Europe, a purely EU-centric compliance strategy is insufficient. A national compliance strategy is required. This involves:
- Identifying the relevant national authorities: A provider of hiring software must know not only the national AI NCA but also the national labor inspectorate and data protection authority. A provider of AI for medical diagnostics must understand the national medicines agency and the national product safety body. (An illustrative register of this kind is sketched after this list.)
- Monitoring national implementation: The AI Act leaves some matters to national discretion, particularly in areas like law enforcement and public services. Companies must monitor the national implementing legislation that accompanies the AI Act, such as laws designating authorities and setting penalty rules, as well as any guidelines issued by their national authorities.
- Engaging with national regulators: Proactive engagement is key. Participating in public consultations on national guidelines, applying for entry into regulatory sandboxes, and seeking informal guidance can help build a constructive relationship with the regulator and provide valuable legal certainty.
- Preparing for national audits: The technical documentation and quality management systems required by the AI Act must be prepared with the expectation of scrutiny by a national NCA. This means ensuring that documentation is not only complete but also understandable to a non-technical auditor who may be unfamiliar with the specific nuances of the AI technology.
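To make the first of these points concrete, a compliance team might keep a structured register of the authorities relevant to each product and market, as sketched below. The entries are hypothetical placeholders drawn from the examples discussed earlier in this section; the actual designations must be verified against each Member State’s implementing law.

```python
# Hypothetical register of authorities relevant to each product and Member State.
# Authority names are placeholders drawn from the examples in this section;
# actual designations must be confirmed against national implementing law.

AUTHORITY_REGISTER = {
    ("hiring_software", "FR"): ["CNIL (data protection / likely market surveillance)"],
    ("hiring_software", "DE"): ["Sector-specific NCA (to be confirmed)",
                                "Federal and state data protection authorities"],
    ("diagnostic_ai", "DE"):   ["BfArM (medical devices)",
                                "Notified Body for conformity assessment"],
    ("diagnostic_ai", "ES"):   ["AESIA (cross-sectoral AI supervision)",
                                "National medicines agency"],
}

def authorities_for(product: str, member_state: str) -> list:
    """Look up which authorities a given product must engage in a given market."""
    return AUTHORITY_REGISTER.get((product, member_state),
                                  ["Unmapped: identify the competent NCA"])

print(authorities_for("diagnostic_ai", "ES"))
```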
The enforcement of the AI Act will be a dynamic process. The first few years will be a period of learning for both regulators and the industry. The initial enforcement actions taken by NCAs will set important precedents and signal national priorities. Some authorities may focus on high-profile consumer-facing applications, while others may prioritize AI systems in critical infrastructure or public administration. The emergence of different enforcement “personalities” across the Member States is inevitable. For professionals at the intersection of technology and regulation, success will depend not only on understanding the letter of the law but also on anticipating the practical reality of its enforcement in the diverse and complex landscape of the European Union.
