When National Law Takes Precedence in AI Regulation
The European Union’s Artificial Intelligence Act (AI Act) represents a landmark effort to create a harmonised legal framework for AI systems across the Union. It establishes a common set of rules, particularly concerning high-risk AI systems, general-purpose AI models, and prohibited practices, aiming to foster an internal market for trustworthy AI. However, the AI Act is not a monolithic block of legislation that entirely displaces national legal orders. It operates within a constitutional architecture defined by the principles of conferral, subsidiarity, and proportionality. Consequently, there are specific, legally defined scenarios where national law continues to carry significant weight, either by supplementing the EU framework where it is silent or, in rare but critical instances, by taking precedence on grounds of overriding public interest. Understanding this interplay is essential for any entity developing or deploying AI systems within the European single market.
For professionals in AI, robotics, biotech, and data systems, navigating this dual-layered regulatory environment requires a precise understanding of where the EU’s harmonised rules end and where Member State discretion begins. The relationship between the AI Act and national legislation is framed by the Treaties: Article 2 of the Treaty on the Functioning of the European Union (TFEU) sets out the categories of Union competence, while Article 5 of the Treaty on European Union (TEU) lays down the principles of conferral, subsidiarity, and proportionality. The AI Act, as a Regulation, is directly applicable in all Member States (Article 288 TFEU), meaning it does not require transposition into national law. Yet its application is not always exclusive. The Act explicitly carves out areas where Member States may maintain or introduce more specific rules, provided they are compatible with the Act’s objectives.
The Principle of Harmonisation and the Limits of Preemption
The primary objective of the AI Act is to ensure the free movement of AI systems within the internal market. To achieve this, it relies on harmonisation. Where the EU has harmonised a field, Member States are generally pre-empted from maintaining national rules that create obstacles to trade in that field. If an AI system complies with the AI Act, a Member State cannot, in principle, bar it from its market simply because it would prefer a different regulatory approach.
However, the AI Act is not exhaustive in regulating every aspect of AI deployment. It focuses on specific risks associated with AI systems and on the placing on the market, putting into service, and use of those systems from a harmonised product-safety perspective, rather than on every legal question their use raises. Consequently, national law remains competent in several distinct domains.
Areas of Explicit National Competence
The AI Act contains specific provisions that delineate the boundary between EU and national law. These are not loopholes but intentional design features to respect national sovereignty in areas not strictly defined as “internal market” rules for AI.
Employment and Workplace Relations
One of the most contentious and practical areas where national law takes precedence is the regulation of AI used in employment. The AI Act classifies AI systems intended for the recruitment or selection of natural persons, such as those used to filter applications or evaluate candidates, as high-risk (Annex III). The Act, however, regulates the technical robustness, transparency, and data governance of these systems; it does not regulate the substantive labour law governing the employment relationship.
For example, the AI Act requires that high-risk AI systems used in employment be subject to strict data governance measures to minimise bias. However, it does not dictate the grounds for dismissal, the criteria for promotion, or the rights of employees to be informed about automated decision-making. These matters fall under the national labour codes of Member States.
Consider the difference between the German and French approaches. In Germany, the Allgemeines Gleichbehandlungsgesetz (AGG) lays down strict rules on discrimination in hiring. If an AI system filters out candidates on the basis of protected characteristics, it violates German law regardless of whether the system meets the technical requirements of the AI Act. In France, the Code du travail contains specific provisions on the right to disconnect and on the monitoring of employees. An AI system that monitors employee productivity (and that steers clear of the Act’s prohibition on emotion recognition in the workplace) might satisfy the Act’s risk management requirements yet still violate French privacy and labour rules on workplace surveillance. Compliance with the AI Act is therefore a baseline, not a ceiling, for employment AI.
Administrative Procedural Law
Member States retain full control over their administrative and judicial procedures. The AI Act regulates the reliability of AI used by public authorities (e.g., in assessing eligibility for public benefits), but it does not regulate the procedural steps of how that decision is made or challenged.
If a public authority uses a high-risk AI system to determine social security benefits, the AI Act mandates that the system be robust, accurate, and subject to human oversight. However, the national law governing the process of appealing a denied benefit remains entirely domestic. For instance, the Code de la sécurité sociale in France or the Sozialgesetzbuch in Germany will dictate the timeline and form of an appeal. The AI Act does not harmonise these procedural rights. An AI provider cannot claim that the AI Act overrides national procedural rules regarding evidence or standing in court.
Subsidiarity: The “More Stringent” National Rules
The AI Act itself acknowledges that Member States may, in defined areas, maintain or introduce rules ensuring a higher level of protection of health, safety, and fundamental rights, provided those rules are compatible with the Treaties and with the Act. Article 2(11), for instance, preserves national provisions that are more favourable to workers in respect of the use of AI by employers, and the law-enforcement use of real-time remote biometric identification is available only where and to the extent national law chooses to authorise it. This is the subsidiarity principle in action.
For an AI system to be placed on the market, it must comply with the AI Act. A Member State may nonetheless decide that the Act’s safeguards are insufficient for its specific national context and, within the limits described above, impose additional requirements.
Example: Facial Recognition in Public Spaces.
The AI Act prohibits real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, subject to narrow exceptions. It does not impose the same prohibition on facial recognition used by private entities for commercial purposes (e.g., retail security), although such remote biometric identification systems are still classified as high-risk. A Member State, citing an elevated terrorism threat, might enact national legislation banning all facial recognition in shopping centres, regardless of the AI Act’s classification. That national ban would take precedence because, in this domain, the AI Act sets a minimum level of restriction, not a maximum: Member States remain free to be stricter.
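This “floor, not ceiling” logic lends itself to a simple rule-composition model: a deployment is permissible only if both the EU baseline and every applicable national rule allow it. The sketch below is a minimal illustration of that composition, assuming hypothetical rule names, use-case labels, and a fictitious Member State “XX”; it is not an authoritative encoding of the AI Act or of any national statute.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Deployment:
    """Hypothetical description of a planned AI deployment."""
    use_case: str   # e.g. "facial_recognition_retail" (illustrative label)
    country: str    # target Member State
    operator: str   # "private" or "law_enforcement"

def eu_baseline_allows(d: Deployment) -> bool:
    """EU floor (illustrative): real-time remote biometric identification
    by law enforcement in public spaces is prohibited, narrow exceptions aside."""
    if d.use_case == "realtime_rbi_public" and d.operator == "law_enforcement":
        return False
    return True

# National add-ons may only restrict further, never relax the EU floor.
NATIONAL_RULES: dict[str, list[Callable[[Deployment], bool]]] = {
    # Hypothetical stricter national rule: blanket ban on facial
    # recognition in commercial premises, citing public security.
    "XX": [lambda d: d.use_case != "facial_recognition_retail"],
}

def deployment_permitted(d: Deployment) -> bool:
    """Permitted only if the EU floor AND every applicable national rule allow it."""
    if not eu_baseline_allows(d):
        return False
    return all(rule(d) for rule in NATIONAL_RULES.get(d.country, []))

print(deployment_permitted(Deployment("facial_recognition_retail", "XX", "private")))  # False: national ban bites
print(deployment_permitted(Deployment("facial_recognition_retail", "DE", "private")))  # True under these toy rules
```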
Public Security, Defense, and National Exemptions
The most significant area where national law overrides EU AI regulation lies in the realm of national security. Under Article 4(2) of the Treaty on European Union (TEU), national security remains the sole responsibility of each Member State. The AI Act explicitly excludes AI systems developed or used exclusively for military, defense, or national security purposes from its scope (Article 2(3)).
This creates a “dual-use” dilemma for companies supplying AI to both the civilian and military markets. If a company develops a computer vision algorithm for autonomous driving, it falls under the AI Act. If the same company modifies that algorithm for a military drone targeting system, it falls under national defense law.
The “Safeguard” Clause: Public Order and Public Security
Even within the scope of the AI Act, Member States can invoke public order or public security to restrict the use of AI systems. This is distinct from the “national security” exemption. This usually applies to the deployment of AI systems rather than their development.
For example, an AI system for real-time translation might be lawfully placed on the market under the AI Act. If a Member State nevertheless believes that the use of this system in sensitive government meetings poses a risk of espionage or data leakage, it can restrict its use on public security grounds. This is often referred to as the “safeguard clause” mechanism: a Member State informs the Commission that an AI system poses a risk to public safety or security that the Act’s conformity assessment procedures cannot adequately control.
However, the AI Act subjects such measures to a harmonised Union procedure. A Member State cannot simply ban an AI system indefinitely without review: it must notify the Commission and the other Member States, and the Commission then assesses whether the national measure is justified. This creates a check against the arbitrary use of national law to block market access, balancing public security with the internal market.
Intersection with the GDPR and ePrivacy
While the AI Act is the new lex specialis for artificial intelligence, it does not exist in a vacuum. It interacts heavily with the General Data Protection Regulation (GDPR) and the ePrivacy Directive. In this triangular relationship, national law plays a crucial role in interpreting and enforcing data protection rights, which can effectively restrict AI operations.
The AI Act focuses on the “functioning” of the AI system (safety, accuracy, bias). The GDPR focuses on the “data” used to train and operate the system (lawfulness, purpose limitation, data minimisation). If an AI system is trained on personal data processed unlawfully under the GDPR, its data governance can hardly be regarded as compliant under the AI Act, regardless of the system’s technical safety.
National Data Protection Authorities (DPAs), such as the CNIL in France or the BfDI in Germany, enforce the GDPR. They have the power to issue fines and order the cessation of data processing. If a DPA determines that the data scraping used to train a Large Language Model (LLM) violated national data protection laws, that LLM effectively becomes non-compliant with the AI Act because its data governance is flawed. National data protection enforcement thus acts as a gatekeeper for AI Act compliance.
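In practical compliance tooling, this gatekeeping role can be made explicit by refusing to sign off on the AI Act data-governance documentation unless every training dataset has a recorded lawful basis and provenance. The following is a rough sketch under assumed field names (lawful_basis, provenance) and an assumed internal workflow; it does not reflect any regulator’s checklist.

```python
from dataclasses import dataclass
from typing import Optional

# GDPR Article 6 lawful bases, used here only as a controlled vocabulary.
GDPR_LAWFUL_BASES = {"consent", "contract", "legal_obligation",
                     "vital_interests", "public_task", "legitimate_interests"}

@dataclass
class DatasetRecord:
    """Hypothetical provenance entry for one training dataset."""
    name: str
    lawful_basis: Optional[str]   # GDPR basis claimed for the processing
    provenance: Optional[str]     # where the data came from (e.g. licensed corpus)

def data_governance_issues(datasets: list[DatasetRecord]) -> list[str]:
    """Return blocking issues before data-governance sign-off (illustrative)."""
    issues = []
    for ds in datasets:
        if ds.lawful_basis not in GDPR_LAWFUL_BASES:
            issues.append(f"{ds.name}: no documented GDPR lawful basis")
        if not ds.provenance:
            issues.append(f"{ds.name}: provenance unknown")
    return issues

corpus = [
    DatasetRecord("licensed_news_corpus", "contract", "publisher licence"),
    DatasetRecord("scraped_forum_dump", None, None),
]
for issue in data_governance_issues(corpus):
    print("BLOCKER:", issue)
```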
The ePrivacy Directive and “Cookie” AI
The ePrivacy Directive (often called the “Cookie Law”) regulates the confidentiality of communications and the storage of information on user terminals. This is transposed into national law in varying ways.
AI systems that rely on tracking pixels, browser fingerprinting, or processing the content of electronic communications for training purposes must comply with both the ePrivacy Directive and the AI Act. If a Member State has transposed ePrivacy with stricter rules regarding consent for tracking (e.g., requiring explicit opt-in rather than implied consent), that national rule takes precedence. An AI provider cannot argue that the AI Act’s requirement for large datasets overrides the national ePrivacy requirement for explicit user consent to be tracked.
Product Safety and Sector-Specific Legislation
The AI Act is a horizontal regulation, meaning it applies across all sectors. However, many sectors have vertical, sector-specific EU regulations (e.g., Medical Device Regulation, Machinery Regulation, Automotive type-approval). Often, these regulations are updated to include AI.
When a sector-specific regulation references “software” or “AI,” it may overlap with the AI Act. The AI Act generally applies in parallel. However, if a Member State has historically had stricter safety rules for a specific product category (e.g., industrial machinery), those rules might persist if they do not conflict with the harmonised standards established by the EU.
For example, the Machinery Regulation (EU) 2023/1230 now includes specific safety requirements for AI-based safety components in machinery. If a Member State had national laws on the safety of robotic arms that predated the EU regulation, those rules are superseded to the extent the EU regulation covers the same matter. Where the EU regulation is silent on a specific safety aspect (e.g., a particular form of sensor redundancy), the Member State may maintain national rules, provided they do not hinder the free movement of goods.
The Role of CEN-CENELEC and National Standards
While not strictly “law,” national standards bodies play a pivotal role in interpreting regulations. The AI Act relies on “harmonised standards” created by European Standardization Organizations (ESOs) like CEN-CENELEC. Once these standards are referenced in the Official Journal of the EU, they create a presumption of conformity with the law.
The development of these standards is, however, a nationally driven process: national delegations participate in drafting. If a national delegation secures a stricter technical requirement within a European standard (e.g., on the explainability of medical AI), that requirement effectively becomes the benchmark for the whole EU. Conversely, if the EU standard is too weak for a specific national industry, that industry may push for national specifications to be maintained, creating a complex landscape of “harmonised” versus “national” technical requirements.
Liability and Civil Law
The AI Act does not harmonise civil liability. It does not dictate who pays damages if an AI system malfunctions. Liability for defective products or negligence remains governed by national civil codes.
This is a massive area where national law takes precedence. The AI Act might classify an AI system as “high-risk” and impose strict risk management obligations on the provider. However, if that system causes physical harm or economic loss, the victim must sue under national tort law.
For instance, the German Product Liability Act (Produkthaftungsgesetz) and the French rules on liability for defective products in the Code civil differ in their treatment of fault, causation, and limitation periods. An AI provider operating across Europe must therefore defend itself against different liability standards in every Member State. The AI Act harmonises the prevention of harm (safety requirements), but not the consequences of harm (compensation). This fragmentation creates significant legal risk for AI developers.
Furthermore, the proposed AI Liability Directive aims to harmonise some of these rules, specifically regarding the burden of proof. Until such a directive is adopted and transposed, national law remains the sole arbiter of AI-related damages.
Procedural Law and Evidence
In the context of “AI in the Judiciary,” the AI Act imposes strict transparency obligations. However, national procedural laws govern the admissibility of evidence.
If a court in a Member State uses an AI tool to assist in sentencing (a high-risk application), the AI Act requires that the tool be robust and transparent. However, the national Code of Criminal Procedure determines whether the output of that AI tool is admissible as evidence. Some Member States may explicitly ban AI-generated evidence in criminal cases, while others may allow it under strict conditions. The AI Act cannot override these fundamental procedural safeguards, which are often rooted in constitutional rights specific to each Member State.
Conclusion: The “Brussels Effect” vs. National Sovereignty
The AI Act represents a strong attempt at the “Brussels Effect”—where EU standards become global standards. However, the reality on the ground is that the implementation of AI regulation will be a patchwork of national interpretations and additional rules.
For practitioners, the key takeaway is that AI Act compliance is the floor, not the ceiling. To deploy AI systems successfully in Europe, one must:
- Identify the specific national laws in the target Member States regarding labour, data protection, and public safety.
- Understand that the AI Act harmonizes the market access requirements, but not necessarily the usage requirements.
- Prepare for divergent liability regimes and procedural standards in national courts.
The interplay between EU and national law is dynamic. As Member States adopt the national measures the AI Act requires of them (such as designating notifying and market surveillance authorities and laying down penalties), we will see the emergence of “gold-plating”, where Member States add requirements that go beyond the EU text. Monitoring these national implementation measures is as critical as understanding the Act itself.
Strategic Implications for AI Governance
Organizations must adopt a “compliance-by-design” approach that is modular enough to accommodate national variations. For example, an AI system for recruitment should be built with a “German module” that enforces AGG compliance and a “French module” that enforces CNIL guidelines, all sitting on top of the core AI Act compliance framework.
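One way to picture such a modular architecture is a core set of AI Act checks onto which country modules are layered at assessment time. The sketch below is purely illustrative: the check names, system-description keys, and module contents are assumptions standing in for real legal analysis, not a mapping of the actual requirements of the AGG, French law, or the AI Act.

```python
from typing import Callable

Check = Callable[[dict], list[str]]  # takes a system description, returns findings

def core_ai_act_checks(system: dict) -> list[str]:
    """Core layer: obligations common to every Member State (names illustrative)."""
    findings = []
    if not system.get("risk_management_documented"):
        findings.append("AI Act: risk management system not documented")
    if not system.get("human_oversight_measures"):
        findings.append("AI Act: no human oversight measures defined")
    return findings

def german_module(system: dict) -> list[str]:
    """Hypothetical German add-on reflecting AGG anti-discrimination concerns."""
    if system.get("uses_protected_characteristics"):
        return ["DE: screening on protected characteristics conflicts with the AGG"]
    return []

def french_module(system: dict) -> list[str]:
    """Hypothetical French add-on reflecting workplace-monitoring safeguards."""
    if system.get("monitors_employees") and not system.get("staff_informed"):
        return ["FR: employee monitoring deployed without informing staff as required"]
    return []

NATIONAL_MODULES: dict[str, Check] = {"DE": german_module, "FR": french_module}

def assess(system: dict, countries: list[str]) -> list[str]:
    """Run the core AI Act layer, then every national module for the target markets."""
    findings = core_ai_act_checks(system)
    for c in countries:
        findings += NATIONAL_MODULES.get(c, lambda s: [])(system)
    return findings

recruiting_ai = {
    "risk_management_documented": True,
    "human_oversight_measures": True,
    "uses_protected_characteristics": False,
    "monitors_employees": True,
    "staff_informed": False,
}
print(assess(recruiting_ai, ["DE", "FR"]))
```

The design choice this sketch reflects is simply that national modules can only add findings on top of the core layer, mirroring the “floor, not ceiling” relationship described throughout this section.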
Furthermore, the role of the AI Office and the AI Board will be crucial in attempting to harmonize the application of the Act. However, these bodies cannot override national security or fundamental constitutional rights. The tension between the drive for a single digital market and the preservation of national sovereignty will define the next decade of AI regulation in Europe. Professionals must remain vigilant, treating the AI Act not as a static rulebook, but as a living framework that interacts dynamically with the diverse legal traditions of the 27 Member States.
