When National Law Overrides EU Guidance: Practical Scenarios
The relationship between European Union law and the national legal systems of Member States is a foundational principle of the EU legal order, yet its application in the rapidly evolving domain of artificial intelligence, robotics, and data-driven biotechnology is fraught with practical complexities. While the EU strives for harmonization to create a single market, national legislators and regulators often retain or assert authority in specific areas, leading to situations where EU-level guidance, recommendations, or even soft law is effectively superseded by binding national legislation or procedural requirements. This dynamic is not necessarily a sign of dysfunction; it is an inherent feature of the EU’s constitutional architecture, designed to balance Union-wide objectives with national sovereignty. However, for professionals deploying high-risk AI systems, operating autonomous robots, or processing sensitive health data, understanding these friction points is critical for compliance and operational viability.
This analysis explores the mechanisms and practical scenarios where national law or administrative procedures take precedence over EU-level frameworks. We will dissect the legal hierarchy, examine concrete conflict patterns in sectors governed by the AI Act, the GDPR, and the Medical Devices Regulation, and propose mitigation strategies for organizations navigating this dual-layered regulatory environment. The focus is on the operational reality: how a rule made in Brussels is interpreted, implemented, and sometimes fundamentally altered by the legal and administrative practices in Berlin, Paris, or Warsaw.
The Constitutional Framework: Primacy, Subsidiarity, and Harmonization
To understand where national law can override EU guidance, one must first grasp the core principles of EU law. The doctrine of primacy dictates that where a conflict arises between EU law and the national law of a Member State, EU law must prevail. This applies to regulations, which are directly applicable, and to directives, which must be transposed into national law but can have ‘direct effect’ against the state once the transposition deadline has passed, provided their provisions are sufficiently clear, precise, and unconditional. However, the reality is more nuanced than a simple hierarchy. The principle of subsidiarity allows Member States to legislate where the EU has not exercised its competence, or where the EU has decided to set only a regulatory floor, leaving Member States free to introduce or maintain more stringent rules.
Many key EU instruments in the tech and life sciences sectors are harmonizing regulations. They aim to replace the patchwork of national laws with a single set of rules. The General Data Protection Regulation (GDPR) and the AI Act are prime examples. Yet, these regulations contain numerous opening clauses and derogations. These clauses explicitly permit Member States to legislate on specific aspects, creating deliberate gateways for national law to operate. Furthermore, EU ‘guidance’—such as opinions from the European Data Protection Board (EDPB), Q&A documents from the European Commission, or standards adopted by European Standardization Organizations (ESOs)—is often not legally binding in itself. It provides an interpretation framework, but it does not create new legal obligations. A Member State can choose to codify a different interpretation into its national law, creating a direct conflict for entities operating within its territory.
Legal Hierarchies and the “Regulatory Floor” Concept
The AI Act, for instance, establishes a harmonized framework for AI systems. Its primary goal is to ensure a functioning internal market by setting a baseline of safety and fundamental rights protections. In defined areas, however, the Act permits Member States to go further: it allows more restrictive national rules on the use of remote biometric identification systems, and it does not preclude national provisions more favourable to workers. In those areas it creates a “regulatory floor,” not a ceiling. An AI system that is compliant with the AI Act may still be illegal in a specific Member State if that state has exercised its right to impose stricter requirements.
This principle is distinct from situations where national law overrides EU guidance due to a lack of harmonization. In areas where the EU has not legislated, or where it has issued non-binding guidance, national law is the sole source of legal obligation. The critical task for a compliance officer is to map the applicability of both layers and identify the points of intersection and potential conflict.
Scenario 1: The GDPR and Employee Monitoring
The GDPR is a regulation that is directly applicable across the EU. However, it contains numerous opening clauses that allow Member States to tailor its application to specific contexts. A classic area of conflict between EU-level guidance and national law is the processing of employee data for monitoring purposes.
EU-Level Guidance and the ePrivacy Directive
At the EU level, the GDPR sets the principles for lawful processing: consent, contract, legal obligation, vital interests, public task, or legitimate interests. For employee monitoring, ‘consent’ is problematic due to the inherent power imbalance in the employment relationship. The European Data Protection Board (EDPB) has issued guidance suggesting that monitoring must be proportionate, transparent, and necessary for legitimate interests pursued by the employer, which are not overridden by the employee’s rights and freedoms. The ePrivacy Directive (which is a directive, not a regulation, and thus requires national transposition) further protects the confidentiality of communications.
National Implementation and Override
Member States have transposed the ePrivacy Directive and interpreted the GDPR’s opening clauses in vastly different ways.
- Germany: German law, particularly § 26 of the Bundesdatenschutzgesetz (BDSG), provides highly specific and restrictive rules on processing employee data, and the Works Constitution Act gives works councils co-determination rights over technical systems capable of monitoring performance, so a works council agreement is generally required. An employer’s “legitimate interest” under the GDPR is interpreted very narrowly in Germany, effectively overriding a more permissive reading of the GDPR’s balancing test.
- France: The French CNIL (Commission Nationale de l’Informatique et des Libertés) has issued strict guidance permitting certain types of monitoring only under narrow conditions, and the French Labour Code imposes significant additional constraints, including prior information and consultation of employee representatives.
- United Kingdom (as a non-EU example, but illustrative of post-Brexit divergence): The UK’s ICO (Information Commissioner’s Office) has issued its own guidance on monitoring which, while similar in principle, has different nuances and practical applications compared to EU EDPB guidance.
In practice, a company using an AI-powered productivity monitoring tool that is deemed compliant with the EDPB’s general principles might find itself in violation of German national law. The national law does not just ‘add detail’; it effectively sets a higher, more prescriptive standard that overrides the general balancing test suggested at the EU level. The mitigation here is to always consult the national implementing legislation and the local Data Protection Authority’s (DPA) specific guidance before deploying such systems.
Key Takeaway: When processing employee data, the GDPR’s general principles are just the starting point. National transpositions of the ePrivacy Directive and specific clauses in national data protection laws create a stricter, binding layer that overrides general EU-level interpretations of ‘legitimate interest’ or ‘balancing tests’.
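The layering described in this scenario can be sketched in code. The following is a minimal, illustrative resolver that combines an EU baseline with stricter national overlays; all field names (`legitimate_interest_allowed`, `requires_works_council`, `requires_prior_consultation`) and the rule values themselves are hypothetical simplifications for the example, not a statement of the law.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MonitoringRules:
    legitimate_interest_allowed: bool   # can the GDPR balancing test be relied on at all?
    requires_works_council: bool        # e.g. German works-council co-determination
    requires_prior_consultation: bool   # e.g. French employee-representative consultation

# EU baseline: legitimate-interest balancing test per EDPB-style guidance.
EU_BASELINE = MonitoringRules(True, False, False)

# Illustrative national overlays; they tighten, never loosen, the baseline.
NATIONAL_OVERLAY = {
    "DE": MonitoringRules(False, True, False),   # narrow reading + works council
    "FR": MonitoringRules(True, False, True),    # CNIL guidance + Labour Code
}

def applicable_rules(country: str) -> MonitoringRules:
    """Return the stricter of the EU baseline and any national overlay."""
    overlay = NATIONAL_OVERLAY.get(country, EU_BASELINE)
    return MonitoringRules(
        legitimate_interest_allowed=EU_BASELINE.legitimate_interest_allowed
            and overlay.legitimate_interest_allowed,
        requires_works_council=EU_BASELINE.requires_works_council
            or overlay.requires_works_council,
        requires_prior_consultation=EU_BASELINE.requires_prior_consultation
            or overlay.requires_prior_consultation,
    )
```

The design point is the merge direction: permissions are AND-ed and obligations are OR-ed, so a national overlay can only ever make the applicable standard stricter than the EU baseline, mirroring the “regulatory floor” logic.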
Scenario 2: The AI Act and Biometric Identification in Public Spaces
The AI Act represents the most ambitious attempt at harmonization in the field of AI. It categorizes AI systems by risk and imposes strict obligations on ‘high-risk’ systems. One of the most controversial areas is the use of remote biometric identification (RBI) systems, particularly real-time RBI in publicly accessible spaces.
The EU Compromise
The AI Act establishes a near-total ban on real-time RBI in publicly accessible spaces for law enforcement purposes. However, it provides a set of narrow, exhaustive exceptions. These allow use only for specific, serious purposes (e.g., preventing a specific and imminent terrorist threat, or searching for victims of abduction or trafficking), subject to prior authorization by a judicial authority or an independent administrative authority and to strict safeguards. This was a hard-fought political compromise at the EU level.
National Security and Public Order: The Carve-Out
Here, the override is not just a matter of stricter rules, but of fundamental scope. The AI Act, like the GDPR, excludes national security and defence from its scope (Article 2(3) of the AI Act), reflecting the Member States’ exclusive competence for national security under the Treaties. While the Act regulates AI systems used by private actors and by public authorities, including law enforcement, the line between law enforcement and national security is blurry.
Member States have broad discretion in defining what constitutes a threat to national security. This allows them to deploy RBI systems under national security legislation that falls outside the AI Act’s safeguards. For example, a national law might authorize the use of RBI for broad “public order” monitoring during large-scale events, an application that would be illegal under the AI Act’s strict, exception-based framework for law enforcement.
Consider the case of a municipality wanting to deploy an AI system for crowd management and identifying individuals on a watchlist during a major sporting event. Under the AI Act, this would likely be considered real-time RBI for law enforcement and would be prohibited unless it meets the narrow exceptions. However, if the national government frames this activity under a “public security” or “prevention of crime” mandate that is carved out or subject to a different national legal basis, the AI Act’s prohibitions may not apply directly. The national law, in this case, creates a parallel legal universe for security applications, effectively overriding the spirit and letter of the AI Act’s harmonization efforts.
Another layer is the use of RBI by private entities, such as retail stores or stadiums. The AI Act classifies remote biometric identification systems used outside the law enforcement prohibition as high-risk, alongside biometric categorisation based on sensitive attributes; simple biometric verification used solely to confirm that a person is who they claim to be is carved out. However, national laws on surveillance and private security can impose additional, and sometimes conflicting, requirements. For instance, some countries may mandate the use of such systems for security reasons (e.g., in critical infrastructure), while others may ban them outright under data protection or privacy laws that predate the AI Act.
Scenario 3: Medical Devices and National Peculiarities in Clinical Investigations
The Medical Devices Regulation (MDR) and the In Vitro Diagnostic Regulation (IVDR) are harmonizing regulations designed to ensure a high level of safety and health for patients and users, while facilitating the free movement of medical devices within the EU. They lay down rules concerning the clinical evaluation, conformity assessment, and post-market surveillance of devices.
Harmonized Rules for a Single Market
The MDR/IVDR aim to replace divergent national rules. They specify requirements for clinical investigations, including informed consent, data protection, and the justification of the investigational device’s safety and performance. A manufacturer conducting a clinical investigation in accordance with the MDR should, in principle, be able to do so across the EU.
National Ethics Committees and Data Protection Laws
In practice, the path to market involves navigating two layers of national oversight that can effectively override or significantly delay the harmonized EU framework:
- Ethics Committee Approvals: The MDR requires that clinical investigations be approved by an ethics committee. While the regulation sets criteria for the committee’s assessment, the procedures, timelines, and specific interpretations of ethical principles (e.g., on vulnerable populations, compensation for injury) are determined at the national level. A manufacturer might receive MDR-compliant approval from an ethics committee in one country (e.g., Spain) but face a much more stringent and lengthy approval process in another (e.g., Germany), where the committee applies additional national ethical guidelines or imposes more demanding data protection requirements.
- National Data Protection Laws: The GDPR is a regulation, but it allows Member States to introduce specific provisions for the processing of health data for scientific research purposes. Many Member States have done so. These national laws can specify different rules on the legal basis for processing (e.g., whether ‘legitimate interest’ is ever permissible for research), the conditions for re-consent, or the requirements for data anonymization. A clinical trial data management plan that is fully GDPR-compliant and approved by a DPA in one country may be rejected by a DPA in another country because of these national derogations. The national DPA’s interpretation, backed by national law, overrides the general principles and guidance from the EDPB.
A concrete example: A company developing an AI-based diagnostic tool for radiology wants to run a multi-center clinical trial across five EU countries. The MDR provides the overarching legal framework for the trial’s conduct. However, the company must submit separate applications to five different national ethics committees and five different national DPAs. The German ethics committee might require additional insurance coverage beyond the MDR’s baseline. The French DPA might insist on data being stored on servers physically located within France, despite the GDPR’s provisions on free data flows within the EU. These national requirements, while not contradicting the MDR or GDPR directly, create a fragmented and complex compliance landscape where the harmonized EU framework is effectively overlaid by a patchwork of stricter national rules.
Mitigation Strategies for Practitioners
Operating in an environment where national law can override or supplement EU guidance requires a proactive and multi-faceted compliance strategy. A “one-size-fits-all” EU-centric approach is insufficient.
1. Jurisdictional Mapping and “Country-Specific” Compliance
Before deploying a new technology or system, especially one involving high-risk AI, sensitive data, or biometrics, conduct a thorough jurisdictional mapping. This involves identifying not only the applicable EU regulations but also the specific national implementing laws, relevant court decisions, and the established practices of national regulators (DPAs, market surveillance authorities, ethics committees) in each country of operation. This is not a one-time task; national laws and interpretations evolve.
2. Engagement with National Regulatory Sandboxes
The AI Act and other regulations promote the use of regulatory sandboxes. These are controlled environments where companies can test innovative technologies under the supervision of regulators. Engaging with a national sandbox is an excellent way to get clarity on how a national authority will interpret EU rules in a specific context. The guidance received within a sandbox, while generally not legally binding, provides a strong indication of the regulator’s future position and can help mitigate enforcement risks.
3. Distinguishing Between Guidance and Law
Organizations must develop the internal legal expertise to distinguish between legally binding EU law (regulations, directives, and binding court judgments), non-binding EU guidance (opinions, recommendations), and binding national law. It is a common mistake to either over-comply with non-binding guidance or to ignore it when it signals a national authority’s expected interpretation. For example, an EDPB opinion on cross-border data transfers is not law, but it informs how every national DPA will enforce the GDPR. Ignoring it is a high-risk strategy.
4. Designing for Compliance and Flexibility
System architects and developers should design systems with regulatory flexibility in mind. This means building modularity into systems to accommodate different national requirements. For example, a data processing system should be able to handle different consent management flows or data localization requirements based on the user’s jurisdiction. For AI systems, this might mean designing for the strictest possible use case (e.g., avoiding real-time RBI entirely) to ensure market access across the entire EU, even if a less stringent interpretation might be possible in a specific country.
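The “design for the strictest case” idea can be expressed as a per-jurisdiction runtime configuration with a most-restrictive default. The profile fields and values below (`consent_flow`, `data_residency`, `realtime_rbi_enabled`) are hypothetical placeholders for whatever controls a real system parameterizes.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DeploymentProfile:
    consent_flow: str          # which consent UX and record-keeping flow to run
    data_residency: str        # where personal data may be stored
    realtime_rbi_enabled: bool # whether real-time biometric features are active

# Most restrictive profile: the safe default for any unmapped jurisdiction.
STRICTEST = DeploymentProfile("explicit_opt_in", "in_country", False)

# Illustrative per-country profiles, each signed off by local counsel.
PROFILES = {
    "FR": DeploymentProfile("explicit_opt_in", "in_country", False),
    "ES": DeploymentProfile("explicit_opt_in", "eu_wide", False),
}

def profile_for(country: str) -> DeploymentProfile:
    """Resolve the runtime profile; unknown jurisdictions get the strictest one."""
    return PROFILES.get(country, STRICTEST)
```

The key design choice is the fallback direction: a missing mapping degrades to the most restrictive behaviour rather than the most permissive, so an overlooked jurisdiction produces a conservative deployment instead of a compliance gap.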
5. Proactive Dialogue and Legal Challenges
When faced with a national requirement that appears to be a clear and unjustified override of harmonized EU law, companies should not hesitate to engage in dialogue with the national authority, referencing the relevant EU provisions and the principle of primacy. In cases of persistent conflict, the ultimate recourse is to challenge the national measure before the national courts, which may (and courts of last instance generally must) refer the question to the Court of Justice of the European Union (CJEU) for a preliminary ruling. While this is a lengthy and costly process, it is the definitive mechanism for resolving disputes over the interpretation of EU law and its relationship with national law.
The interplay between EU and national law is a dynamic and defining feature of the European regulatory landscape. For innovators and operators in the tech and biotech sectors, success depends not only on understanding the high-level EU framework but also on mastering the intricate details of its national implementation and the points where the two diverge.
