Country-Specific Employment and Education Rules Affecting AI
The European Union’s Artificial Intelligence Act (AI Act) establishes a harmonised legal framework intended to function as the primary regulatory baseline for AI systems placed on the Union market. However, for professionals deploying AI in sectors such as education, employment, and healthcare, the Act represents only the surface layer of compliance. Beneath this supranational regulation lies a complex web of national legislation, sector-specific directives, and established regulatory practices that impose significant additional constraints. Understanding the interplay between the AI Act and these domestic frameworks is not merely a matter of legal diligence; it is a prerequisite for operational viability. This article analyzes how local sector rules in education, employment, and health interact with EU-level AI guidance, highlighting where national implementations diverge and impose stricter obligations.
The Architecture of Shared Competence
To understand the regulatory environment, one must first recognize the EU’s legal architecture. The AI Act harmonises the internal market aspects of AI—safety, fundamental rights, and market access. However, the Treaties of the European Union reserve specific policy areas for Member States. These include education, public health (organisation and delivery of services), and employment relations. Consequently, an AI system may pass a conformity assessment by a notified body under the AI Act, only to face a ban or severe restrictions when deployed in a specific national context because of local laws.
For example, while the AI Act regulates the placing on the market of AI systems, it does not regulate the use of AI by public authorities for the provision of public services in the same granular way national laws do. A “high-risk” AI system used for hiring might be CE-marked under the AI Act, but its deployment in Sweden might be blocked by the Swedish Discrimination Act if it cannot be proven to satisfy non-discrimination requirements that go beyond the AI Act’s “fundamental rights impact assessment.”
Education: Data Privacy, Pedagogy, and Surveillance
The education sector is perhaps the most fragmented regarding AI regulation. The AI Act classifies AI systems intended to determine access or admission to educational institutions, to assign individuals to them, or to evaluate learning outcomes (e.g., examination proctoring, student evaluation) as high-risk. However, the Act primarily focuses on the technical reliability and risk management systems of the provider. It does not harmonise the pedagogical or ethical standards of how these tools are used.
Proctoring and Biometric Identification
Many AI-driven proctoring tools rely on biometric categorization and emotion recognition. Under the AI Act, emotion recognition systems are prohibited in the workplace and in educational institutions, with a narrow exception for medical or safety reasons (e.g., detecting a driver’s drowsiness). Outside that exception, the prohibition applies uniformly across the EU. However, even where the AI Act permits certain high-risk uses (like student assessment), national laws often intervene.
In the Netherlands, for instance, the Wet op het onderwijstoezicht (Education Supervision Act) and strict interpretations of the GDPR by the Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP) have historically made the broad deployment of remote proctoring difficult. Universities in the Netherlands have faced legal challenges requiring them to prove that less invasive measures were insufficient before deploying biometric surveillance. This creates a “privacy by design” requirement that is stricter than the AI Act’s general requirements.
Conversely, in France, the Commission nationale de l’informatique et des libertés (CNIL), the national data protection authority, has issued specific guidelines on exam proctoring. While acknowledging the high-risk nature, the CNIL emphasizes the principle of proportionality. A key difference here is the French requirement for a délai de conservation (retention period) that is strictly limited, often requiring the immediate deletion of biometric data after the exam. The AI Act requires robust data governance but does not set specific retention timelines, leaving this to national privacy laws.
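Operationally, a strict retention rule of this kind tends to be implemented as a purge routine tied to the exam session rather than as a generic retention policy. The sketch below is illustrative only: it assumes biometric captures are stored alongside a non-biometric integrity log, and the record layout and the choice to retain an integrity summary are assumptions, not CNIL requirements.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ExamSession:
    """Hypothetical record of a remotely proctored exam sitting."""
    session_id: str
    candidate_id: str
    biometric_frames: list[bytes] = field(default_factory=list)  # raw webcam captures
    integrity_flags: list[str] = field(default_factory=list)     # non-biometric findings
    closed: bool = False

def close_and_purge(session: ExamSession) -> dict:
    """Close the session and immediately discard biometric material,
    keeping only a non-biometric integrity summary."""
    session.closed = True
    summary = {
        "session_id": session.session_id,
        "flags": list(session.integrity_flags),
        "purged_at": datetime.now(timezone.utc).isoformat(),
    }
    session.biometric_frames.clear()  # deletion of biometric data right after the exam
    return summary
```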
Algorithmic Guidance vs. Human Agency
AI systems used to recommend educational pathways or predict student success are increasingly common. The AI Act categorizes these as high-risk if they influence educational outcomes. However, national education laws often enshrine the professional autonomy of teachers.
In Germany, the Kulturhoheit (cultural sovereignty) of the Länder (federal states) governs education. An AI system that suggests a student should attend a Gymnasium (academic track) rather than a Realschule (vocational track) would face immense scrutiny. German law requires that such decisions be made by educational professionals. An AI tool cannot legally make the final determination; it can only support the human decision. If the AI tool is not transparent enough to allow a teacher to understand why a recommendation was made, it risks breaching the national requirement that administrative decisions (Verwaltungsakte) be traceable and reasoned.
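One way to respect this division of roles at the system level is to treat the model output as strictly advisory and to record the binding decision as the teacher’s own, reasoned act. The following is a hypothetical record structure under that assumption; the field names and track labels are illustrative and not drawn from any German administrative standard.

```python
from dataclasses import dataclass
from typing import Literal

@dataclass(frozen=True)
class TrackRecommendation:
    """Hypothetical output format: the system only suggests, never decides."""
    student_id: str
    suggested_track: Literal["Gymnasium", "Realschule"]
    reasons: list[str]        # human-readable factors behind the suggestion
    confidence: float

@dataclass(frozen=True)
class TrackDecision:
    """The binding decision is always recorded as the teacher's own act."""
    student_id: str
    decided_track: str
    decided_by: str           # the responsible educational professional
    recommendation_reviewed: bool
    justification: str        # the teacher's reasoning, independent of the model

def finalise(rec: TrackRecommendation, teacher: str, track: str, justification: str) -> TrackDecision:
    # The recommendation is an input to, not a substitute for, the human decision.
    return TrackDecision(
        student_id=rec.student_id,
        decided_track=track,
        decided_by=teacher,
        recommendation_reviewed=True,
        justification=justification,
    )
```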
Employment: The Intersection of Labour Law and Algorithmic Management
The AI Act explicitly targets AI systems used for recruitment and selection (CV sorting, interview analysis) and for promotion/termination decisions as high-risk. While the Act mandates human oversight, data governance, and robustness, it stops short of regulating the employment relationship itself. This is the domain of national labour law, which is currently undergoing a rapid transformation to address “algorithmic management.”
The Platform Work Directive and National Transposition
While not strictly an AI regulation, the Platform Work Directive (adopted in 2024) creates a legal presumption of an employment relationship when certain control criteria are met. Many of these criteria are exercised via AI algorithms (e.g., monitoring performance, setting pay, supervising tasks). Member States are currently transposing this into national law.
Spain has been a frontrunner with its Ley Rider (Rider Law), which presumes delivery riders are employees if their work is organized by algorithms. This national law imposes a transparency obligation on algorithms that is arguably more specific than the AI Act. Under the Ley Rider, companies must inform workers about how algorithms affect their pay and working conditions in a way that is understandable. The AI Act requires “human oversight,” but Spanish labour law requires effective contestability by the worker in a labour court context.
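As a purely illustrative sketch of what such worker-facing transparency could look like in practice, a deployer might keep a structured register of the parameters the algorithm uses and render it in plain language on request; the factor names below are invented, and this is not a disclosure format prescribed by the Ley Rider.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AlgorithmicFactor:
    """Hypothetical disclosure entry for one parameter the algorithm uses."""
    name: str
    affects: str            # e.g. "order assignment", "pay per delivery"
    plain_language: str     # explanation understandable to the worker

def worker_notice(factors: list[AlgorithmicFactor]) -> str:
    """Render the register as plain text a worker (or a labour court) can read."""
    lines = ["How the assignment algorithm affects your work:"]
    for f in factors:
        lines.append(f"- {f.name} ({f.affects}): {f.plain_language}")
    return "\n".join(lines)

factors = [
    AlgorithmicFactor("acceptance rate", "order assignment",
                      "Declining orders lowers your priority for future offers."),
    AlgorithmicFactor("peak-hour availability", "pay per delivery",
                      "Logging in during advertised peak slots increases the per-order rate."),
]
print(worker_notice(factors))
```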
Algorithmic Bias and Discrimination Law
The AI Act requires providers to mitigate bias in high-risk systems. However, national anti-discrimination laws often impose a much higher burden of proof on the employer.
In the United Kingdom, which post-Brexit sits outside the AI Act framework, the Equality Act 2010 governs discrimination. If an AI recruitment tool disproportionately rejects female candidates, the burden may shift to the employer to prove the tool is a “proportionate means of achieving a legitimate aim.” This is a legal test that goes beyond the technical documentation required by the AI Act. An AI system might be technically compliant with the AI Act (i.e., it has a risk management system and data governance) but still be illegal to use in the UK because it cannot satisfy the specific proportionality test required by UK equality law.
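A provider or deployer straddling both regimes might therefore run a simple disparity check over selection outcomes as an early warning, well before any legal justification argument is needed. The sketch below computes group selection rates and their ratio; the sample records are invented, and any numeric threshold read into the ratio is an assumption, since UK law turns on proportionality rather than a fixed cut-off.

```python
from collections import Counter

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Selection rate per group from (group, selected) records."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparity_ratio(rates: dict[str, float]) -> float:
    """Ratio of the lowest to the highest group selection rate.
    A low ratio is a signal to investigate, not a legal threshold."""
    return min(rates.values()) / max(rates.values())

# Illustrative data: (protected-characteristic group, shortlisted?)
records = [("F", False), ("F", False), ("F", True),
           ("M", True), ("M", True), ("M", False)]
rates = selection_rates(records)
print(rates, disparity_ratio(rates))
```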
In Belgium, collective bargaining agreements on workplace monitoring and new technologies (notably CBA No. 81 on the monitoring of electronic online communications data) impose strict information and consultation obligations towards works councils and trade unions before AI tools that monitor employees can be deployed. This is a procedural constraint that exists outside the AI Act but is mandatory for deployment.
Healthcare: The Dual-Regulation Trap
The healthcare sector is unique because AI systems often fall under two parallel regulatory regimes: the AI Act and the Medical Devices Regulation (MDR) / In Vitro Diagnostic Regulation (IVDR). The AI Act explicitly defers to the MDR/IVDR for safety risks, but national health authorities add a third layer regarding the organisation of healthcare services.
Clinical Decision Support vs. Autonomous Diagnosis
The AI Act prohibits AI systems that manipulate human behavior to circumvent free will or exploit vulnerabilities (Article 5). In a clinical setting, this interacts with national laws on informed consent.
In France, the Loi Kouchner establishes the patient’s right to be informed and to consent to care. If an AI system recommends a specific treatment, and a doctor follows it without fully understanding the AI’s reasoning (automation bias), and the patient suffers harm, the legal liability is complex. French jurisprudence on medical liability is strict. The AI Act’s requirement for “human oversight” is interpreted in France as requiring the doctor to be able to explain the medical decision independently of the AI. If the AI is a “black box,” the doctor cannot fulfill their legal duty of explanation to the patient, rendering the AI unusable.
Data Access and Public Health Sovereignty
AI models, particularly in healthcare, require vast amounts of data. The AI Act encourages the use of “synthetic data,” but high-quality models often need real patient data. National health data laws often restrict this more tightly than the GDPR or the AI Act.
In Italy, the Garante per la protezione dei dati personali has taken a hard line on the processing of health data for AI training. Following its temporary ban of ChatGPT in 2023, the regulator emphasized that data used for training must be strictly necessary and proportionate. Italian law requires that health data processing be authorized by the Ministry of Health or the regional health authorities, creating a bureaucratic layer that AI providers must navigate before they can even begin to comply with the AI Act’s data governance requirements.
In Germany, the Digital Healthcare Act (DVG) regulates the integration of digital health applications (DiGAs) into the healthcare system. To be prescribed by doctors and reimbursed by statutory health insurance, an AI tool must undergo a fast-track assessment that focuses on positive healthcare effects. This is a market-access requirement driven by national health policy that prioritizes clinical utility over the AI Act’s focus on safety and fundamental rights. An AI system could be safe and compliant with the AI Act but fail to get reimbursement in Germany because it does not demonstrate a measurable improvement in patient outcomes under the DVG framework.
Public Procurement and Accountability
When AI is purchased by public authorities (e.g., a municipality buying an AI system for social benefit allocation), the procurement rules become a critical regulatory constraint.
The EU Public Procurement Directive allows for technical specifications that require specific characteristics. However, national implementations vary. In the Netherlands, the Commissie van Aanbestedingsexperts (Procurement Experts Commission) has ruled that public bodies must ensure that AI systems used in decision-making are sufficiently transparent to allow for legal recourse. If a municipality cannot explain why an AI system denied a benefit, it may violate the Dutch Algemene Wet Bestuursrecht (General Administrative Law Act).
This creates a requirement for “explainability” that is not explicitly defined in the AI Act’s technical annexes but is required by national administrative law. A provider might find that their AI system, while CE-marked, is rejected in a public tender in Finland or Denmark because the procurement specifications demand a level of explainability (e.g., “local interpretable model-agnostic explanations” or LIME) that the provider’s “black box” model cannot offer.
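For illustration only, a post-hoc local explanation of a single decision can be generated with the open-source lime package against any classifier that exposes predicted probabilities. The model, feature names, and data below are synthetic assumptions, and whether such an explanation would satisfy a given procurement specification or national administrative law is a legal question this sketch does not answer.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                   # e.g. income, household size, arrears
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # synthetic "benefit granted" label
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, mode="classification",
    feature_names=["income", "household_size", "arrears"],
    class_names=["denied", "granted"],
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
# Feature-level weights that could accompany an individual decision notice.
print(explanation.as_list())
```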
Conclusion: The Necessity of a Layered Compliance Strategy
The European AI regulatory landscape is not a monolith but a mosaic. The AI Act provides the frame, but the picture is filled in by national laws that reflect deep-seated cultural, legal, and administrative traditions. For AI practitioners in robotics, biotech, and data systems, compliance cannot end with the technical documentation required for CE marking.
Success in the European market requires a “regulatory stack” approach. One must assess the AI Act’s risk classification, but then overlay that with sector-specific laws in the target Member State. Does the national education law require pedagogical transparency? Does the labour code require trade union consultation? Does the health authority require specific clinical utility proofs?
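In practice, that stack can be maintained as a living checklist that pairs the AI Act classification with the national, sector-specific obligations of each target market. The structure below is a purely illustrative sketch; the entries, field names, and legal references are placeholders to be completed by counsel for the actual deployment.

```python
from dataclasses import dataclass, field

@dataclass
class NationalCheck:
    """One Member-State, sector-specific obligation layered on top of the AI Act."""
    country: str
    source: str          # e.g. statute, regulator guidance, collective agreement
    requirement: str
    satisfied: bool = False

@dataclass
class DeploymentAssessment:
    system: str
    ai_act_risk: str                                   # e.g. "high-risk (Annex III)"
    national_checks: list[NationalCheck] = field(default_factory=list)

    def open_items(self) -> list[NationalCheck]:
        return [c for c in self.national_checks if not c.satisfied]

# Placeholder entries only; the real list must be built per Member State and sector.
assessment = DeploymentAssessment(
    system="CV-screening tool",
    ai_act_risk="high-risk (Annex III)",
    national_checks=[
        NationalCheck("BE", "collective agreement on monitoring", "consult trade unions"),
        NationalCheck("UK", "Equality Act 2010", "document proportionality of selection criteria"),
    ],
)
print([c.requirement for c in assessment.open_items()])
```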
Ignoring these local constraints is a significant risk. It can lead to products that are technically legal to sell but practically impossible to deploy. The future of AI regulation in Europe will be defined by how these national interpretations evolve, likely creating a fragmented compliance landscape that requires sophisticated legal and technical due diligence for years to come.
