
Enforcement Without Court: Orders, Warnings, and Remedies

European regulatory enforcement for high-risk technologies is increasingly characterised by a shift away from purely punitive, post-harm litigation toward proactive, structural intervention. While the judiciary remains the ultimate arbiter of rights and liabilities, the daily reality of compliance is shaped by administrative actors—supervisory authorities, regulators, and market surveillance bodies—who wield a growing arsenal of tools to correct behaviour, reshape product architectures, and supervise ongoing operations. This dynamic is particularly pronounced in the domains of artificial intelligence, robotics, biotech, and data-driven systems, where harms can be systemic, fast-moving, and difficult to reverse once they materialise. Understanding the spectrum of enforcement tools beyond court decisions is therefore essential for any entity operating within the European regulatory ecosystem.

The European Union’s regulatory design deliberately equips administrative authorities with mechanisms that can operate faster and more flexibly than traditional court proceedings. These tools include corrective orders, administrative fines, negotiated remedies, and structured patterns of ongoing supervision. They are embedded in sectoral frameworks such as the General Data Protection Regulation (GDPR), the AI Act, the Medical Device Regulation (MDR), the Digital Services Act (DSA), and the Digital Markets Act (DMA), as well as in cross-cutting regimes like the NIS2 Directive and the Cyber Resilience Act. While the underlying principles of legality, proportionality, and due process constrain these powers, their practical application often determines whether a company can continue to place a product on the market or must withdraw, redesign, or suspend a service.

Administrative Enforcement as the Default Mode

Administrative enforcement is the primary engine of regulatory compliance in Europe. National supervisory authorities—such as data protection authorities (DPAs), market surveillance authorities for products, and financial regulators—act as the frontline decision-makers. They investigate complaints, conduct audits, and issue binding decisions. The European Commission and EU bodies such as the European Data Protection Board (EDPB) or the European Medicines Agency (EMA) coordinate enforcement or, in specific domains, enforce directly. The result is a layered system where EU-level harmonised rules are implemented and enforced by national bodies, creating a network of overlapping competencies and, at times, divergent practices.

Administrative decisions can impose immediate obligations on controllers, manufacturers, or service providers. These obligations often target the modus operandi of an organisation: how data is processed, how an AI system is tested and monitored, or how a device’s safety is maintained. The emphasis is on corrective action and ongoing compliance rather than solely on monetary penalties. This approach reflects the recognition that, in complex technological environments, a fine alone may not prevent recurrence; structural changes to processes and governance are required.

Legal Basis and Procedural Safeguards

Every administrative measure rests on a statutory foundation. In the GDPR, Article 58 confers investigative and corrective powers on DPAs, including the ability to issue warnings, reprimands, and orders to bring processing into compliance. The AI Act (Regulation (EU) 2024/1689) grants market surveillance authorities similar powers, including the imposition of restrictions on the making available on the market of high-risk AI systems and the requirement to take corrective measures. The MDR and IVDR empower Notified Bodies and national authorities to require corrective actions, impose restrictions, or withdraw certificates.

These powers are subject to procedural guarantees. The right to be heard, the duty to state reasons, and the requirement for decisions to be based on clear evidence are constitutional principles and explicit statutory requirements. In cross-border cases involving multiple Member States, the cooperation mechanisms under the GDPR’s Article 60 or the AI Act’s provisions on supervision of high-risk AI systems listed in Annex III (where national market surveillance authorities coordinate at Union level, with the support of the AI Office) add complexity. The process can involve weeks or months of consultation, during which the affected entity may present arguments, propose remedies, and negotiate the scope of orders.

Distinction Between Warnings, Orders, and Fines

Warnings and reprimands serve as formal notice of non-compliance without immediate legal or financial consequences. They are recorded and can influence future enforcement decisions, particularly regarding recidivism or the seriousness of a breach. Orders to bring processing into compliance or to cease non-compliant activities are more forceful. They specify concrete steps—such as deleting data, updating technical documentation, modifying an algorithm, or conducting a new conformity assessment—and set deadlines.

Fines operate on a different level. Under the GDPR, fines can reach up to €20 million or 4% of total worldwide annual turnover, whichever is higher. The AI Act introduces administrative fines of up to €35 million or 7% of worldwide annual turnover for the most serious infringements. However, authorities often prefer to combine corrective orders with lower fines or even to forego fines where the operator demonstrates genuine cooperation and a robust remediation plan. This reflects a pragmatic calculus: deterrence is important, but ensuring systemic change is often the more valuable outcome.
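The “whichever is higher” formula above is a simple maximum over two caps. A minimal sketch, using only the figures stated in the text (regime labels and constant names are illustrative, not statutory terminology):

```python
# Statutory fine ceilings as described above:
# GDPR: up to €20M or 4% of worldwide annual turnover, whichever is higher.
# AI Act: up to €35M or 7% for the most serious infringements.
GDPR_CAP_FIXED = 20_000_000
GDPR_CAP_RATE = 0.04
AI_ACT_CAP_FIXED = 35_000_000
AI_ACT_CAP_RATE = 0.07

def fine_ceiling(annual_turnover_eur: float, regime: str) -> float:
    """Maximum administrative fine: max(fixed amount, rate * turnover)."""
    if regime == "gdpr":
        return max(GDPR_CAP_FIXED, GDPR_CAP_RATE * annual_turnover_eur)
    if regime == "ai_act":
        return max(AI_ACT_CAP_FIXED, AI_ACT_CAP_RATE * annual_turnover_eur)
    raise ValueError(f"unknown regime: {regime}")

# A company with €2bn worldwide turnover: 4% = €80M > €20M,
# so the GDPR ceiling is the turnover-based figure.
print(fine_ceiling(2_000_000_000, "gdpr"))  # 80000000.0
```

Note that these are ceilings, not the fines actually imposed: as the text explains, authorities routinely set fines well below the cap, or forego them entirely, in favour of corrective measures.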

Corrective Orders: The Workhorse of Enforcement

Corrective orders are the most frequently used tool for bringing complex systems into compliance. They are tailored to the specific risk and context, ranging from data protection impact assessments and privacy-by-design adjustments to technical modifications in AI systems and updates to clinical evaluation reports for medical devices. The flexibility of corrective orders makes them suitable for addressing novel risks that do not fit neatly into predefined categories.

Scope and Content of Orders

A corrective order typically identifies the legal provision violated, the factual findings, the required remedial actions, and the timeframe for implementation. It may also mandate reporting obligations, such as periodic updates to the authority on progress and post-market surveillance data. In the context of AI, orders might require the retraining of models with lawfully sourced data, the introduction of human oversight mechanisms, or the implementation of robust logging and traceability features to meet the AI Act’s transparency and accountability requirements.
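The “robust logging and traceability” an order might mandate can be pictured as an append-only audit trail in which each entry links to its predecessor, making after-the-fact tampering detectable. The following is a hypothetical sketch (field names and the hash-chain design are illustrative assumptions, not requirements prescribed by the AI Act):

```python
import json
import time
from hashlib import sha256

class AuditLog:
    """Hypothetical append-only audit trail with a tamper-evident hash chain."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # sentinel for the first entry

    def record(self, event: str, actor: str, detail: dict) -> dict:
        entry = {
            "ts": time.time(),        # when the event occurred
            "event": event,           # e.g. "inference", "human_override"
            "actor": actor,           # system component or human reviewer
            "detail": detail,
            "prev": self._prev_hash,  # link back to the previous entry
        }
        entry_hash = sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = entry_hash
        self._prev_hash = entry_hash
        self.entries.append(entry)
        return entry

log = AuditLog()
log.record("inference", "model-v3", {"input_id": "req-123", "score": 0.87})
log.record("human_override", "reviewer-7", {"input_id": "req-123", "decision": "reject"})
print(len(log.entries))  # 2
```

Recording both automated decisions and human overrides in the same trail is one way to evidence the human-oversight and traceability expectations discussed above.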

For biotech and medical devices, orders can require changes to manufacturing processes, updates to labelling, or the initiation of field safety corrective actions. Under the MDR, authorities can impose restrictions on the placing on the market or require the withdrawal of devices from the market. These orders are often coordinated with Notified Bodies, which play a central role in verifying that corrective measures restore conformity.

Enforcement and Sanctions for Non-Compliance

If an operator fails to comply with a corrective order, authorities can escalate. This may involve periodic penalty payments—fines calculated per day of delay—aimed at compelling compliance rather than punishing past conduct. In extreme cases, authorities can prohibit the offering of the service or the making available of the product in the Union. These measures are subject to judicial review, but they take immediate effect unless suspended by the authority or a court.

The practical risk is that non-compliance can trigger a cascade of enforcement actions across Member States. Under the GDPR’s one-stop-shop mechanism, the lead authority’s decision binds other concerned authorities, but dissenting opinions and local enforcement nuances remain possible. In the AI Act’s supervision of high-risk systems, the AI Office may intervene where there is a cross-border impact, potentially harmonising or intensifying the response.

Case Examples Across Sectors

In data protection, DPAs have issued orders requiring the deletion of illegally processed personal data, the cessation of behavioural advertising without valid consent, and the implementation of data protection by default. In AI, authorities have required the suspension of facial recognition systems used in public spaces without a proper legal basis or the introduction of bias mitigation measures for recruitment algorithms. In medical devices, authorities have mandated corrective actions for devices with software flaws that could affect diagnosis or treatment, requiring manufacturers to update software and notify healthcare providers.

These examples illustrate a common pattern: authorities focus on the source of the risk and the process that allowed it to emerge. The goal is not merely to stop a specific outcome but to ensure that the system is designed and operated to prevent recurrence.

Negotiated Remedies: Compliance Through Dialogue

European regulators increasingly use negotiated remedies to resolve complex cases without resorting to protracted litigation. These remedies can take the form of commitments, undertakings, or compliance agreements. They are particularly common in competition law (e.g., the European Commission’s commitments procedure under Article 9 of Regulation 1/2003) and in data protection (e.g., DPAs accepting binding commitments from companies to modify practices). The AI Act and product safety regimes also allow for structured remediation plans, often in the context of conformity assessments or post-market surveillance.

Commitments and Undertakings

Commitments are proposals by the concerned entity to alter its behaviour to address the authority’s concerns. They can include architectural changes to a platform, the introduction of interoperability measures, the separation of data processing activities, or the adoption of specific technical standards. Once accepted by the authority, commitments become legally binding. Failure to honour them can lead to fines and the reopening of the case.

Commitments offer predictability and allow companies to avoid a formal finding of infringement. However, they also require careful drafting. The commitments must be verifiable, enforceable, and proportionate. Vague promises to “improve fairness” or “enhance transparency” are unlikely to be accepted. Instead, authorities expect measurable actions—such as publishing detailed documentation, implementing specific technical controls, or submitting to independent audits.

Compliance Agreements and Structured Remediation

In sectors like medical devices or AI, authorities may enter into compliance agreements that set out a roadmap for achieving conformity. These agreements can include milestones for updating technical documentation, conducting additional clinical investigations, or implementing post-market monitoring plans. They are often accompanied by interim measures, such as restrictions on the use of a device or system until specific milestones are met.

Compliance agreements are particularly useful where the legal framework is new or where the technology is evolving. They allow authorities to signal expectations while giving operators time to adapt. They also create a record of cooperation that can be relevant in determining the level of any eventual fine.

Benefits and Risks of Negotiation

Negotiated remedies can reduce uncertainty and minimise disruption. They allow the authority to achieve compliance faster than through litigation and give the operator a degree of control over the remediation process. However, there are risks. Commitments may be interpreted narrowly by the authority, and any perceived backsliding can trigger enforcement. Public and stakeholder scrutiny can also be intense, particularly in high-profile cases. Operators must ensure that commitments are operationally feasible and aligned with their broader compliance strategy.

Ongoing Supervision: The New Normal

Regulatory supervision is no longer a one-off event but a continuous process. The AI Act’s post-market surveillance regime, the GDPR’s accountability obligations, and the MDR’s vigilance requirements all embed ongoing oversight into the lifecycle of products and services. This shift reflects the reality that risks in AI and data-driven systems can emerge over time due to model drift, changes in data sources, or evolving usage contexts.

Post-Market Surveillance and Continuous Conformity

Under the AI Act, providers of high-risk AI systems must implement post-market surveillance systems to collect and analyse data on performance, identify emerging risks, and take corrective actions. This includes reporting serious incidents to market surveillance authorities. Similarly, the MDR requires manufacturers to maintain vigilance systems, report field safety corrective actions, and update clinical evaluations.
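One way the “collect and analyse data on performance, identify emerging risks” obligation could be operationalised is a rolling-window monitor that flags when a tracked metric drifts below an acceptable band. A minimal sketch, with entirely illustrative thresholds and class names:

```python
from collections import deque
from statistics import mean

class PerformanceMonitor:
    """Hypothetical post-market monitor: alerts when the rolling mean of a
    performance metric (e.g. accuracy on audited samples) falls more than
    `tolerance` below the validated `baseline`."""

    def __init__(self, baseline: float, tolerance: float, window: int = 100):
        self.baseline = baseline
        self.tolerance = tolerance
        self.samples = deque(maxlen=window)  # rolling window of recent samples

    def observe(self, value: float) -> bool:
        """Record a metric sample; return True if a drift alert should fire."""
        self.samples.append(value)
        return mean(self.samples) < self.baseline - self.tolerance

monitor = PerformanceMonitor(baseline=0.95, tolerance=0.03, window=50)
alerts = [monitor.observe(v) for v in [0.96, 0.94, 0.90, 0.88, 0.85]]
print(alerts)
```

An alert would then feed the provider’s corrective-action and incident-reporting processes rather than being an end in itself.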

These obligations are not merely administrative; they require technical infrastructure for logging, monitoring, and incident detection. Authorities can request access to this data and use it to assess whether a system remains compliant. In practice, this means that operators must design their systems from the outset to support regulatory oversight.

Reporting Obligations and Incident Management

Reporting obligations are a core component of ongoing supervision. The GDPR requires certain breaches to be reported to DPAs within 72 hours. The AI Act mandates reporting of serious incidents within 15 days of becoming aware of them. The DSA requires platforms to report systemic risks and measures taken to mitigate them. These timelines are strict, and failure to report can itself trigger enforcement.
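Because these windows run from the moment of awareness, tracking the latest permissible notification time is straightforward date arithmetic. A sketch using the two deadlines named above (the real obligations carry nuances — e.g. the GDPR’s “where feasible” qualifier and shorter AI Act windows for certain incident types — so this is illustrative, not legal advice):

```python
from datetime import datetime, timedelta, timezone

# Reporting windows as stated in the text; keys are illustrative labels.
REPORTING_WINDOWS = {
    "gdpr_breach": timedelta(hours=72),
    "ai_act_serious_incident": timedelta(days=15),
}

def reporting_deadline(awareness: datetime, obligation: str) -> datetime:
    """Latest permissible notification time after becoming aware."""
    return awareness + REPORTING_WINDOWS[obligation]

aware = datetime(2025, 3, 10, 9, 0, tzinfo=timezone.utc)
print(reporting_deadline(aware, "gdpr_breach"))              # 2025-03-13 09:00:00+00:00
print(reporting_deadline(aware, "ai_act_serious_incident"))  # 2025-03-25 09:00:00+00:00
```

In practice such a calculator would sit inside the incident-management workflow described below, triggering escalation well before the statutory deadline rather than at it.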

Effective incident management requires clear internal processes, trained personnel, and robust documentation. Authorities increasingly expect to see evidence of root cause analysis, corrective actions, and preventive measures. In some cases, they may require the operator to conduct independent audits or assessments and submit the results.

Supervisory Patterns Across Member States

While EU rules are harmonised, supervisory practices vary. Some DPAs are more interventionist, issuing frequent orders and fines; others prefer guidance and dialogue. In product safety, authorities with strong technical expertise may conduct detailed inspections and testing, while others rely more on manufacturer documentation. The AI Office is expected to develop common supervisory practices, but national authorities retain significant discretion. Operators with a presence in multiple Member States must therefore be prepared for different expectations and enforcement styles.

Interaction with the Judiciary: Limits and Avenues

Administrative enforcement does not exclude judicial review. Operators can challenge decisions before national courts, which may refer questions to the Court of Justice of the European Union (CJEU) for interpretation. Courts can annul or modify decisions, impose procedural safeguards, and clarify the scope of regulatory powers. However, judicial proceedings are typically slower than administrative enforcement, and interim measures are not always available.

The relationship between administrative orders and judicial decisions is complementary. Authorities act swiftly to address risks, while courts ensure legality and proportionality. This dynamic creates a layered enforcement environment where compliance must be managed in parallel with potential litigation.

Practical Implications for Operators

Understanding the enforcement landscape is critical for designing compliant systems and processes. Operators should anticipate that authorities will use a mix of warnings, orders, negotiated remedies, and ongoing supervision. The emphasis is on demonstrable compliance and continuous improvement.

Designing for Compliance and Supervision

Compliance must be embedded in the product lifecycle. This includes conducting impact assessments (e.g., DPIAs, AI risk assessments), documenting conformity, implementing technical controls for transparency and accountability, and establishing post-market surveillance systems. Authorities expect to see evidence of governance structures, such as compliance committees, risk management frameworks, and clear roles and responsibilities.

For AI systems, this means ensuring that data is lawfully sourced and processed, that models are tested for bias and robustness, that human oversight is meaningful, and that logs are maintained to support traceability. For medical devices, it means maintaining a quality management system, updating clinical evaluations, and reporting incidents promptly.

Managing Cross-Border Complexity

Operators with a cross-border footprint must navigate the one-stop-shop and mutual recognition mechanisms. A corrective order issued by a lead authority in one Member State can have immediate effect across the EU. Coordination with local teams is essential to ensure consistent implementation. Where the AI Office is involved, operators may need to engage at both national and EU levels.

It is also important to monitor guidance from EU bodies and national authorities. The EDPB, for example, issues guidelines on GDPR interpretation that influence enforcement. The AI Office will likely publish best practices for post-market surveillance and conformity assessments. Aligning internal policies with these evolving standards reduces the risk of enforcement surprises.

Engaging with Authorities

Early engagement can be beneficial. When an authority raises concerns, a constructive response that acknowledges issues and proposes concrete remediation can lead to negotiated remedies rather than coercive orders. However, engagement must be informed by legal advice and operational feasibility. Commitments that cannot be implemented risk escalating enforcement.

Transparency is also important. Providing clear documentation, explaining technical choices, and demonstrating a culture of compliance can influence the authority’s assessment of proportionality and good faith. Conversely, opacity or defensiveness can lead to more stringent measures.

Comparative Perspectives: National Nuances

Despite EU harmonisation, national enforcement cultures differ. In Germany, the federal data protection authority (the BfDI) and the Land-level supervisory authorities are known for rigorous enforcement and detailed guidance. In France, the CNIL has been active in shaping the interpretation of GDPR principles, particularly around consent and cookies. In Ireland, the DPC’s role as a lead authority for many tech companies results in a high volume of cross-border cases and a focus on procedural coordination.

For product safety, authorities in countries with strong industrial bases—such as Germany, France, and the Netherlands—often have deep technical expertise and conduct frequent inspections. In some smaller Member States, resources may be more limited, leading to a greater reliance on manufacturer self-assessment and targeted interventions. Understanding these nuances helps operators tailor their compliance strategies and anticipate potential enforcement patterns.

Emerging Trends and Future Directions

Several trends are shaping the future of administrative enforcement in Europe. First, there is a move toward greater harmonisation of supervisory practices, particularly under the AI Act and the DSA. The AI Office’s role in coordinating supervision of high-risk AI systems and GPAI models will likely lead to more consistent expectations across the Union.

Second, authorities are increasingly using data-driven supervision. They request access to logs, model cards, and monitoring data, and they may use their own technical tools to audit systems. This requires operators to ensure that their systems are not only compliant but also inspectable.

Third, there is growing emphasis on accountability and explainability. Authorities want to understand how decisions are made, what data is used, and how risks are managed. This is not just a legal requirement; it is a practical necessity for gaining and maintaining trust.

Finally, the interplay between enforcement and other policy goals—such as innovation, competitiveness, and security—will continue to evolve. Regulators are mindful of the need to avoid stifling innovation, but they are equally committed to protecting fundamental rights and safety. The use of negotiated remedies and structured remediation reflects this balance, but the underlying message is clear: compliance is not optional, and supervision is here to stay.

Conclusion: Navigating Enforcement Without Court

Enforcement in the European regulatory landscape is increasingly administrative, continuous, and collaborative. Corrective orders, negotiated remedies, and ongoing supervision are the primary tools through which authorities ensure that complex technologies are safe, fair, and lawful. For operators, this means that compliance is not a one-time event but a sustained commitment to governance, documentation, and technical excellence. By designing systems with oversight in mind, engaging constructively with authorities, and maintaining a proactive posture on risk management, organisations can navigate enforcement without court and build resilient, trustworthy products and services.
