Building a Conflict-Ready Compliance Strategy

Organisations operating within the European digital landscape are currently navigating a period of profound regulatory convergence. The phased implementation of the AI Act, the continued enforcement of the GDPR, and the arrival of the Digital Operational Resilience Act (DORA) and the Data Act create a dense web of obligations that often intersect, overlap, and occasionally contradict one another. Compliance is no longer a static checklist; it is a dynamic capability. To build a strategy that is resilient to these conflicting requirements, one must move beyond simple policy adoption and engineer a systemic approach to governance. This requires a shift in perspective: from viewing compliance as a cost centre to treating it as a critical component of operational architecture. We are moving from an era of “compliance by design” as a theoretical ideal to “conflict-readiness” as a practical necessity. This article outlines the structural components of such a strategy, focusing on decision logs, legal review triggers, modular policies, and escalation mechanisms.

The Regulatory Landscape: A Multiplicity of Overlapping Mandates

Understanding the source of conflict is the first step in designing a strategy to manage it. The European Union is not introducing regulations in a vacuum; new frameworks are layered upon existing ones, creating a complex compliance topology. The most significant source of tension is the intersection of the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AI Act). While both aim to protect fundamental rights, their operational logics diverge.

The GDPR is fundamentally a data rights framework. It prioritises individual autonomy, transparency, and purpose limitation. The AI Act, conversely, is a product safety and fundamental rights risk management framework. It categorises systems based on their potential to cause harm, imposing obligations on providers and deployers that are risk-proportional. A conflict arises when a data subject exercises their right to erasure under Article 17 of the GDPR, but that same data is required to maintain the integrity of a high-risk AI system’s training dataset or to audit its performance under the AI Act. Similarly, the contested “right to explanation”, grounded in Article 22 of the GDPR and the transparency provisions of Articles 13–15, is often interpreted as a right to meaningful information about the logic involved, whereas the AI Act mandates detailed documentation of the system’s development process, including data sources, training methods, and evaluation metrics. The former is user-facing; the latter is regulator-facing, and the information required to satisfy one may not be sufficient for the other.

Further complexity is introduced by the Digital Services Act (DSA) and the Digital Markets Act (DMA), which impose transparency and fairness obligations on platform operators, and DORA, which mandates rigorous ICT risk management for the financial sector. A financial institution using a high-risk AI system for credit scoring must simultaneously satisfy DORA’s operational resilience requirements, GDPR’s data protection principles, and the AI Act’s conformity assessments. The documentation required for a DORA incident report might conflict with the confidentiality obligations required to protect trade secrets under the AI Act. This is not a theoretical problem; it is a daily operational reality for compliance officers, data scientists, and legal teams.

The Principle of Conflict-Readiness

A conflict-ready strategy acknowledges that perfect alignment is impossible. Instead of seeking a single, unified policy that covers all scenarios—a “master policy”—the objective is to create a system that can identify, analyse, and resolve conflicts efficiently and defensibly. This system must be embedded within the organisation’s operational workflows, not siloed within a legal department. It requires a fusion of legal expertise, technical understanding, and process engineering. The strategy must be proactive, anticipating points of friction before they become violations, and reactive, providing clear pathways for resolution when conflicts emerge in real-time.

Component 1: The Decision Log as a Legal Artifact

The foundation of a conflict-ready strategy is the institutionalisation of decision-making. In the context of complex regulatory environments, memory is a compliance asset. A decision log is not merely an audit trail; it is a structured record of the rationale behind specific interpretations of law, policy applications, and technical implementations. It serves as the organisation’s collective memory, ensuring that decisions made under pressure are not lost to time and can be defended later to regulators, auditors, or courts.

Structuring the Log for Regulatory Scrutiny

A robust decision log must be more than a simple chronological record; it needs to be structured to withstand regulatory scrutiny. Each entry should capture several key elements, modelled in the code sketch that follows this list:

  • The Triggering Event: What specific situation prompted the decision? (e.g., “A data subject requested erasure of data used to train a fraud detection model.”)
  • The Conflicting Requirements: Which specific articles of which regulations are in tension? (e.g., “GDPR Art. 17 (Right to Erasure) vs. AI Act Art. 72 (Post-Market Monitoring) and Art. 10 (Data and Data Governance).”)
  • The Stakeholders: Who was involved in the decision? (e.g., “Data Protection Officer, Head of AI Governance, Legal Counsel, Lead Data Scientist.”)
  • The Analysis: This is the most critical part. It must document the legal and technical reasoning. Did the team consult the EDPB guidelines? Did they review the AI Act recitals? What technical measures were considered to mitigate the conflict (e.g., anonymisation, synthetic data generation)?
  • The Decision and Justification: The final resolution and the core justification for it. (e.g., “Decision: Data will be pseudonymised and retained for audit purposes. Justification: The residual risk to the data subject is minimal, and retention is necessary to comply with the AI Act’s obligations for high-risk systems, which constitutes a legal obligation under GDPR Art. 6(1)(c).”)
  • The Timeline: When was the decision made, and when is it scheduled for review?
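
Captured consistently, these fields lend themselves to a machine-readable record rather than free text. The following Python sketch is illustrative only; the ConflictDecision type and its field names are assumptions, not a standard schema:

    from dataclasses import dataclass, asdict
    from datetime import date
    import json

    @dataclass
    class ConflictDecision:
        """One decision log entry, mirroring the fields listed above."""
        triggering_event: str                # what prompted the decision
        conflicting_requirements: list[str]  # e.g. ["GDPR Art. 17", "AI Act Art. 72"]
        stakeholders: list[str]              # roles involved, not just names
        analysis: str                        # legal and technical reasoning
        decision: str                        # the resolution adopted
        justification: str                   # core legal basis for the resolution
        decided_on: date
        review_due: date                     # every entry gets a scheduled review

        def to_json(self) -> str:
            """Serialise for an append-only, auditable store."""
            record = asdict(self)
            record["decided_on"] = self.decided_on.isoformat()
            record["review_due"] = self.review_due.isoformat()
            return json.dumps(record, indent=2)

    entry = ConflictDecision(
        triggering_event="Erasure request for data used to train a fraud model",
        conflicting_requirements=["GDPR Art. 17", "AI Act Art. 72", "AI Act Art. 10"],
        stakeholders=["DPO", "Head of AI Governance", "Legal Counsel", "Lead Data Scientist"],
        analysis="EDPB guidance reviewed; pseudonymisation assessed as sufficient mitigation.",
        decision="Pseudonymise and retain for audit purposes.",
        justification="Retention is a legal obligation under GDPR Art. 6(1)(c).",
        decided_on=date(2025, 3, 1),
        review_due=date(2026, 3, 1),
    )
    print(entry.to_json())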

This structured approach transforms a reactive conversation into a documented, defensible process. When a regulator from a national Data Protection Authority (DPA) or the European Data Protection Board (EDPB) asks why a certain action was taken, the organisation can produce a log that demonstrates a thoughtful, multi-stakeholder process. This is far more effective than relying on the recollections of individual employees.

Linking Decisions to Technical Implementation

The decision log must be linked directly to the technical architecture. A decision to retain data for AI Act compliance must be reflected in the data retention policies of the database administrators. A decision to implement a specific privacy-enhancing technique must be documented in the system’s design specifications. This requires a tight integration between the legal/compliance function and the engineering teams. The decision log acts as the bridge, translating legal reasoning into technical requirements. Without this link, the log is just a record of good intentions, not a tool for operational control.
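
One lightweight way to make that bridge concrete is to have technical configurations cite the log entry that authorises them. A hypothetical retention rule, with invented identifiers:

    # Hypothetical retention rule that cites the decision log entry authorising it.
    RETENTION_RULES = [
        {
            "dataset": "fraud_training_v3",
            "retain_until": "2026-03-01",
            "basis": "AI Act Art. 72 post-market monitoring",
            "decision_log_id": "DL-2025-014",  # the bridge back to the legal reasoning
        },
    ]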

Component 2: Automated and Manual Legal Review Triggers

Waiting for a quarterly compliance review is insufficient in a fast-moving environment. A conflict-ready strategy relies on a system of triggers that bring the right people into the room at the right time. These triggers can be automated through technology or defined by manual processes. The goal is to interrupt standard workflows when a potential conflict is detected, forcing a conscious decision rather than allowing a default path to be taken.

Identifying High-Risk Scenarios

Triggers should be designed around the organisation’s specific risk profile. Common triggers include the following (a routing sketch follows the list):

  • Model Retraining: When an AI model is scheduled for retraining with new data, this should trigger a review. Does the new data have a valid legal basis under the GDPR? Does the retraining alter the model’s risk profile under the AI Act, potentially requiring a new conformity assessment?
  • Data Subject Requests (DSARs): A request for access or erasure involving a system classified as high-risk under the AI Act should automatically be flagged for legal review. The standard operational response to a DSAR may not be compliant when AI Act obligations are also in play.
  • System Updates or Modifications: Any significant change to a system’s architecture, logic, or purpose is a potential trigger. Under the AI Act, a substantial modification to a high-risk system can require a fresh conformity assessment. This process must be cross-referenced with change management procedures under DORA and with data protection impact assessments (DPIAs).
  • Incident Detection: The discovery of a bias, error, or security breach in an AI system triggers multiple, potentially conflicting obligations. DORA requires incident reporting to financial authorities. The GDPR requires notification to the DPA within 72 hours of becoming aware of a personal data breach, unless it is unlikely to result in a risk to individuals’ rights and freedoms (Art. 33). The AI Act requires reporting of serious incidents to the market surveillance authority (Art. 73). The timing, content, and recipients of these reports may differ. A trigger ensures a coordinated response.
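
Such triggers can be expressed as simple predicates over workflow events. A minimal routing sketch, assuming a hypothetical Event record and an extensible rule list (all names are illustrative):

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Event:
        """A workflow event to be screened for legal review; fields are illustrative."""
        kind: str               # e.g. "dsar", "model_retraining", "incident"
        system_risk: str        # AI Act risk class of the affected system
        regulated_sector: bool  # e.g. in scope of DORA

    # Each rule returns the review channel to open, or None if it does not fire.
    TRIGGER_RULES: list[Callable[[Event], Optional[str]]] = [
        lambda e: "legal_review" if e.kind == "dsar" and e.system_risk == "high" else None,
        lambda e: ("conformity_reassessment"
                   if e.kind == "model_retraining" and e.system_risk == "high" else None),
        lambda e: "coordinated_incident_response" if e.kind == "incident" else None,
    ]

    def reviews_for(event: Event) -> list[str]:
        """Collect every review channel the event triggers."""
        return [ch for rule in TRIGGER_RULES if (ch := rule(event)) is not None]

    print(reviews_for(Event(kind="dsar", system_risk="high", regulated_sector=True)))
    # -> ['legal_review']

Because the rules live in a list, new triggers can be added without touching existing workflow code.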

Technical Implementation of Triggers

Automation can enforce these triggers. For example, a data science platform can be configured to prevent a model from being promoted to production unless a compliance checklist is completed. An API gateway can flag requests that involve data from a subject who has an active erasure request. These technical controls ensure that the review process is not dependent on human memory or diligence. They embed compliance into the software development lifecycle (SDLC) and operational pipelines.
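
As a concrete illustration of such a gate, a promotion step can refuse to proceed until every sign-off is recorded. The check names below are assumptions, not a real platform’s API:

    # Hypothetical compliance checklist a model must clear before promotion.
    REQUIRED_CHECKS = {
        "legal_basis_confirmed",    # GDPR lawful basis for the training data
        "dpia_current",             # DPIA reviewed for this model version
        "conformity_docs_updated",  # AI Act technical documentation refreshed
    }

    def promote_model(model_id: str, completed_checks: set[str]) -> None:
        """Block promotion while any compliance check is outstanding."""
        missing = REQUIRED_CHECKS - completed_checks
        if missing:
            raise PermissionError(
                f"Promotion of {model_id} blocked; outstanding checks: {sorted(missing)}"
            )
        print(f"{model_id} promoted to production.")

    promote_model("fraud-model-v7", {"legal_basis_confirmed", "dpia_current",
                                     "conformity_docs_updated"})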

Component 3: Modular Policies for Adaptive Governance

The traditional approach of a single, monolithic corporate policy document is brittle. It breaks under the pressure of conflicting regulations. A conflict-ready strategy adopts a modular approach to policy-making. This involves creating a library of core policy components that can be combined and reconfigured as needed to address specific regulatory contexts. This is akin to building with Lego bricks rather than carving from a single block of stone.

The Core and the Satellite Model

A modular policy architecture typically consists of a stable core and flexible satellites; a composition sketch in code follows the list below.

  • The Core: These are the organisation’s fundamental principles and values that are non-negotiable and apply across all jurisdictions and technologies. Examples include a commitment to ethical AI, a zero-tolerance policy for data misuse, and the principle of accountability. The core is stable and rarely changes.
  • The Satellites: These are specific procedures, technical standards, and operational guidelines that implement the core principles in response to specific regulatory requirements. For example, the core principle of “transparency” might be implemented by:
    • A satellite policy for GDPR: Focusing on privacy notices, consent mechanisms, and data subject rights.
    • A satellite policy for the AI Act: Focusing on user information obligations for high-risk systems, instructions for use, and technical documentation.
    • A satellite policy for the DSA: Focusing on transparency reports and terms of service for platform users.
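
In code, the core/satellite split is a composition problem: a stable set of principles plus regulation-keyed control sets that are merged per context. A minimal sketch with illustrative names:

    # Core principles: stable, jurisdiction-independent commitments.
    CORE = {"transparency", "accountability", "ethical_ai"}

    # Satellites: regulation-specific controls keyed by the core principle
    # they implement. Contents mirror the examples above and are illustrative.
    SATELLITES = {
        "gdpr": {"transparency": ["privacy notices", "consent mechanisms",
                                  "data subject rights"]},
        "ai_act": {"transparency": ["instructions for use", "technical documentation"]},
        "dsa": {"transparency": ["transparency reports", "platform terms of service"]},
    }

    def effective_policy(regulations: list[str]) -> dict[str, list[str]]:
        """Assemble the controls that apply in a given regulatory context."""
        policy: dict[str, list[str]] = {principle: [] for principle in CORE}
        for reg in regulations:
            for principle, controls in SATELLITES.get(reg, {}).items():
                policy[principle].extend(controls)
        return policy

    # A high-risk system processing personal data draws on two satellites at once:
    print(effective_policy(["gdpr", "ai_act"])["transparency"])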

When a new regulation appears, or when a conflict is identified, the organisation does not need to rewrite its entire governance framework. It simply needs to develop or update the relevant satellite policy, ensuring it aligns with the core. This makes the governance framework more agile and resilient.

Managing Conflicts within the Modular Framework

The modular approach also provides a clear structure for managing conflicts. When a conflict arises, it is identified as a “clash” between two satellite policies. The resolution process then focuses on determining which satellite takes precedence in that specific context, or how to create a new, temporary “bridge” policy that satisfies both. For instance, if a new data localisation law conflicts with a policy on cloud-based data processing, the organisation can develop a specific satellite policy for that jurisdiction (e.g., “Data Processing in Country X”) that details the required localisation measures, without altering the global cloud policy for other regions. This compartmentalisation contains the impact of regulatory conflicts.
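
This compartmentalisation can be modelled as an override chain: a jurisdiction-specific satellite shadows only the settings it changes, leaving the global defaults intact. The settings below are hypothetical:

    # Global default plus a jurisdiction-specific "bridge" satellite.
    CLOUD_POLICY = {
        "default":   {"storage": "eu_cloud_region", "processor": "global_provider"},
        "country_x": {"storage": "in_country_datacentre"},  # localisation measure only
    }

    def policy_for(jurisdiction: str) -> dict[str, str]:
        """Jurisdiction-specific settings shadow the global defaults."""
        return {**CLOUD_POLICY["default"], **CLOUD_POLICY.get(jurisdiction, {})}

    print(policy_for("country_x"))  # localised storage; processor setting unchanged
    print(policy_for("country_y"))  # no satellite defined, so the defaults apply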

Component 4: Escalation and Resolution Pathways

Even with the best logs, triggers, and policies, conflicts will occur that cannot be resolved at the operational level. A conflict-ready strategy must therefore include clearly defined escalation pathways. These pathways are not just about “telling the boss”; they are structured processes for escalating a problem to the appropriate level of expertise and authority.

The Triage and Escalation Matrix

An effective escalation pathway begins with a triage process. When a conflict is identified, the initial responders (e.g., a data scientist or a compliance officer) need a clear matrix to determine the severity and nature of the issue (a routing sketch follows the list). The matrix should ask:

  • Does this conflict involve a fundamental right (e.g., non-discrimination, privacy)?
  • Does this conflict involve a high-risk AI system?
  • Is there a strict legal deadline for action (e.g., a 72-hour breach notification)?
  • Could this issue result in a significant financial penalty or reputational damage?
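
A matrix of this kind reduces to a small routing function; the thresholds below are assumptions to be calibrated against the organisation’s own risk appetite:

    def triage(fundamental_right: bool, high_risk_system: bool,
               statutory_deadline: bool, major_penalty_risk: bool) -> str:
        """Route a conflict to a resolution channel; thresholds are illustrative."""
        if fundamental_right or major_penalty_risk:
            return "ai_governance_board"            # escalate immediately
        if high_risk_system or statutory_deadline:
            return "compliance_working_group_urgent"
        return "compliance_working_group"

    # A biased credit model touches non-discrimination, so it goes straight up:
    print(triage(fundamental_right=True, high_risk_system=True,
                 statutory_deadline=False, major_penalty_risk=True))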

Based on the answers, the issue is routed to the correct channel. Low-level conflicts might be resolved by a compliance working group. High-level conflicts involving fundamental rights or significant financial risk must be escalated immediately to a dedicated AI Governance Board or a cross-functional ethics committee. This board should include representatives from legal, technology, ethics, and business units, ensuring a holistic view of the problem.

The Role of the AI Governance Board

The AI Governance Board (or equivalent body) is the ultimate arbiter of unresolved conflicts. Its role is not just to make a decision, but to ensure that the decision is documented in the central decision log and that the outcome is used to update policies, triggers, or training materials. It is a learning mechanism. The Board’s decisions set precedents that guide the organisation in the future. For example, if the Board decides that the AI Act’s post-market monitoring requirements override a GDPR erasure request for a specific high-risk system, that decision becomes a guiding case study for handling similar situations. This creates a body of internal jurisprudence that strengthens the organisation’s compliance posture over time.

External Consultation

Escalation pathways should also include routes for external consultation. In highly ambiguous situations, it may be necessary to seek guidance from regulatory sandboxes, industry associations, or legal experts specialising in the relevant fields. Proactively engaging with a National Competent Authority (NCA) or a DPA through a regulatory sandbox can provide clarity on how to interpret conflicting obligations. This demonstrates a proactive and good-faith effort to comply, which is a mitigating factor in any subsequent enforcement action.

Operationalising the Strategy: A Practical Example

To illustrate how these components work together, consider a hypothetical scenario: A European healthcare provider uses a high-risk AI system to assist in diagnosing medical conditions from imaging data. The system is trained on a vast dataset of patient images.

The Conflict: A patient exercises their right to erasure under the GDPR, requesting that all their data, including historical medical images, be deleted. However, the AI system is classified as high-risk under the AI Act because it is a medical device regulated under the Medical Device Regulation (MDR). Its post-market surveillance plan requires continuous monitoring of its performance across a diverse dataset, and the specific patient’s images are critical for detecting rare conditions. The same data is also subject to record-keeping requirements under the MDR.

The Conflict-Ready Response:

  1. Trigger: The DSAR is received by the data protection office. The system flags it because the data is linked to a high-risk AI system. A legal review is automatically triggered.
  2. Decision Log Entry: A new entry is created. The trigger is noted: “GDPR Art. 17 request for data used in High-Risk AI System X.” The conflicting regulations are listed: GDPR Art. 17 vs. AI Act Art. 72 vs. MDR Art. 10.
  3. Analysis & Escalation: The compliance team, including the DPO and the lead for AI governance, analyses the case. They determine that a simple deletion would violate the AI Act and MDR. They cannot resolve this alone. The issue is escalated to the AI Governance Board.
  4. Board Deliberation (Modular Policy Application): The Board reviews the core principles (patient safety, data minimisation) and the relevant satellite policies (the GDPR DSAR procedure and the AI Act conformity maintenance procedure). They consult legal counsel and the system’s clinical safety officer. They note that the GDPR itself disapplies the right to erasure where processing remains necessary for compliance with a legal obligation (Art. 17(3)(b)) or for reasons of public interest in the area of public health (Art. 17(3)(c), read with Art. 9(2)(i)).
  5. Resolution & Documentation: The Board decides that the data cannot be deleted. Instead, it must be pseudonymised to the highest degree possible, and access must be strictly limited to the post-market surveillance team. This decision is formally communicated to the patient, explaining the legal basis for the retention under the AI Act and MDR, which are considered “legal obligations.” The full reasoning, including the balance of interests and the specific legal articles relied upon, is documented in the decision log.
  6. Policy Update: The DSAR satellite policy is updated to include a specific procedure for handling requests involving data used in high-risk AI systems, incorporating the resolution pathway and the required communication templates.

This example shows how the four components—log, trigger, modular policy, and escalation—work in concert to navigate a complex conflict, producing a defensible, well-documented, and compliant outcome.
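
The same flow can be expressed as a short orchestration sketch: the trigger suspends the default erasure path, the conflict is logged, and resolution is escalated. The registry and function names below are hypothetical:

    # Hypothetical registry of systems classified as high-risk under the AI Act.
    HIGH_RISK_SYSTEMS = {"diagnostic_imaging_ai"}
    DECISION_LOG: list[dict] = []  # stand-in for a durable, append-only audit store

    def handle_erasure_request(subject_id: str, linked_systems: set[str]) -> str:
        """Trigger, log, and escalate a DSAR touching a high-risk AI system."""
        if not linked_systems & HIGH_RISK_SYSTEMS:
            return "standard_erasure"  # no conflict; the default DSAR path applies
        DECISION_LOG.append({
            "trigger": f"GDPR Art. 17 request {subject_id} ({sorted(linked_systems)})",
            "conflicts": ["GDPR Art. 17", "AI Act Art. 72", "MDR Art. 10"],
            "status": "escalated_to_governance_board",
        })
        return "escalated_to_governance_board"  # Board outcome feeds back into policy

    print(handle_erasure_request("subj-042", {"diagnostic_imaging_ai"}))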

Conclusion: Building a Resilient Compliance Culture

The era of simple, siloed regulation is over. European organisations face a future where compliance is a continuous exercise in balancing competing legal and ethical demands. A conflict-ready strategy is not a one-time project but an ongoing commitment to building organisational resilience. It requires investment in technology, process, and, most importantly, people. By institutionalising decision-making through logs, automating vigilance with triggers, structuring governance with modular policies, and creating clear escalation pathways, organisations can move beyond a defensive, reactive posture. They can build a compliance capability that is not only robust enough to withstand regulatory scrutiny but is also a strategic asset that enables responsible innovation. The ultimate goal is to create an organisation that does not fear regulatory conflict but is equipped to handle it with confidence, clarity, and integrity.
