Approval Workflows for AI Use Cases
Organisations across Europe, from public sector bodies to private enterprises in biotech and robotics, are increasingly compelled to formalise how they develop and deploy artificial intelligence. The era of ad-hoc experimentation is closing, replaced by a regulatory environment that demands structure, evidence, and accountability. The European Union’s Artificial Intelligence Act (AI Act) serves as the centrepiece of this shift, establishing a horizontal framework that imposes obligations on providers and deployers of AI systems. However, the AI Act does not exist in a vacuum; it intersects with the GDPR, the Data Governance Act, the Digital Services Act, and national public procurement and administrative laws. To navigate this complex landscape, organisations need a robust internal approval workflow for new AI use cases. This is not merely a bureaucratic exercise; it is a risk management and compliance necessity. A well-designed workflow ensures that innovation can proceed safely, that regulatory obligations are identified early, and that decisions are documented for supervisory authorities.
This article outlines a comprehensive approval workflow tailored to the European regulatory context. It moves through the lifecycle of an AI use case—from initial intake to continuous monitoring—explaining the legal and technical checkpoints required at each stage. The approach is pragmatic, recognising that while the AI Act sets the rules, the implementation requires translating abstract principles into concrete operational procedures. We will examine how to screen for risk, how to conduct a conformity assessment, the role of documentation and data protection impact assessments, and how to manage post-market monitoring. Throughout, we distinguish between the obligations of AI providers (those who develop the system) and deployers (those who use it), as their internal workflows will differ in focus.
Phase 1: Intake and Initial Scoping
The approval workflow begins with a standardised intake process. This is the entry point for any proposal to use or develop AI, whether it is a new machine learning model for predictive maintenance in manufacturing, a generative AI tool for drafting communications in a public administration, or a diagnostic algorithm in a healthcare setting. The goal of this phase is to capture essential information without overwhelming the proposer, while flagging obvious high-risk or high-complexity projects immediately.
Defining the Use Case and Context
The intake form or ticketing system must require a clear description of the intended purpose. Under the AI Act, the intended purpose is a critical legal concept; it defines the scope of the provider’s obligations and the conditions under which the system is considered safe and compliant. Vague descriptions such as “improve efficiency” are insufficient. The intake must specify:
- The specific task the AI is performing (e.g., classifying CVs, monitoring traffic flow, interpreting medical images).
- The operational context: Is this a back-office tool, a front-facing customer service application, or a critical infrastructure component?
- The user profile: Who will operate the system? Are they trained experts or laypersons?
- The data sources: What data will be used for training, fine-tuning, and operation? Is it personal data?
Crucially, the intake must identify the role of the organisation. Is it developing the AI system to place it on the market or put it into service under its own name (a provider)? Or is it using an AI system developed by a third party (a deployer)? This distinction is fundamental. A provider must ensure the system has passed the applicable conformity assessment procedure before it is placed on the market or put into service. A deployer must use the system in accordance with the provider's instructions for use and manage the risks arising from its specific use context.
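To make this concrete, the sketch below shows one way such an intake record might be captured in the workflow tooling. It is a minimal illustration under stated assumptions: the field names, the Role enum, and the completeness check are choices an organisation might make, not anything prescribed by the AI Act.

```python
from dataclasses import dataclass, field
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"   # develops the system and places it on the market / puts it into service
    DEPLOYER = "deployer"   # uses a system developed by a third party

@dataclass
class AIUseCaseIntake:
    """Illustrative intake record for a proposed AI use case (field names are assumptions)."""
    title: str
    intended_purpose: str          # specific task, e.g. "classify incoming CVs against job profiles"
    operational_context: str       # back-office, customer-facing, critical infrastructure, ...
    user_profile: str              # trained experts vs. laypersons
    data_sources: list[str] = field(default_factory=list)
    processes_personal_data: bool = False
    role: Role = Role.DEPLOYER

    def is_complete(self) -> bool:
        # Reject vague purposes such as "improve efficiency" by requiring some minimal detail.
        return len(self.intended_purpose.split()) >= 5 and bool(self.operational_context)
```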
Initial Legal and Ethical Screening
At this early stage, a preliminary screening is necessary to identify “knock-out” criteria. The most significant is whether the proposed AI use case falls under the prohibited practices defined in Article 5 of the AI Act. These include:
- AI that deploys subliminal techniques to materially distort behaviour.
- AI that exploits vulnerabilities of a specific group.
- Biometric categorisation systems that infer sensitive attributes (e.g., race, political opinions, religious beliefs), subject to narrow exceptions such as the labelling or filtering of lawfully acquired biometric datasets in the law enforcement area.
- Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases.
- Emotion recognition in the workplace and educational institutions (with some exceptions for safety or medical purposes).
If the proposed use case falls into any of these categories, the workflow should trigger an immediate escalation to legal counsel and senior leadership, as the project is likely unimplementable in the EU. Beyond these prohibitions, the screening should also check for conflicts with fundamental rights, existing collective agreements (in the case of workplace AI), or specific national prohibitions. For instance, some Member States may have stricter rules on AI in public services or specific data-sharing restrictions.
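As an illustration of how this knock-out check could be wired into the intake tooling, the sketch below maps the Article 5 categories listed above onto a simple screening function. The flag names, the questionnaire format, and the escalation message are hypothetical.

```python
# Hypothetical knock-out screening against the Article 5 categories listed above.
PROHIBITED_FLAGS = {
    "subliminal_manipulation": "Subliminal techniques that materially distort behaviour",
    "exploits_vulnerabilities": "Exploitation of vulnerabilities of a specific group",
    "sensitive_biometric_categorisation": "Biometric categorisation inferring sensitive attributes",
    "untargeted_face_scraping": "Untargeted scraping of facial images from the internet or CCTV",
    "workplace_emotion_recognition": "Emotion recognition in the workplace or education (outside safety/medical exceptions)",
}

def screen_prohibited(answers: dict[str, bool]) -> list[str]:
    """Return the prohibited categories the proposer has flagged; an empty list means the case may proceed."""
    return [desc for key, desc in PROHIBITED_FLAGS.items() if answers.get(key, False)]

hits = screen_prohibited({"workplace_emotion_recognition": True})
if hits:
    # In a real workflow this would open an escalation ticket to legal counsel and senior leadership.
    print("ESCALATE - potential Article 5 prohibition:", hits)
```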
Phase 2: Risk Screening and Classification
Once the use case passes the initial screening, it enters a formal risk screening phase. The objective is to classify the AI system according to the risk-based approach of the AI Act. This classification dictates the entire subsequent regulatory pathway, including the required conformity assessments, documentation, and oversight.
Identifying High-Risk AI Systems
The AI Act defines high-risk AI systems in two ways: (1) AI systems that are products, or safety components of products, covered by the EU harmonisation legislation listed in Annex I and required to undergo third-party conformity assessment (e.g., medical devices, machinery, lifts, vehicles), and (2) AI systems listed in Annex III. The latter covers critical domains such as:
- Critical infrastructure (e.g., transport, energy).
- Educational and vocational training (e.g., grading exams).
- Employment and worker management (e.g., CV-sorting, performance evaluation).
- Access to essential private and public services (e.g., credit scoring, eligibility for public benefits).
- Law enforcement, migration, border control, and administration of justice.
- Biometric identification and categorisation.
The screening process must involve a cross-functional team, including legal, technical, and domain experts. They must ask: does this system perform any of the functions listed above? If yes, it is likely a high-risk AI system. However, there is a nuance. An AI system listed in Annex III is only high-risk if it is used in that specific context. For example, an AI tool used for spam filtering in email is not high-risk, but the same underlying technology used to screen job applications is. The screening outcome must be documented, including the rationale for the classification. Incorrectly classifying a high-risk system as low-risk is a significant compliance failure.
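The classification step can also be expressed as a simple routing rule in the workflow tooling, as in the sketch below. It is a deliberate simplification: the domain labels paraphrase Annex III rather than quoting it, and real classification always needs documented human judgement.

```python
# Deliberately simplified Annex III screening aid; the labels paraphrase the legal text.
ANNEX_III_DOMAINS = {
    "critical_infrastructure",
    "education_vocational_training",
    "employment_worker_management",
    "essential_private_public_services",
    "law_enforcement_migration_justice",
    "biometrics",
}

def classify_risk(domain: str, used_in_listed_context: bool) -> str:
    """Rough AI Act risk bucket for the screening record (a workflow aid, not legal advice)."""
    if domain in ANNEX_III_DOMAINS and used_in_listed_context:
        return "high-risk"
    return "limited-or-minimal-risk"

print(classify_risk("employment_worker_management", used_in_listed_context=True))   # CV screening -> high-risk
print(classify_risk("none_of_the_listed_domains", used_in_listed_context=False))    # spam filtering -> not high-risk
```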
Assessing Limited and Minimal Risk
AI systems that are not high-risk are subject to lighter obligations. Limited-risk AI systems, such as chatbots or AI-generated content (deepfakes, synthetic text/images), have transparency obligations. Users must be informed that they are interacting with an AI system, and AI-generated content must be clearly labelled as such. Minimal-risk AI (e.g., spam filters, video games) has no mandatory legal obligations under the AI Act, though general data protection and consumer protection laws still apply. The screening should confirm the classification and determine the applicable transparency duties.
The Intersection with Data Protection
Parallel to the AI Act classification, a data protection screening is mandatory under the GDPR. If the AI system processes personal data, a Data Protection Impact Assessment (DPIA) is required where the processing is likely to result in a high risk to the rights and freedoms of natural persons. High-risk AI systems that process personal data will almost certainly require a DPIA. The screening should identify the legal basis for processing (e.g., consent, legitimate interest, public task) and map the data flows. This is a critical step where national Data Protection Authorities (DPAs) have significant enforcement power; in countries such as France (CNIL) or Germany (the state DPAs), expectations for DPIAs are high.
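A minimal sketch of this parallel DPIA trigger check follows. The criteria are a rough simplification of Article 35 GDPR and the indicators published by DPAs; a real screening should follow the competent authority's own guidance.

```python
def dpia_required(processes_personal_data: bool,
                  high_risk_ai: bool,
                  systematic_monitoring: bool = False,
                  large_scale_special_categories: bool = False) -> bool:
    """Rough DPIA trigger check; a real screening should follow the competent DPA's published criteria."""
    if not processes_personal_data:
        return False
    # Any one of these indicators is treated here as enough to trigger a DPIA.
    return high_risk_ai or systematic_monitoring or large_scale_special_categories

print(dpia_required(processes_personal_data=True, high_risk_ai=True))  # True -> start the DPIA
```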
Phase 3: Detailed Assessment and Conformity
For AI systems classified as high-risk, the assessment phase is the most rigorous. This is where the technical development and legal compliance converge. The goal is to prepare the system for the conformity assessment procedure required by the AI Act.
Technical Documentation and Risk Management
The provider of a high-risk AI system must establish, implement, document, and maintain a risk management system. This is a continuous process, not a one-time check. It must cover the entire lifecycle of the system. Key elements include:
- Identification and analysis of known and foreseeable risks: What can go wrong? This includes risks to health, safety, fundamental rights, and the environment.
- Estimation of risks: How likely is the harm, and how severe?
- Evaluation of emerging risks: As the system learns or the context changes, new risks may arise.
- Adoption of risk management measures: These must be effective and proportionate. They can include technical solutions (e.g., human oversight, accuracy controls) or organisational measures (e.g., training, procedural safeguards).
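One way to make these elements operational is a structured risk register. The sketch below shows an illustrative register entry with a simple likelihood-times-severity score; the fields, scales, and acceptance threshold are assumptions, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One line of an illustrative risk register for a high-risk AI system."""
    description: str            # e.g. "model under-performs for speakers of regional dialects"
    harm_category: str          # health, safety, fundamental rights, environment
    likelihood: int             # 1 (rare) .. 5 (almost certain)
    severity: int               # 1 (negligible) .. 5 (critical)
    mitigation: str             # technical or organisational measure
    residual_accepted: bool = False

    @property
    def score(self) -> int:
        # Simple likelihood x severity scoring; organisations may use their own matrix.
        return self.likelihood * self.severity

entry = RiskEntry(
    description="model under-performs for speakers of regional dialects",
    harm_category="fundamental rights",
    likelihood=3, severity=4,
    mitigation="human review of negative decisions; quarterly performance audit by dialect group",
)
print(entry.score)  # 12 -> above an illustrative acceptance threshold, so mitigation is mandatory
```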
The technical documentation must be extensive. It is the evidence of compliance. It must include:
- A general description of the AI system.
- Elements of the AI system and its development process: algorithms, data sets, training methodologies, testing protocols.
- Monitoring, functioning, and control of the AI system.
- Harmonised standards and common specifications applied.
- Detailed records of the risk management measures.
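Because the documentation package is the evidence of compliance, it helps to verify its completeness automatically before the conformity step. The sketch below assumes a folder of documentation files whose names paraphrase the list above; the Annex IV requirements are more granular in practice.

```python
from pathlib import Path

# Paraphrased section headings from the list above; real technical documentation is more granular.
REQUIRED_DOC_SECTIONS = [
    "general_description.md",
    "development_process_and_data.md",
    "monitoring_and_control.md",
    "standards_and_specifications.md",
    "risk_management_records.md",
]

def missing_documentation(doc_dir: str) -> list[str]:
    """Return the required sections not yet present in the documentation folder."""
    folder = Path(doc_dir)
    return [name for name in REQUIRED_DOC_SECTIONS if not (folder / name).is_file()]

print(missing_documentation("./technical_documentation"))  # e.g. ['standards_and_specifications.md']
```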
For organisations using a third-party high-risk AI system (deployers), the assessment phase focuses on different aspects. They must verify that the provider has supplied the required documentation (including the EU Declaration of Conformity and instructions for use). They must assess whether the system is suitable for their specific purpose and identify any additional risks arising from their operational context. They must also ensure they have the human and technical capacity to implement the required human oversight.
Data Governance and Bias Mitigation
The quality of the data used to train, validate, and test high-risk AI systems is a central focus of the assessment. The AI Act requires that data sets be relevant, sufficiently representative and, to the best extent possible, free of errors and complete in view of the intended purpose. They must also be examined for possible biases, especially those that could lead to discrimination. This is a significant technical and legal challenge.
The assessment must scrutinise the data governance practices. This includes:
- Data collection processes: Were the data collected lawfully (respecting GDPR)?
- Data labelling: Is the labelling process reliable and free from human bias?
- Pre-processing and feature engineering: Do these steps introduce or amplify bias?
- Bias detection and mitigation techniques: What metrics are used to measure fairness? What steps are taken to mitigate identified biases?
In practice, this means conducting rigorous statistical analysis of datasets to ensure they represent different demographic groups where relevant. For example, a facial recognition system trained primarily on one ethnicity will fail the representativeness requirement. The assessment must document these checks and the results. This is an area where technical evidence is required to satisfy legal obligations.
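A common starting point for such checks is comparing outcomes across groups. The sketch below computes per-group selection rates and a disparate-impact style ratio in plain Python; the metric choice and any acceptance threshold are illustrative and should be set by the fairness analysis for the specific use case.

```python
from collections import Counter

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group_label, positive_outcome) pairs, e.g. ("group_a", True)."""
    totals, positives = Counter(), Counter()
    for group, positive in decisions:
        totals[group] += 1
        positives[group] += int(positive)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Ratio of the lowest to the highest selection rate; values well below 1 flag potential bias."""
    return min(rates.values()) / max(rates.values())

rates = selection_rates([("a", True), ("a", True), ("a", False), ("b", True), ("b", False), ("b", False)])
print(rates, disparate_impact_ratio(rates))  # roughly {'a': 0.67, 'b': 0.33} and a ratio of 0.5
```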
Human Oversight and Explainability
High-risk AI systems must be designed to enable effective human oversight. This is not just a recommendation; it is a legal requirement aimed at preventing or minimising risks to health, safety, or fundamental rights. The assessment must evaluate:
- Who has oversight? Is the overseer trained and competent?
- Can the overseer understand the system’s capacities and limitations? This links to the requirement for interpretability.
- Can the overseer interpret the results and override the system? The system must not be a “black box” if a human is to be accountable for the final decision.
For high-risk AI used in decision-making affecting individuals (e.g., credit scoring, recruitment), the system must be designed to be interpretable. The provider must provide “instructions for use” that explain the system’s logic and capabilities to the deployer. The assessment must verify that these instructions are clear and that the system’s output (e.g., a risk score, a classification) is accompanied by information that allows the human overseer to understand the factors that led to the decision. This is crucial for accountability and for individuals exercising their rights under GDPR (e.g., the right to an explanation of a decision based on automated processing).
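As a minimal illustration, the sketch below pairs a score with its main contributing factors, assuming a linear scoring model in which each contribution is simply weight times feature value. Real systems will often need model-specific explanation tooling, so this sketches the shape of the output shown to the overseer, not a recommended technique.

```python
def explain_linear_score(weights: dict[str, float], features: dict[str, float], top_n: int = 3):
    """Return the score plus the top contributing factors, for display to the human overseer."""
    contributions = {name: weights.get(name, 0.0) * value for name, value in features.items()}
    score = sum(contributions.values())
    top = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_n]
    return score, top

score, top_factors = explain_linear_score(
    weights={"income": 0.4, "existing_debt": -0.6, "payment_history": 0.5},   # illustrative weights
    features={"income": 1.2, "existing_debt": 0.8, "payment_history": 0.9},   # normalised applicant features
)
print(f"risk score {score:.2f}; main factors: {top_factors}")
```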
Conformity Assessment and CE Marking
Once the provider has completed the technical documentation, risk management, and data governance checks, they must perform the conformity assessment. For most high-risk AI systems, this is based on an internal control procedure (the provider self-assesses compliance). However, for biometric systems under Annex III where harmonised standards have not been applied, or have been applied only in part, the assessment must involve a third-party Notified Body; AI systems covered by sectoral product legislation follow the conformity procedure of that legislation, which typically involves a Notified Body. This is a significant difference. The workflow must identify whether a Notified Body is required and, if so, initiate the engagement early, as this process can be lengthy.
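In the workflow tooling, this routing decision can be captured as a simple rule, as in the sketch below. It reflects a simplified reading of the conformity provisions (internal control by default, Notified Body involvement for certain biometric systems) and is a workflow aid, not legal advice.

```python
def conformity_route(annex_iii_category: str,
                     harmonised_standards_fully_applied: bool) -> str:
    """Simplified routing of the conformity assessment step (a workflow aid, not legal advice)."""
    # In this simplified reading, third-party involvement applies to certain biometric systems,
    # and only where harmonised standards are not, or are only partly, applied.
    if annex_iii_category == "biometrics" and not harmonised_standards_fully_applied:
        return "notified_body"
    return "internal_control"

print(conformity_route("employment_worker_management", harmonised_standards_fully_applied=True))  # internal_control
print(conformity_route("biometrics", harmonised_standards_fully_applied=False))                   # notified_body
```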
The conformity assessment procedure culminates in the issuance of an EU Declaration of Conformity and the affixing of the CE marking to the AI system. This is the legal act by which the provider declares that the system complies with the AI Act. For deployers, the presence of a valid EU Declaration of Conformity is a key checkpoint in their approval workflow. They should not deploy a high-risk AI system without it, unless it is a legacy system already in the supply chain before the relevant deadlines.
Phase 4: Formal Approval and Documentation
With the conformity assessment complete (or the necessary evidence gathered for deployers), the use case moves to a formal approval gate. This is a governance decision, not just a technical one. The objective is to ensure that all stakeholders accept the residual risks and that the deployment is formally authorised.
The Approval Committee
For high-risk AI, approval should be granted by a formal committee or a designated senior responsible owner. This body should include representatives from:
- Legal and Compliance (to verify regulatory adherence).
- IT and Data Science (to confirm technical readiness).
- Business Operations (to confirm operational readiness and value).
- Data Protection (to sign off on the DPIA).
- Where relevant, Ethics, HR, or Works Council representatives.
The committee reviews the full assessment package: the risk classification, the technical documentation, the DPIA, the bias analysis results, and the conformity evidence. They are not expected to be technical experts in AI algorithms, but they must be satisfied that the experts have done their job and that the documented evidence supports the decision to deploy.
The Decision Record
The outcome of the approval gate must be a formal, written record. This document is vital for demonstrating accountability to regulators and auditors. It should include:
- A clear description of the approved AI use case.
- The date of approval and the names of the approvers.
- A summary of the key risks identified and the mitigation measures put in place.
- Confirmation of the AI Act risk classification and, if high-risk, the conformity assessment method used (internal or Notified Body).
- Confirmation that the DPIA has been completed and its recommendations implemented.
- Any specific conditions or limitations on the deployment (e.g., “approved for use only in Region X,” or “must be used with a human in the loop for all decisions”).
- References to the location of the full documentation.
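A sketch of what such a decision record could look like when serialised for the audit trail is shown below; the field names mirror the list above, while the example values and the storage reference are placeholders.

```python
import json
from datetime import date

# Illustrative decision record; values and the documentation reference are placeholders.
decision_record = {
    "use_case": "CV pre-screening for engineering roles",
    "approval_date": date.today().isoformat(),
    "approvers": ["Head of Legal", "CTO", "DPO"],
    "risk_classification": "high-risk (Annex III - employment)",
    "conformity_method": "internal control",
    "dpia_completed": True,
    "key_risks_and_mitigations": [
        {"risk": "bias against under-represented groups", "mitigation": "quarterly disparity audits"},
    ],
    "conditions": ["human reviewer confirms every rejection"],
    "documentation_location": "dms://ai-governance/use-case-042",
}

with open("decision_record_042.json", "w", encoding="utf-8") as f:
    json.dump(decision_record, f, indent=2)
```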
For deployers, the approval record should also include a summary of the due diligence performed on the provider (e.g., review of the EU Declaration of Conformity, instructions for use, and technical support capabilities). Deployers are accountable for the use of the system, so their approval process must verify the provider’s compliance.
Transparency Obligations
Before the AI system goes live, the approval process must ensure that all required transparency measures are in place. This is not just about internal documentation; it concerns external communication. For high-risk AI systems used to make or support decisions about individuals, the deployer must inform the affected individuals that an AI system is being used in relation to them (save for narrow exceptions, for example in certain law enforcement contexts). Individuals must likewise be informed when emotion recognition or biometric categorisation systems are used. AI-generated content (deepfakes and the like) must be labelled as such. The workflow must confirm that the user interfaces and communication materials have been updated to meet these obligations.
Phase 5: Post-Market Monitoring and Continuous Review
Approval is not the end of the workflow. The AI Act introduces a strict obligation for post-market monitoring. This phase ensures that the AI system remains safe, effective, and compliant throughout its lifecycle.
The Post-Market Monitoring Plan
Providers of high-risk AI systems must establish and implement a post-market monitoring plan. This plan should be based on a risk-based approach and must allow the provider to collect and analyse data on the performance of the AI system in the real world. The plan should specify:
- What data will be collected (e.g., accuracy metrics, error rates, user feedback, incident reports).
- How and when data will be collected.
- The analysis methods to be used.
- The reporting channels for users to report issues.
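The monitoring plan ultimately has to land in operational tooling. The sketch below illustrates one possible shape for a monitoring sample and a triage hook that escalates poor performance or user reports; the metrics, threshold, and logging channel are assumptions to be replaced by the plan's own definitions.

```python
import logging
from dataclasses import dataclass
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("post_market_monitoring")

@dataclass
class MonitoringSample:
    timestamp: datetime
    accuracy: float          # rolling accuracy against labelled feedback
    error_rate: float
    user_reports: int        # issues raised through the feedback channel

def triage(sample: MonitoringSample, accuracy_floor: float = 0.90) -> None:
    """Flag samples that should be escalated per the monitoring plan (threshold is illustrative)."""
    if sample.accuracy < accuracy_floor or sample.user_reports > 0:
        log.warning("Escalate to provider: %s", sample)   # serious incidents also go to the authority
    else:
        log.info("Within expected performance: accuracy=%.2f", sample.accuracy)

triage(MonitoringSample(datetime.now(timezone.utc), accuracy=0.87, error_rate=0.13, user_reports=2))
```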
Deployers have a crucial role here. They must monitor the operation of the high-risk AI system and report any serious incidents or malfunctions to the provider. They must also report to the relevant national authorities if the incident had implications for fundamental rights or health and safety. The approval workflow must therefore establish clear internal procedures for users to report issues, and for the organisation to triage these reports and communicate with the provider or authorities as required.
