When AI Is a Medical Device: MDR Concepts Explained
Artificial intelligence systems that analyse medical images, predict disease progression, or recommend therapeutic interventions are increasingly embedded in clinical pathways across Europe. For many developers and healthcare providers, the pivotal question is not whether the technology is innovative, but whether it falls within the scope of the Medical Device Regulation (MDR) and, if so, how to meet compliance expectations in practice. This is a legal determination driven by the concept of intended purpose, and it has immediate consequences for classification, conformity assessment, clinical evidence, and post-market surveillance. Understanding the boundaries between regulated medical devices, wellness products, and clinical decision support tools used within institutional governance is essential for responsible deployment.
The European regulatory framework for medical devices is anchored in Regulation (EU) 2017/745 (the MDR), which became fully applicable on 26 May 2021. The MDR defines a medical device in Article 2(1) and sets out the obligations for manufacturers, importers, and distributors. Software is named explicitly in that definition, so AI intended for a medical purpose can qualify as a device in its own right. The European Commission’s Medical Device Coordination Group (MDCG) has published guidance on how these rules apply to software, including standalone AI, notably MDCG 2019-11 on the qualification and classification of software under the MDR and IVDR, and subsequent updates. While the term “Software as a Medical Device” (SaMD) is widely used internationally, EU guidance speaks of “medical device software” (MDSW); the substance is aligned, but the terminology is distinct.
When AI becomes a medical device: the role of intended purpose
The single most important criterion for bringing AI under the MDR is the intended medical purpose. Article 2(1) defines a medical device as any instrument, apparatus, appliance, software, implant, reagent, material, or other article intended by the manufacturer to be used, alone or in combination, for human beings for one or more specific medical purposes: diagnosis, prevention, monitoring, prediction, prognosis, treatment, or alleviation of disease; diagnosis, monitoring, treatment, alleviation of, or compensation for an injury or disability; investigation, replacement or modification of the anatomy or of a physiological or pathological process or state; or providing information by means of in vitro examination of specimens derived from the human body. The device must not achieve its principal intended action by pharmacological, immunological, or metabolic means, although it may be assisted in its function by such means.
For AI, the intended purpose is typically expressed through claims in labelling, instructions for use, marketing materials, and the technical documentation. If an algorithm is marketed to clinicians as a tool to detect lung nodules in CT scans to support early diagnosis of lung cancer, it is a medical device. If the same algorithm is repackaged as a research tool for radiology departments to measure image quality, it may fall outside the MDR. The manufacturer’s claims matter, but so does the reasonably foreseeable use. The MDR requires consideration of misuse that is reasonably foreseeable; if a manufacturer knows that clinicians are likely to use the software for diagnostic decisions even if the labelling says “for research only,” the device may be considered to have a medical purpose in practice.
There is a boundary between clinical decision support and administrative or operational support. An AI that triages patients based on acuity scores to optimise emergency department flow is not necessarily a medical device if its output is purely operational. However, if the triage score is used to determine diagnostic pathways or treatment prioritisation with clinical implications, it is likely to be a device. The same applies to AI that analyses vital signs from wearables to predict deterioration; if the output is intended to inform clinical decisions, it is a device. Conversely, wellness apps that provide general lifestyle advice or fitness metrics without disease-specific claims typically fall outside the MDR, although they may be subject to other EU rules such as the General Data Protection Regulation (GDPR) or the AI Act.
Software as a device under the MDR
Article 2(1) of the MDR explicitly includes software in the definition of a medical device. Annex I, Chapter II, Section 17.1 further requires that devices incorporating electronic programmable systems, and software that are devices in themselves, be designed to ensure repeatability, reliability, and performance in line with their intended use. Standalone software that performs a medical function is therefore a device in its own right. This includes AI systems that process medical images, analyse laboratory results, interpret physiological signals, or provide predictive analytics for clinical use.
Software that drives a hardware device (e.g., the firmware of an infusion pump) is also a device. In practice, many AI-enabled products combine hardware and software; the software may be the device itself, or it may be an accessory to a device. The MDR defines an accessory as an article which, while not being itself a device, is intended to be used with a device to enable it to be used in accordance with its intended purpose. An AI model that enhances the performance of a diagnostic imaging device could be considered an accessory if it is marketed to support that device’s function. The regulatory obligations flow from that classification.
Information from in vitro examination
AI systems that analyse data derived from the human body in order to provide diagnostic information fall within the regulatory definition of a device. This includes algorithms that analyse histopathology slides, genetic sequencing data, or biomarker measurements; where the information is obtained from the in vitro examination of specimens, the applicable framework is usually the IVDR rather than the MDR (see the MDR/IVDR distinction below). If the same software is used purely for research and development by a pharmaceutical company to identify drug targets, and is not intended to inform individual patient diagnosis, it may not be a device at all. Again, the intended purpose and labelling are decisive.
Borderline products: wellness, research, and general-purpose IT
Many AI systems are marketed as “general purpose” or “decision support” tools without explicit medical claims. The MDCG has emphasised that the intended purpose determines whether the product is a device. A spreadsheet or statistical package used by researchers to analyse clinical data is not a medical device if it is not intended for medical purposes. However, if the same software is configured and marketed to clinicians to diagnose a condition, it becomes a device. The same logic applies to AI platforms that can be configured for multiple use cases; the manufacturer must define the intended purpose clearly and ensure that the device’s design and risk management align with that purpose.
It is also important to distinguish between medical devices and in vitro diagnostic (IVD) medical devices. The MDR covers non-IVD devices; IVDs are regulated under Regulation (EU) 2017/746 (IVDR). AI that analyses laboratory results for diagnostic purposes may be an IVD device under IVDR rather than a medical device under MDR. The classification and conformity assessment pathways differ. For example, an algorithm that interprets blood test results to diagnose diabetes is likely an IVD device, while an algorithm that analyses ECG signals to detect arrhythmia is a medical device under MDR.
Classification logic for AI under the MDR
The MDR sets out classification rules in Annex VIII. Classification determines the conformity assessment pathway and the involvement of a notified body. Most AI-based devices are classified based on their intended purpose and the potential impact on patients. The rules consider the severity of the disease, the nature of the decision the device supports, and whether the device provides information that drives immediate clinical action.
Rule 11 in Annex VIII is particularly relevant for software. Software intended to provide information used to take decisions with diagnostic or therapeutic purposes is classified as Class IIa, unless a wrong decision could cause a serious deterioration of a person’s state of health or a surgical intervention (Class IIb), or death or an irreversible deterioration of health (Class III). Software intended to monitor physiological processes is Class IIa, rising to Class IIb where it monitors vital physiological parameters whose variation could put the patient in immediate danger; all other software is Class I. In addition, under the implementing rules, software that drives a device or influences its use falls within the same class as that device. For AI, the classification therefore hinges on what decisions the output is intended to inform and how serious the consequences of an incorrect output could be.
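To make the decision logic concrete, the sketch below encodes a simplified reading of Rule 11 in Python. It is a planning aid only: the function, its inputs, and the impact categories are illustrative assumptions, and the legal text of Annex VIII, the MDCG examples, and the notified body’s view take precedence over any such shortcut.

```python
from enum import Enum

class DecisionImpact(Enum):
    """Worst foreseeable impact of a wrong decision informed by the software (assumed categories)."""
    NONE_OR_MINOR = 1
    SERIOUS_DETERIORATION_OR_SURGERY = 2
    DEATH_OR_IRREVERSIBLE_DETERIORATION = 3

def rule_11_class(informs_diagnosis_or_therapy: bool,
                  impact: DecisionImpact,
                  monitors_physiology: bool = False,
                  vital_parameters_immediate_danger: bool = False) -> str:
    """Simplified, illustrative reading of MDR Annex VIII Rule 11.

    Implementing rules (e.g. software driving a device takes that device's class)
    and the full Annex VIII text still apply and are not modelled here.
    """
    if informs_diagnosis_or_therapy:
        if impact is DecisionImpact.DEATH_OR_IRREVERSIBLE_DETERIORATION:
            return "Class III"
        if impact is DecisionImpact.SERIOUS_DETERIORATION_OR_SURGERY:
            return "Class IIb"
        return "Class IIa"
    if monitors_physiology:
        return "Class IIb" if vital_parameters_immediate_danger else "Class IIa"
    return "Class I"

# Illustrative call: diagnostic-support software where a missed finding could lead
# to serious deterioration or surgery.
print(rule_11_class(True, DecisionImpact.SERIOUS_DETERIORATION_OR_SURGERY))  # Class IIb
```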
As a practical example, an AI system that analyses chest X-rays to detect pneumonia and suggests treatment to a clinician is likely Class IIa or IIb, depending on the risk profile and the degree of automation. If the software automatically triggers treatment without human oversight, it may be Class IIb or even Class III. An AI that monitors vital signs to alert clinicians of early signs of sepsis is typically Class IIa if the alert is advisory. If the system automatically administers fluids or antimicrobials, the classification increases. The MDR requires manufacturers to apply the rules carefully and to document the rationale.
Classification is not static. A software update that changes the intended purpose or the risk profile may shift the classification. The MDR requires manufacturers to review classification when planning updates and to assess whether a new conformity assessment is needed. This is particularly relevant for AI systems that learn from new data; if the update changes the intended use or adds new diagnostic claims, the device may need reclassification and re-evaluation by a notified body.
Class I, IIa, IIb, and III: what changes in practice
Class I devices (other than those placed on the market sterile, with a measuring function, or as reusable surgical instruments) can be self-certified by the manufacturer, provided they meet the general safety and performance requirements. Most AI devices are not Class I because they typically provide diagnostic information or influence clinical decisions. Class IIa devices require notified body involvement, typically through certification of the quality management system (QMS) and assessment of technical documentation on a sampling basis. Class IIb and III devices attract more stringent notified body scrutiny, including closer review of the clinical evaluation and, for certain Class III devices, consultation with expert panels.
For AI, the notified body will examine the technical documentation, including software lifecycle processes, risk management, cybersecurity, and clinical evidence. The higher the class, the more robust the evidence must be. For Class IIb and III, manufacturers should expect detailed questions on algorithm validation, data representativeness, bias mitigation, and performance monitoring in the real world. The MDR’s expectations for clinical evaluation are not limited to initial approval; they extend to ongoing updates and post-market surveillance.
Software updates and change control
AI systems often evolve. The MDR distinguishes between minor changes and significant changes that affect the intended purpose or safety. Manufacturers must establish procedures to assess whether an update changes the classification or requires a new conformity assessment. For example, adding a new diagnostic indication is a significant change. Improving the algorithm’s accuracy without changing intended purpose may be a minor change if risk remains acceptable and the change is controlled. The MDCG guidance on significant change helps manufacturers plan updates; however, many AI updates may be considered significant due to the potential impact on performance and safety. Manufacturers should maintain a robust change control process and engage early with their notified body when uncertain.
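One way to keep such assessments auditable is to record every update against the same questions. The following is a minimal, hypothetical sketch of a change-control record; the field names and the conservative screening rule are assumptions, not a prescribed MDR format.

```python
from dataclasses import dataclass, field

@dataclass
class ChangeAssessment:
    """Illustrative change-control record for an update to an AI-enabled device."""
    change_id: str
    description: str
    changes_intended_purpose: bool       # e.g. new indication or target population
    changes_classification: bool         # re-run the Annex VIII rules if in doubt
    affects_safety_or_performance: bool  # e.g. retrained model, new data source
    risk_file_updated: bool
    verification_evidence: list[str] = field(default_factory=list)

    def is_significant(self) -> bool:
        """Conservative screen: treat any purpose, class, or safety impact as
        significant and escalate to regulatory review before release."""
        return (self.changes_intended_purpose
                or self.changes_classification
                or self.affects_safety_or_performance)

change = ChangeAssessment(
    change_id="CHG-2024-014",
    description="Retrained detection model on data from an additional scanner vendor",
    changes_intended_purpose=False,
    changes_classification=False,
    affects_safety_or_performance=True,
    risk_file_updated=True,
    verification_evidence=["VAL-REP-031"],
)
print(change.is_significant())  # True -> involve regulatory / notified body review
```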
Practical compliance expectations for AI manufacturers
Compliance under the MDR is a structured process anchored in the quality management system and technical documentation. For AI, there are specific areas that require attention beyond traditional device manufacturing.
Quality management system (QMS) and documentation
Manufacturers must implement a QMS, in practice usually built on ISO 13485. For software, this includes development and lifecycle processes consistent with IEC 62304. The MDR does not mandate specific standards, but conformity with harmonised standards provides a presumption of conformity. In practice, notified bodies expect manufacturers to follow a defined software development process, with clear stages for requirements, design, coding, testing, and release. For AI, this extends to data management, model training and validation, and procedures for handling updates.
Technical documentation must demonstrate that the device meets the general safety and performance requirements in Annex I. This includes device description, intended purpose, risk management file, product specifications, verification and validation evidence, labelling and instructions for use, and post-market surveillance plans. For AI, the documentation should also include a description of the data used to train and test the model, performance metrics, and measures taken to address bias and ensure robustness.
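As an illustration of what a structured data description might look like, the sketch below captures provenance, size, labelling method, and demographic composition for a training set. The fields and values are hypothetical and would need to reflect the manufacturer’s own data governance and documentation conventions.

```python
from dataclasses import dataclass

@dataclass
class DatasetDescription:
    """Illustrative summary of a training/test dataset for the technical documentation."""
    name: str
    provenance: str            # sites, devices, acquisition period
    n_patients: int
    n_samples: int
    label_source: str          # how the reference standard was established
    demographics: dict[str, dict[str, float]]  # attribute -> category -> proportion
    known_limitations: str

training_set = DatasetDescription(
    name="chest-xray-train-v3",
    provenance="4 EU hospitals, 3 scanner vendors, 2018-2023",
    n_patients=12450,
    n_samples=18720,
    label_source="Consensus read by two board-certified radiologists",
    demographics={"sex": {"female": 0.48, "male": 0.52},
                  "age_band": {"<40": 0.21, "40-65": 0.47, ">65": 0.32}},
    known_limitations="Paediatric cases and portable bedside images under-represented",
)
```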
Clinical evaluation and evidence
The MDR requires clinical evaluation as a continuous process. Manufacturers must plan and conduct clinical evaluation based on clinical data relevant to the device and its intended purpose. For AI, clinical data can include retrospective studies, prospective validation, real-world performance monitoring, and literature reviews. MDCG guidance on clinical evaluation, including MDCG 2020-1 on the clinical evaluation of medical device software and related documents, provides a framework for planning and documenting the evaluation. Notified bodies will look for a clear clinical development plan, a rationale for the chosen evidence generation strategy, and demonstration that the device’s performance is acceptable in the intended use context.
For AI, it is important to distinguish between technical performance (e.g., sensitivity, specificity, AUC) and clinical outcomes (e.g., impact on diagnostic accuracy, time to diagnosis, patient outcomes). While technical performance is necessary, the MDR expects manufacturers to link performance to the intended clinical purpose. For example, an AI that improves the detection of pulmonary embolism should provide evidence that it reduces missed diagnoses or improves workflow without increasing false positives that lead to unnecessary testing.
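For the technical-performance side, the snippet below shows how sensitivity, specificity, and AUC might be computed for a binary detection task, assuming scikit-learn is available. The data, threshold, and variable names are illustrative; figures like these are only the starting point for the clinical-benefit argument.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Illustrative hold-out results: ground-truth labels and model scores
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])
y_score = np.array([0.91, 0.12, 0.78, 0.66, 0.40, 0.08, 0.85, 0.55, 0.20, 0.73])

threshold = 0.5                       # operating point consistent with the intended purpose
y_pred = (y_score >= threshold).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)          # proportion of true cases detected
specificity = tn / (tn + fp)          # proportion of non-cases correctly cleared
auc = roc_auc_score(y_true, y_score)  # threshold-independent discrimination

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, AUC={auc:.2f}")
```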
Risk management
Risk management must comply with ISO 14971. For AI, risk analysis should consider data-related risks (e.g., data quality, representativeness, bias), model-related risks (e.g., overfitting, drift, brittleness), and operational risks (e.g., integration into clinical workflows, user errors, cybersecurity). The risk management file should include measures to mitigate these risks and evidence that residual risks are acceptable. Cybersecurity is particularly important for connected devices; manufacturers should follow recognised standards and guidance, including MDCG 2019-16 on cybersecurity for medical devices, and ensure secure software update mechanisms.
One practical challenge is managing the risk of algorithmic bias. While the MDR does not mention bias explicitly, the general safety and performance requirements oblige manufacturers to ensure that the device performs safely and as intended across the patient population for which it is indicated, so systematic under-performance in particular subgroups is treated as a safety problem. Manufacturers should document the demographic and clinical diversity of training and validation datasets, assess performance across subgroups, and implement monitoring to detect bias in real-world use. This is not only a regulatory expectation but also a patient safety imperative.
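A simple way to make subgroup performance visible is to stratify the same metrics by a demographic attribute, as in the hedged sketch below; the attribute, the data, and the acceptance margin are assumptions that would need to be justified in the device’s own risk analysis.

```python
import pandas as pd

# Illustrative per-case validation results with a subgroup attribute
results = pd.DataFrame({
    "sex":    ["F", "F", "M", "M", "F", "M", "F", "M"],
    "y_true": [1, 0, 1, 1, 0, 0, 1, 0],
    "y_pred": [1, 0, 1, 0, 0, 0, 1, 1],
})

def sensitivity(group: pd.DataFrame) -> float:
    """Proportion of true-positive cases detected within a group."""
    positives = group[group["y_true"] == 1]
    return float((positives["y_pred"] == 1).mean()) if len(positives) else float("nan")

per_group = results.groupby("sex")[["y_true", "y_pred"]].apply(sensitivity)
overall = sensitivity(results)

# Flag subgroups whose sensitivity falls more than 0.10 below the overall value
flagged = per_group[per_group < overall - 0.10]
print(per_group)
print("subgroups needing investigation:", list(flagged.index))
```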
Usability and human factors
Usability is a core safety requirement. For AI, usability includes how the algorithm’s outputs are presented to clinicians, the clarity of uncertainty, and the integration of alerts into workflows. The MDR expects manufacturers to conduct usability engineering consistent with IEC 62366-1. This includes identifying critical tasks, conducting formative and summative usability tests, and ensuring that users understand the device’s limitations. For AI, usability testing should include representative users and realistic clinical scenarios to assess how users interpret and act on the AI’s outputs.
Post-market surveillance and market surveillance
Post-market surveillance (PMS) is a continuous obligation. Manufacturers must collect and analyse data on device performance, adverse events, and user feedback. For AI, PMS should include monitoring of model performance in the field, detection of drift, and management of updates. The MDR requires a post-market clinical follow-up (PMCF) plan for many devices, which may include registries, real-world studies, or targeted data collection. The PMS data feed into periodic safety update reports (PSURs) for Class IIa, IIb, and III devices.
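As one possible field-monitoring approach, the sketch below compares sensitivity on a rolling window of adjudicated field cases against the baseline established at release and flags a drop beyond a predefined margin. The window size, margin, and class structure are assumptions that would belong in the PMS plan rather than a fixed recipe.

```python
from collections import deque

class PerformanceMonitor:
    """Illustrative rolling check of field sensitivity against the release baseline."""

    def __init__(self, baseline_sensitivity: float, margin: float = 0.05, window: int = 200):
        self.baseline = baseline_sensitivity
        self.margin = margin
        self.recent = deque(maxlen=window)   # 1 = detected true case, 0 = missed true case

    def record_confirmed_positive(self, detected: bool) -> None:
        """Record an adjudicated true-positive case from the field."""
        self.recent.append(1 if detected else 0)

    def drift_suspected(self) -> bool:
        if len(self.recent) < self.recent.maxlen:
            return False                      # not enough field data yet
        current = sum(self.recent) / len(self.recent)
        return current < self.baseline - self.margin

monitor = PerformanceMonitor(baseline_sensitivity=0.92)
# ... feed in adjudicated field cases as they are confirmed ...
if monitor.drift_suspected():
    print("Field sensitivity below baseline margin: open a PMS investigation / CAPA")
```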
Market surveillance is carried out by national authorities. They may request documentation, conduct audits, and take corrective actions if devices present risks. Manufacturers should be prepared to respond to regulatory queries and to demonstrate ongoing compliance. For AI, this includes providing evidence of how updates are managed and how performance is monitored.
Labelling, IFU, and transparency
Labelling must include the intended purpose, manufacturer details, the UDI, and necessary warnings and precautions. For AI, the instructions for use should clearly describe the device’s capabilities and limitations, the data it requires, the context of use, and the meaning of its outputs. They should also specify the level of human oversight expected and the actions users should take on the basis of the outputs. The MDR prohibits misleading claims; manufacturers must ensure that marketing materials align with the intended purpose and technical documentation.
EU-level regulations and national implementation
The MDR is an EU regulation and applies directly in all Member States. However, national implementation aspects remain relevant. Competent authorities (e.g., BfArM in Germany, ANSM in France, the Ministry of Health in Italy) oversee market surveillance and enforcement. Notified bodies are designated by national designating authorities and overseen under the MDR’s joint assessment framework. While the rules are harmonised, practical experience with notified bodies can vary: some have deeper expertise in software and AI than others, and their interpretation of classification rules may differ slightly. Manufacturers should treat notified body selection as part of their regulatory strategy.
Post-market surveillance plans and reports form part of the technical documentation. PSURs are required for Class IIa, IIb, and III devices, must be updated at defined intervals, and must be made available to the notified body and, on request, to competent authorities. Vigilance reporting (serious incidents and field safety corrective actions) follows harmonised rules but is handled through national competent authorities. Manufacturers should establish robust vigilance processes and ensure that their EU authorised representative is appropriately engaged.
It is also important to consider the relationship between the MDR and other EU frameworks. The AI Act (Regulation (EU) 2024/1689) applies in phases: most obligations take effect from August 2026, and the rules for high-risk AI systems embedded in products regulated under the MDR apply from August 2027. AI systems that are medical devices subject to third-party conformity assessment under the MDR are high-risk AI systems under the AI Act and will be subject to both regimes. The AI Act sets obligations for risk management, data governance, transparency, human oversight, and conformity assessment; for medical devices, its conformity assessment is intended to be integrated with the MDR procedures. Manufacturers should anticipate this convergence and ensure that their technical documentation addresses both sets of requirements. Similarly, GDPR obligations for data protection must be met throughout the data lifecycle, including training, validation, and real-world monitoring.
UK and other European contexts
After Brexit, the UK operates its own regime, overseen by the MHRA. In Great Britain, the Medical Devices Regulations 2002 (as amended) apply; devices are placed on the market under UKCA marking, although CE-marked devices continue to be accepted during transitional periods, and reforms have been proposed to align more closely with the MDR while introducing some national nuances. In Northern Ireland, the MDR continues to apply under the Windsor Framework. Switzerland, although not an EU member, has its own medical device legislation that closely mirrors the MDR. Manufacturers placing devices on these markets must engage with local requirements and appoint appropriate local representatives where needed.
AI-specific compliance considerations in practice
AI introduces challenges that are not always present in traditional devices. The MDR’s general safety and performance requirements address these implicitly, but manufacturers must make their approach explicit in technical documentation.
