Intended Use: The Switch That Changes Legal Obligations
In the complex ecosystem of European product regulation, few concepts carry as much weight, yet remain as susceptible to linguistic nuance, as “intended use.” For developers and manufacturers of AI-enabled products, robotics, and connected medical or industrial systems, the precise wording used to describe a product’s purpose is not a mere marketing exercise. It is a fundamental determinant of the product’s legal classification, the applicable regulatory framework, the required conformity assessment procedures, and ultimately, the scope of manufacturer liability. A shift in a single word in a product manual, marketing brochure, or technical specification can transform a product from a low-risk consumer tool into a high-risk medical device, or from a simple software utility into a regulated high-risk AI system under the EU AI Act. This article examines the mechanics of intended use, exploring how it acts as the switch that dictates legal obligations across European frameworks.
The Legal Anatomy of Intended Use
European regulatory frameworks are built upon the concepts of intended use and the reasonably foreseeable use of a product. The intended use is determined not merely by what the manufacturer explicitly states, but also by what can reasonably be inferred from the product’s design, packaging, and accompanying documentation. It is the bridge between the technical reality of the product and the legal reality of the market.
Under the General Product Safety Directive (GPSD) and its successor, the General Product Safety Regulation (GPSR), intended use is implicit in the requirement that products be safe when used normally. However, in highly regulated sectors, the definition becomes explicit and legally binding. The manufacturer’s declaration of intended use sets the boundaries of the conformity assessment. If a manufacturer claims their product is intended for “entertainment,” it may fall under general product safety rules. If they claim it is intended for “occupational safety,” it may trigger the Machinery Directive or specific PPE regulations. If they claim it is for “diagnostic purposes,” the Medical Device Regulation (MDR) applies.
Key Principle: The intended use is defined by the manufacturer’s objective statements in the technical documentation, labelling, and instructions for use. However, the manufacturer must also consider any foreseeable misuse that could reasonably be expected.
The European Court of Justice has consistently reinforced that the manufacturer’s intent, as expressed through labelling and instructions, is the primary determinant. However, this is not a shield against liability. If a product is objectively capable of performing a function that the manufacturer omits from the labelling, but which is obvious to the user, regulators may still classify it based on that capability. This is particularly relevant for AI systems, which often possess general-purpose capabilities that can be adapted for specific high-risk uses.
The AI Act: Intended Use as the Gatekeeper of Risk
The EU Artificial Intelligence Act (AI Act) introduces a rigorous risk-based approach where the concept of intended use is the primary gatekeeper. The classification of an AI system as high-risk is not automatic for all AI; it depends on the specific purpose for which the system is marketed and the product into which it is integrated.
The AI Act establishes two routes to high-risk status. Under Article 6(1), an AI system is high-risk if it is a safety component of (or is itself) a product covered by the Union harmonisation legislation listed in Annex I (such as the Machinery Regulation or the Medical Devices Regulation) and that product requires third-party conformity assessment. Under Article 6(2), an AI system is high-risk if it falls within one of the standalone use cases listed in Annex III, such as employment, critical infrastructure, or access to essential services. The critical factor is the intersection between the AI system’s function, the product it is integrated into, and the purpose for which it is marketed.
Marketing Wording vs. Technical Reality
Consider an AI-based camera system. If the manufacturer markets the system as an “AI-powered home security camera,” it likely falls under general consumer product legislation (low risk). However, if the same hardware and software are marketed as an “AI-based system for monitoring the safe operation of critical infrastructure,” it can fall within the high-risk category of the AI Act (Annex III, point 2 covers safety components in the management and operation of critical infrastructure), requiring a conformity assessment, risk management systems, and data governance protocols.
This distinction creates a significant compliance burden. Manufacturers must draft their marketing materials and technical documentation with legal precision. The AI Act explicitly defines the “provider” as the person or entity that develops an AI system with a view to placing it on the market under their own name or trademark. Therefore, the marketing claim defines the provider’s obligations.
General Purpose AI (GPAI) and the Downstream Risk
The AI Act introduces a specific category for General Purpose AI (GPAI) models. A GPAI model is defined by its capability to serve a wide variety of purposes. However, if a provider markets a GPAI model for a specific purpose that falls within Annex III (or within a product covered by the Annex I legislation), the provider must comply with the high-risk obligations for that specific use case.
For example, a large language model (LLM) might be marketed generally as a “text generation tool.” This is likely not high-risk. However, if the provider explicitly markets the same model as a “tool for screening job applicants” (which falls under the employment use case in Annex III, point 4), the provider must ensure the system meets the high-risk requirements, including bias testing and human oversight, even if the underlying model architecture remains unchanged.
The “Switch” Mechanism: The transition from a general-purpose tool to a high-risk system is triggered by the objective characteristics of the product’s marketing. This forces AI developers to implement a “regulatory by design” approach, where the intended use is mapped against the AI Act’s annexes during the earliest stages of development.
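As a rough illustration of what this mapping can look like in practice, the sketch below (hypothetical Python; the category names and keyword lists are invented and are not an official reading of the AI Act annexes) flags intended-use wording that may touch an Annex III use case and should therefore be routed to legal review:

```python
# Hypothetical sketch: flag intended-use wording that may touch an AI Act
# Annex III category. The category names and keyword lists are invented
# for illustration and are no substitute for a legal classification.

ANNEX_III_KEYWORDS = {
    "employment": ["recruitment", "job applicant", "screening cv", "promotion decision"],
    "essential_services": ["creditworthiness", "credit score", "insurance pricing"],
    "critical_infrastructure": ["critical infrastructure", "water supply", "electricity grid"],
    "education": ["exam scoring", "student admission"],
}


def flag_high_risk_wording(intended_use_statement: str) -> list[str]:
    """Return the categories whose keywords appear in the statement."""
    text = intended_use_statement.lower()
    return [
        category
        for category, keywords in ANNEX_III_KEYWORDS.items()
        if any(keyword in text for keyword in keywords)
    ]


if __name__ == "__main__":
    statement = "AI-based tool for screening CVs of job applicants"
    hits = flag_high_risk_wording(statement)
    if hits:
        print(f"Review with legal counsel -- possible Annex III categories: {hits}")
    else:
        print("No keywords matched; the classification still needs a legal review.")
```

A hit here is only a prompt for review; the actual classification must still be made against the text of the Act.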
Medical Devices: The MDR and IVDR Context
In the medical sector, the Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR) rely heavily on the intended purpose to determine the classification rules. The intended purpose is defined in the technical documentation and determines the device’s class (Class I, IIa, IIb, or III), which dictates the level of scrutiny by Notified Bodies.
For AI-enabled medical software (SaMD), the wording is critical. A software algorithm that analyzes X-ray images can be classified differently based on its intended use:
- Intended Use: “Image enhancement.” This is generally treated as low risk (Class I, or at most Class IIa). It aids the physician but does not itself provide diagnostic information.
- Intended Use: “Computer-aided detection (CADe) to highlight suspicious areas.” This is often Class IIa. It provides information to support the diagnostic process.
- Intended Use: “Computer-aided diagnosis (CADx) to provide a differential diagnosis.” This is likely Class IIb or III, as it directly drives clinical decision-making and the consequences of an erroneous output are correspondingly severe.
Changing the intended use post-market requires a new conformity assessment. A manufacturer cannot simply issue a software update to expand the indication without re-evaluating the risk classification. This is a common pitfall for agile software developers accustomed to iterative deployment cycles.
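One lightweight safeguard against this pitfall is to gate releases on the intended-use statement itself. The sketch below (hypothetical Python; the workflow and field names are assumptions, not an MDR requirement) blocks a release when the statement no longer matches the version covered by the last conformity assessment:

```python
# Hypothetical sketch of a release gate: refuse to ship a release if the
# intended-use statement has changed since the version covered by the last
# conformity assessment. The workflow is illustrative only.

import hashlib


def digest(text: str) -> str:
    """Stable fingerprint of an intended-use statement."""
    return hashlib.sha256(text.strip().encode("utf-8")).hexdigest()


def check_release(current_statement: str, approved_digest: str) -> None:
    """Raise if the statement no longer matches the assessed version."""
    if digest(current_statement) != approved_digest:
        raise RuntimeError(
            "Intended-use statement changed since the last conformity assessment: "
            "re-evaluate the risk classification before releasing."
        )
    print("Intended use unchanged; release proceeds under the existing assessment.")


if __name__ == "__main__":
    assessed = "Wellness device for tracking steps, sleep, and heart rate."
    approved_digest = digest(assessed)

    # A later release quietly expands the indication -- the gate should trip.
    new_statement = "Device for tracking heart rate and detecting atrial fibrillation."
    try:
        check_release(new_statement, approved_digest)
    except RuntimeError as err:
        print(f"Release blocked: {err}")
```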
Software as a Medical Device (SaMD) and AI
The MDR’s software classification rule (Rule 11) turns on the significance of the information the software provides for clinical decisions and the severity of the condition concerned. AI algorithms that process physiological data are subject to strict scrutiny. If an AI is intended to monitor blood glucose levels in a person with Type 1 diabetes, it is high-risk. If the same algorithm is used for “wellness” tracking of blood glucose in a non-diabetic population, it is not a medical device at all.
The distinction lies in the medical purpose. The AI Act and MDR are increasingly intertwined. An AI system that is a medical device subject to third-party conformity assessment is automatically considered high-risk under the AI Act (the MDR is Union harmonisation legislation listed in Annex I). Therefore, the intended use triggers a dual compliance burden: the specific requirements of the MDR and the horizontal requirements of the AI Act (data quality, transparency, human oversight).
Machinery and Robotics: The Machinery Regulation
The new Machinery Regulation (EU) 2023/1230, which replaces the Machinery Directive, places significant emphasis on intended use, particularly where safety components are concerned. For robotics and automated systems, the intended use helps determine whether a product is placed on the market as partly completed machinery or as a complete machine, and which safety functions must achieve a high level of reliability.
Consider an autonomous mobile robot (AMR) used in a warehouse. The manufacturer must define the operational environment and the tasks. If the intended use includes “interaction with human workers,” the machine must meet stringent safety requirements regarding speed, detection, and stopping distances. If the intended use is “strictly segregated zones,” the requirements differ.
Furthermore, the Machinery Regulation explicitly addresses software. If an AI system is intended to perform safety functions (e.g., an AI vision system that detects human presence to stop a robot), that software becomes a safety component. The manufacturer must demonstrate that the AI system is robust, predictable, and free from bias, which is notoriously difficult for deep learning systems. The intended use statement here must be precise: “Safety function: Obstacle detection” vs. “Assistive function: Path optimization.”
Collaborative Robots (Cobots)
For collaborative robots, the intended use is the basis for the “collaborative operation” mode. The manufacturer must specify the type of collaboration (e.g., safety-rated monitored stop, hand guiding, speed and separation monitoring, or power and force limiting). If the manufacturer markets the robot as capable of “unrestricted collaboration,” they assume liability for any injury resulting from the limitations of the safety sensors. If the marketing suggests the robot can work “side-by-side” with humans without safety cages, but the technical documentation restricts it to “separated workspaces,” the discrepancy creates a massive liability gap.
Product Liability and the Defective Product
The concept of intended use is central to the Directive on liability for defective products (Product Liability Directive – PLD). Under the PLD, a product is defective if it does not provide the safety which a person is entitled to expect, taking all circumstances into account, including the presentation of the product (marketing) and the use to which it could reasonably be expected to be put.
If an AI system is marketed with exaggerated capabilities, the “expectation of safety” increases. For instance, if a facial recognition system is marketed as “infallible security,” a failure to identify a threat might be considered a defect. Conversely, if the system is marketed as “assistive verification,” the expectation of safety is lower.
The Product Liability Directive has been revised to cover software and AI products explicitly. The revised text confirms that the assessment of defectiveness takes into account reasonably foreseeable use and misuse of the product. For AI, this is a profound challenge. “Hallucinations” (confident but false outputs) in generative AI might be considered a defect if the intended use is “fact-based reporting,” but might be acceptable if the intended use is “creative writing assistance.”
Reasonably Foreseeable Misuse
Manufacturers are not liable for misuse that is not reasonably foreseeable. However, in the context of AI, the boundary is blurry. If a manufacturer sells a general-purpose AI chatbot, is it reasonably foreseeable that users will use it to generate legal advice? If yes, the manufacturer might be liable for the consequences of that advice, unless they have explicitly restricted the use in the documentation.
This necessitates a robust “Intended Use Statement” that actively discourages high-risk applications. However, this conflicts with the commercial drive to market AI as versatile. The legal obligation to mitigate liability often requires limiting the scope of intended use, which reduces market appeal.
Practical Examples: The Power of Wording
To illustrate the practical impact, let us analyze three scenarios involving similar AI-enabled hardware.
Scenario 1: The Smart Watch
Product: A wrist-worn device with an optical sensor and AI algorithm.
Wording A: “A wellness device for tracking steps, sleep, and heart rate. Not a medical device.”
Legal Status: Consumer product. Low regulatory burden. General safety requirements only.
Wording B: “A medical device intended to monitor heart rhythm and detect atrial fibrillation for diagnostic purposes.”
Legal Status: Class IIa Medical Device (likely). Requires MDR conformity assessment, clinical evaluation, CE marking, and registration in EUDAMED. Also High-Risk AI.
Impact: Millions of Euros in compliance costs, years of delay, and significantly higher liability exposure.
Scenario 2: The Drone
Product: An autonomous drone with computer vision.
Wording A: “Recreational drone for aerial photography.”
Legal Status: Consumer drone. Subject to general product safety rules and the EU drone framework (open-category classes based largely on weight). Low risk.
Wording B: “Industrial inspection drone for assessing structural integrity of bridges.”
Legal Status: Professional equipment. The operation typically falls into a stricter category under the EU drone rules and may require an operational authorisation. High risk. Requires rigorous testing and documentation regarding the reliability of the AI vision system.
Impact: The manufacturer must ensure the AI does not miss cracks. If the drone is marketed for “safety-critical inspection,” a missed defect could lead to catastrophic failure and massive liability.
Scenario 3: The Chatbot
Product: A generative AI language model.
Wording A: “Creative writing assistant for brainstorming ideas.”
Legal Status: General Purpose AI (GPAI). Likely not high-risk under the AI Act, subject to transparency obligations and copyright compliance.
Wording B: “Automated banking agent that assesses consumers’ creditworthiness and decides on loan applications.”
Legal Status: High-Risk AI System (Annex III, point 5(b) – evaluation of the creditworthiness of natural persons). Requires risk management, data governance, human oversight, and accuracy testing. The provider must ensure the AI does not base credit decisions on hallucinated information.
Impact: The transition from a creative tool to a credit-decision tool triggers the full suite of AI Act obligations.
Regulatory Strategy: Managing the Switch
Given the high stakes, how should professionals manage intended use?
1. Regulatory Scoping by Design
Before writing a line of code, the product team must map potential intended uses against regulatory annexes. This “Regulatory Scoping” identifies the “sweet spot” where the product’s utility is maximized while the regulatory burden is minimized. It may involve deliberately restricting the software’s capabilities in the code to avoid triggering high-risk classification.
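As an illustration of what restricting capabilities in code can mean, the following minimal sketch (hypothetical Python; the term list and refusal message are invented) keeps a wellness application from answering requests that would push it into diagnostic territory:

```python
# Hypothetical sketch: refuse request categories that fall outside the
# declared intended use ("general wellness tracking"), so that shipped
# behaviour stays aligned with the low-risk classification.

OUT_OF_SCOPE_TERMS = ("diagnose", "diagnosis", "atrial fibrillation", "arrhythmia", "prescribe")


def handle_request(user_query: str) -> str:
    """Answer in-scope wellness queries; refuse anything diagnostic."""
    lowered = user_query.lower()
    if any(term in lowered for term in OUT_OF_SCOPE_TERMS):
        return (
            "This feature supports general wellness tracking only and does not "
            "provide medical diagnoses. Please consult a healthcare professional."
        )
    return run_wellness_feature(user_query)


def run_wellness_feature(user_query: str) -> str:
    # Placeholder for the in-scope wellness functionality.
    return f"Wellness summary generated for: {user_query}"


if __name__ == "__main__":
    print(handle_request("Can you diagnose my irregular heartbeat?"))
    print(handle_request("Show my weekly sleep trend"))
```

The refusal path is as much a regulatory artefact as a UX choice: it is evidence that the shipped product does not exceed its declared intended use.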
2. Documentation Discipline
The technical documentation is the legal anchor. It must contain a clear, unambiguous intended use statement. This statement should be reviewed by legal counsel. Marketing materials must mirror this statement exactly. Any deviation between marketing claims and technical documentation is a red flag for regulators.
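One way to support this discipline mechanically (a minimal sketch in Python; the claim-term list is an assumption, not a regulatory checklist) is to scan marketing copy for regulated claims that the approved intended-use statement does not support, and route any hits to legal review:

```python
# Hypothetical sketch: scan marketing copy for claim terms that the approved
# intended-use statement does not contain. A hit is a prompt for legal review,
# not an automatic verdict.

REGULATED_CLAIM_TERMS = (
    "diagnose", "diagnostic", "detect atrial fibrillation",
    "safety function", "medical device", "infallible",
)


def unsupported_claims(marketing_text: str, intended_use_statement: str) -> list[str]:
    """Return regulated terms used in marketing but absent from the intended use."""
    marketing = marketing_text.lower()
    intended = intended_use_statement.lower()
    return [term for term in REGULATED_CLAIM_TERMS if term in marketing and term not in intended]


if __name__ == "__main__":
    intended_use = "A wellness device for tracking steps, sleep and heart rate."
    marketing = "Our watch can detect atrial fibrillation before your doctor does!"
    for claim in unsupported_claims(marketing, intended_use):
        print(f"Claim not covered by the intended-use statement: {claim!r}")
```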
3. Post-Market Surveillance (PMS)
Under the MDR, AI Act, and Machinery Regulation, manufacturers must monitor the product in the market. If users are consistently using a product in a way that was not intended (foreseeable misuse), the manufacturer may be required to update the instructions, modify the software, or even issue a field safety corrective action. The picture of intended and foreseeable use is not static; it evolves with observed user behaviour.
4. The Role of the Notified Body
For high-risk products, the Notified Body acts as an independent auditor. They will scrutinize the intended use statement. If they believe the manufacturer is under-classifying the product by using vague wording (e.g., “wellness” instead of “diagnosis”), they will reject the application. Engaging a Notified Body early in the development process can clarify the necessary wording.
Conclusion: The Weight of Words
In the European regulatory landscape, the intended use is the pivot point upon which the entire legal framework turns. It is a declaration of responsibility. For AI-enabled products, where capabilities are often emergent and difficult to predict, defining intended use requires a delicate balance between ambition and caution. The manufacturer must be the master of the product’s narrative, ensuring that the story told to the market aligns perfectly with the technical reality and the legal obligations. A failure to align these elements results in regulatory delays, market exclusion, and potentially ruinous liability. The switch is flipped by the words chosen; the consequences are defined by the law.
