
Human-Robot Interaction Risks and Safety Boundaries

Risks in human-robot interaction (HRI) arise not only from mechanical failure or software error, but from the convergence of dynamic environments, uncertain models, and human expectations. In Europe, these risks are governed by a layered framework in which the AI Act, the Machinery Regulation, and sector-specific legislation (such as the rules for medical devices) interact with national rules and harmonised standards. For professionals designing, deploying, or supervising robotic and AI-enabled systems, defining safety boundaries and supervision rules is an exercise in legal compliance, systems engineering, and human factors design. It requires translating broad obligations into verifiable design choices and operational controls.

Mapping the European regulatory landscape for HRI

At the EU level, three pillars are central for HRI: the AI Act (Regulation (EU) 2024/1689), the Machinery Regulation (Regulation (EU) 2023/1230), and the Product Liability Directive (recently revised to cover software and AI-enabled products). Sectoral regimes also apply where relevant, notably the Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR), as well as the Radio Equipment Directive and the Cyber Resilience Act (CRA). The overarching framework for data is the GDPR, which shapes what personal data can be processed for safety monitoring or operator profiling.

These instruments are complemented by harmonised standards developed by CEN and CENELEC, drawing on the work of ISO/TC 299 (robotics). Standards are not law, but harmonised standards cited in the Official Journal provide a presumption of conformity with the corresponding legal requirements. For HRI, key standards include ISO 10218 (industrial robots), ISO/TS 15066 (collaborative robots), ISO 13482 (personal care robots), and ISO 12100 (risk assessment and risk reduction). The relationship between law and standards is practical: if you meet the relevant harmonised standard, you are presumed to meet the essential health and safety requirements of the Machinery Regulation and other applicable legislation. Deviations are possible, but they require robust technical justification and evidence.

At the national level, member states designate market surveillance authorities for machinery and AI, and notified bodies for conformity assessment. Some countries have additional rules for specific use cases, such as autonomous mobile robots in public spaces or drones. For example, Germany’s Product Safety Act (ProdSG) and the DGUV rules for occupational safety influence how collaborative robots are integrated in workplaces. France and the Netherlands have specific guidance for autonomous delivery robots in public areas. These national implementations do not replace EU law but add operational constraints and enforcement nuances.

How the AI Act changes HRI risk classification

The AI Act introduces a risk-based approach that overlays traditional product safety. For HRI, most systems will be classified as either limited risk or high-risk. Limited risk systems, such as some collaborative assistants with deterministic behavior, have transparency obligations but fewer conformity burdens. High-risk AI systems include those used as safety components in machinery, in critical infrastructure, or in medical devices. If a robot’s control system qualifies as a safety component under the Machinery Regulation, or if it is used in a regulated sector like healthcare, it will likely be high-risk under the AI Act.

High-risk AI systems are subject to risk management, data governance, technical documentation, record-keeping, transparency, human oversight, and cybersecurity obligations. Conformity assessment may involve a notified body, depending on the specific annex and existing sectoral certification.

Importantly, the AI Act coordinates with existing EU product legislation: for AI systems that serve as safety components of products already subject to third-party conformity assessment (such as machinery), the AI Act requirements can be assessed within that existing sectoral pathway rather than through a separate procedure. In practice, this means a collaborative robot controller that is certified under the Machinery Regulation may not need a stand-alone AI Act conformity assessment if the overlapping requirements are covered there. However, documentation and governance obligations under the AI Act still apply and must be integrated into the technical file and quality management system.

Machinery Regulation and the definition of safety components

The Machinery Regulation (MR) replaces the Machinery Directive and applies from January 2027. It expands the scope to include more software-driven machinery and clarifies the obligations for partially completed machinery. For HRI, the MR defines essential health and safety requirements that address control reliability, stopping functions, ergonomic risks, and hazards from unexpected movements. A key concept is the safety component: a component whose failure can endanger people, and which is specifically designed to fulfill a safety function.

If a robot’s controller implements safety-rated functions such as safe torque off, speed and separation monitoring, or collision detection, it is likely a safety component. The MR requires that safety components undergo conformity assessment by a notified body unless specific modules allow self-certification. For collaborative robots, the MR and ISO/TS 15066 work together: the MR sets legal obligations, while the standard provides measurable thresholds for contact forces and pressures to prevent injury.

Defining safety boundaries in HRI

Safety boundaries are the operational and design limits within which the system remains acceptably safe. They are defined through risk assessment and realized through engineering controls. In HRI, boundaries can be spatial, temporal, kinematic, or cognitive. Spatial boundaries define zones around the robot where human presence changes the robot’s behavior. Temporal boundaries define reaction times and stopping distances. Kinematic boundaries limit speed, force, and momentum. Cognitive boundaries address the predictability of the robot’s actions to the human operator or bystander.

Spatial boundaries: zones and speed and separation monitoring

For industrial robots, safety zones are often implemented using light curtains, laser scanners, or safety cameras. The robot’s control system uses these inputs to switch between operational modes, such as reduced speed and/or limited force. In collaborative applications, two of the collaborative operation methods described in ISO/TS 15066 are central: power and force limiting, and speed and separation monitoring. Power and force limiting ensures that any contact remains below injury thresholds (e.g., quasi-static and transient pressure limits for different body regions). Speed and separation monitoring maintains a protective separation distance between the human and the robot, dynamically adjusting speed as the distance changes.

Calculating the protective separation distance requires estimating the human’s approach speed, the robot’s stopping time, and a detection uncertainty margin. The formula in ISO/TS 15066 is often implemented in the robot controller. Practically, this means the robot must be able to detect a person within a defined field of view and reliably stop before the separation distance is breached. The safety boundary is not static; it adapts to the robot’s current speed and payload, and to environmental factors that affect sensor performance.
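
As an illustration only, the sketch below shows how such a calculation can be structured in software; the function and parameter names are placeholders, and the actual values (human approach speed, reaction and stopping times, intrusion distance, measurement uncertainties) must be taken from ISO/TS 15066 and ISO 13855 and from the robot’s validated stopping performance at the current speed and payload.

```python
def protective_separation_distance(
    v_human: float,      # human approach speed toward the robot [m/s]
    v_robot: float,      # robot speed toward the human [m/s]
    t_reaction: float,   # detection plus system reaction time [s]
    t_stop: float,       # robot stopping time at current speed and payload [s]
    d_stop: float,       # robot stopping distance at current speed and payload [m]
    intrusion: float,    # intrusion distance of body parts past the sensing field [m]
    unc_human: float,    # position measurement uncertainty for the human [m]
    unc_robot: float,    # position measurement uncertainty for the robot [m]
) -> float:
    """Simplified protective separation distance S_p.

    S_p = v_h * (T_r + T_s) + v_r * T_r + S_s + C + Z_d + Z_r
    """
    s_human = v_human * (t_reaction + t_stop)  # human travel while the system reacts and stops
    s_reaction = v_robot * t_reaction          # robot travel before braking begins
    return s_human + s_reaction + d_stop + intrusion + unc_human + unc_robot


def required_action(measured_distance: float, s_p: float, margin: float = 0.1) -> str:
    """Map the current separation against the dynamic boundary to a control action."""
    if measured_distance <= s_p:
        return "protective_stop"
    if measured_distance <= s_p + margin:
        return "reduce_speed"
    return "continue"
```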

Kinematic boundaries: force, speed, and momentum limits

Defining kinematic boundaries requires understanding the biomechanical tolerance of humans. ISO/TS 15066 provides reference values for contact forces on different body parts. For example, a transient impact to the hand may be allowed at higher force than quasi-static pressure on the torso. The robot’s controller must enforce these limits by design, either through hardware constraints or software limits with safety integrity (e.g., SIL 2/PL d as per IEC 61508/ISO 13849).
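
The link between a permissible contact force and a permissible robot speed can be illustrated with the simple energy-transfer model used for power and force limiting: the kinetic energy of the contact pair is equated with the energy absorbed by the body region, giving a maximum relative speed for a given force limit. The sketch below assumes that model; the numeric inputs are placeholders for illustration, not the reference values of the standard.

```python
import math

def reduced_mass(m_robot_effective: float, m_body_region: float) -> float:
    """Two-body reduced mass of the robot/human contact pair [kg]."""
    return 1.0 / (1.0 / m_robot_effective + 1.0 / m_body_region)

def max_relative_speed(f_max: float, k_body: float, mu: float) -> float:
    """Maximum relative speed keeping the transient contact force below f_max.

    Energy-transfer model: F^2 / (2k) = 0.5 * mu * v^2  =>  v = F / sqrt(mu * k)
    f_max  : permissible transient contact force for the body region [N]
    k_body : effective spring constant of the body region [N/m]
    mu     : reduced mass of the contact pair [kg]
    """
    return f_max / math.sqrt(mu * k_body)

# Placeholder inputs for illustration only; body-region force limits and
# spring constants must be taken from the applicable standard.
mu = reduced_mass(m_robot_effective=12.0, m_body_region=4.0)
print(f"speed cap for this contact case: {max_relative_speed(140.0, 75_000.0, mu):.2f} m/s")
```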

For mobile robots, momentum limits are relevant. The robot’s mass and speed must be controlled to prevent dangerous impacts. This is particularly important for autonomous mobile robots (AMRs) in logistics or healthcare, where pedestrian traffic is unpredictable. Safety boundaries here involve maximum speed in mixed-use corridors, slower speeds near doorways, and automatic stops when obstacles are detected within a certain range.
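
A minimal sketch of such a zone-based speed governor is shown below; the zone names, speed caps, and stop distance are illustrative placeholders that would come from the site-specific risk assessment and the platform’s validated stopping distances.

```python
from typing import Optional

# Illustrative speed caps per zone [m/s]; actual values come from the risk assessment.
ZONE_SPEED_CAPS = {
    "mixed_use_corridor": 1.2,   # pedestrians routinely share the space
    "doorway": 0.5,              # restricted sightlines, sudden encounters possible
    "segregated_aisle": 2.0,     # no routine pedestrian access
}

def amr_speed_limit(zone: str, obstacle_distance: Optional[float]) -> float:
    """Return the commanded speed cap for the current zone, with an automatic
    stop when an obstacle is detected inside the protective range."""
    if obstacle_distance is not None and obstacle_distance < 0.5:
        return 0.0
    return ZONE_SPEED_CAPS.get(zone, 0.3)   # unknown zone -> conservative default
```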

Cognitive boundaries: predictability and feedback

Humans need to understand what the robot will do next. Unpredictable movements increase the likelihood of startle responses and unsafe reactions. Cognitive safety boundaries include clear signaling of intent, such as visual indicators of motion plans, audible warnings, and haptic feedback where appropriate. For collaborative robots, the robot should move in a way that is intuitive and smooth, avoiding sudden direction changes. In service robots, user interfaces must communicate operational states and safety constraints to operators and bystanders.

Transparency obligations under the AI Act reinforce this. High-risk systems must be designed to ensure that outputs are interpretable and that operators can understand the system’s limitations. This is not just a usability requirement; it is a safety requirement. If an operator cannot predict the robot’s behavior, they may override safety functions or place themselves in harm’s way.

Environmental boundaries: variability and uncertainty

HRI safety boundaries must account for environmental variability. Lighting changes can affect vision-based detection; electromagnetic interference can affect sensors; obstacles can occlude human detection. Risk assessment must consider these factors and define conservative boundaries that remain safe even under adverse conditions. This often leads to design choices such as redundant sensing (e.g., combining lidar and vision), fail-safe behaviors (e.g., stop on sensor loss), and periodic self-checks.
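
One way to express the “stop on sensor loss” behavior is a watchdog that treats stale or failed detection data as an unsafe condition; the sketch below assumes a single detection channel and an illustrative timeout, and in a real system the resulting stop command would be routed through the safety-rated channel.

```python
import time
from typing import Optional

class SensorWatchdog:
    """Fail-safe supervision of a detection sensor: if valid data stops
    arriving, the robot degrades to a protective stop rather than acting
    on stale information. The 200 ms timeout is an illustrative placeholder."""

    def __init__(self, timeout_s: float = 0.2):
        self.timeout_s = timeout_s
        self._last_valid: Optional[float] = None

    def report_frame(self, self_check_ok: bool) -> None:
        """Call for every sensor frame whose built-in diagnostics passed."""
        if self_check_ok:
            self._last_valid = time.monotonic()

    def safe_to_run(self) -> bool:
        """False if no valid data was ever received or the last valid frame is stale."""
        if self._last_valid is None:
            return False
        return (time.monotonic() - self._last_valid) < self.timeout_s
```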

Supervision rules: human oversight and operational controls

Supervision in HRI combines human oversight with automated controls. The AI Act requires meaningful human oversight for high-risk systems, which means that humans can intervene or override the system when necessary. For robots, this translates into clear modes of operation, accessible emergency stops, and procedures for handling anomalies.

Modes of operation and mode switching

Industrial and collaborative robots typically have several modes: automatic, manual, teach, and emergency. Each mode has specific safety requirements. Mode switching must be deliberate and secured (e.g., key switch or password-protected access). In automatic mode, safety functions must be active and cannot be bypassed. In teach mode, reduced speeds and enabling devices are used to allow safe programming and maintenance.

For mobile robots, modes may include autonomous navigation, teleoperation, and maintenance. Supervision rules must define when and how the robot can switch modes, and under what conditions it must revert to a safe state. For example, if the communication link for teleoperation is lost, the robot should stop or return to a designated safe area.
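
These rules can be made explicit as a small state machine: transitions are only executed when they are both permitted and authorised, and loss of the teleoperation link forces a reversion to a safe state. The sketch below is illustrative; mode names and permitted transitions would follow the system’s own risk assessment.

```python
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()
    TELEOPERATION = auto()
    MAINTENANCE = auto()
    SAFE_STOP = auto()

# Only deliberate, authorised transitions are permitted; everything else is rejected.
ALLOWED_TRANSITIONS = {
    Mode.AUTONOMOUS:    {Mode.TELEOPERATION, Mode.MAINTENANCE, Mode.SAFE_STOP},
    Mode.TELEOPERATION: {Mode.AUTONOMOUS, Mode.SAFE_STOP},
    Mode.MAINTENANCE:   {Mode.AUTONOMOUS, Mode.SAFE_STOP},
    Mode.SAFE_STOP:     {Mode.MAINTENANCE},   # recovery only via a supervised mode
}

def request_mode(current: Mode, requested: Mode, operator_authorised: bool) -> Mode:
    """Mode switching must be deliberate and secured: unauthorised or
    disallowed requests leave the current mode unchanged."""
    if operator_authorised and requested in ALLOWED_TRANSITIONS[current]:
        return requested
    return current

def on_teleop_link_status(current: Mode, link_alive: bool) -> Mode:
    """If the teleoperation link is lost, revert to a safe state."""
    if current is Mode.TELEOPERATION and not link_alive:
        return Mode.SAFE_STOP
    return current
```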

Human oversight under the AI Act

Human oversight for high-risk AI systems must be designed to prevent misuse and to allow timely intervention. This includes:

  • Clear instructions on the system’s capabilities and limitations;
  • Training for operators on how to interpret system outputs and warnings;
  • Accessible override mechanisms that are effective under operational conditions;
  • Monitoring of system performance and logging of incidents for post-hoc review.

These obligations are not abstract. They require concrete design choices: the placement of emergency stops, the visibility of status indicators, the ergonomics of interfaces, and the procedures for incident reporting. In sectors like healthcare, oversight may involve clinical staff who must understand the robot’s role in patient care and the conditions under which it should not be used.

Operational controls: procedures and responsibilities

Operational controls are the procedures that govern how the system is used in practice. They include pre-use checks, calibration routines, maintenance schedules, and incident response plans. For HRI, it is essential to define responsibilities: who can authorize operation, who can modify safety parameters, and who investigates incidents. These procedures should be documented and integrated into the organization’s quality management system.

From a regulatory perspective, operational controls are part of the risk management system. The AI Act requires that risk management be a living process, updated throughout the lifecycle of the system. This means that operational controls must be reviewed after incidents, changes in environment, or updates to software and AI models.

Supervision in collaborative vs. autonomous contexts

Collaborative robots typically rely on continuous supervision because humans and robots share space. Supervision is built into the control system through real-time sensing and safety functions. Autonomous robots, such as AMRs or outdoor service robots, may operate with intermittent supervision. Here, supervision rules often include remote monitoring centers, automated alerts, and periodic human checks. The level of supervision should be proportional to the risk: higher speeds, heavier payloads, or operation near vulnerable populations require more stringent supervision.

Training: reducing incidents through competence

Training is a regulatory obligation and a practical risk control. The AI Act emphasizes that human oversight is only effective if operators are competent. For HRI, training must cover both the technical system and the organizational context in which it operates.

Operator training

Operators need to understand:

  • The robot’s functions and limitations;
  • How to start, stop, and override the system;
  • How to interpret warnings and error states;
  • How to work safely in shared spaces, including posture and movement patterns;
  • How to report incidents and near-misses.

Training should be practical, with hands-on exercises in the actual environment. For collaborative robots, this includes practicing movements that minimize risk and understanding the force limits. For mobile robots, it includes understanding navigation patterns and how to interact with the robot during tasks like loading or maintenance.

Maintenance and engineering training

Maintenance staff require deeper technical training to perform calibration, safety function tests, and software updates. They must understand how safety boundaries are implemented and how changes to parameters affect risk. For AI-enabled systems, they need to understand the impact of model updates and data drift on safety. Maintenance procedures should include verification of safety functions after any intervention.

Supervisor and manager training

Supervisors and managers need training on regulatory obligations, incident investigation, and risk management. They must ensure that procedures are followed and that resources are allocated for safety. In sectors with higher regulatory scrutiny (e.g., medical devices), they must also ensure that training records are maintained and auditable.

Training effectiveness and continuous improvement

Training is not a one-time event. Effectiveness should be measured through assessments, observation, and incident analysis. Updates to the system or changes in the environment should trigger refresher training. The AI Act’s requirement for continuous risk management implies that training programs should be reviewed and improved over time.

Risk assessment methodologies for HRI

Risk assessment is the foundation for defining safety boundaries and supervision rules. It must follow a recognized methodology, such as ISO 12100. The process includes hazard identification, risk estimation, and risk reduction measures. For HRI, specific hazards include unexpected movements, pinch points, collisions, and loss of control due to AI errors or sensor failures.

Hazard identification

Identify hazards across the lifecycle: installation, operation, maintenance, and disposal. Consider normal and fault conditions. For AI-enabled robots, include hazards from data bias, model drift, adversarial inputs, and cyberattacks. Involve multidisciplinary teams, including engineers, human factors specialists, and operators.

Risk estimation

Estimate risk by combining severity of harm and probability of occurrence. For HRI, probability is influenced by the frequency of human-robot contact, the effectiveness of detection systems, and the robustness of control. Use quantitative data where possible (e.g., stopping distances, force limits) and qualitative judgment where data is limited.
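
A minimal severity-times-probability scoring sketch is shown below; the scales, categories, and thresholds are illustrative and would be defined in the organisation’s own risk assessment procedure rather than taken from this example.

```python
# Illustrative ordinal scales; real projects define their own categories and thresholds.
SEVERITY = {"minor": 1, "moderate": 2, "serious": 3, "fatal": 4}
PROBABILITY = {"rare": 1, "occasional": 2, "frequent": 3}

def risk_level(severity: str, probability: str) -> str:
    """Combine severity of harm and probability of occurrence into a risk class."""
    score = SEVERITY[severity] * PROBABILITY[probability]
    if score >= 9:
        return "high: redesign or add safeguarding"
    if score >= 4:
        return "medium: further risk reduction required"
    return "low: manage through information for use"

# Example: frequent human-robot contact with serious injury potential
print(risk_level("serious", "frequent"))   # high: redesign or add safeguarding
```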

Risk reduction

Apply the hierarchy of risk reduction: inherently safe design, safeguarding and protective measures, and information for use. For HRI, this often means:

  • Designing kinematic limits and safe stopping functions;
  • Implementing spatial boundaries with sensors and safety zones;
  • Providing clear user interfaces and training;
  • Establishing supervision rules and operational procedures.

Document the risk assessment and keep it updated. Under the AI Act, risk management must be continuous, and records must be kept for the system’s lifetime plus a defined period.

Verification and validation

Verification checks that the system meets its design requirements; validation checks that it meets user needs and is safe in its intended environment. For HRI, this includes testing safety functions under normal and fault conditions, simulating human presence, and conducting usability tests. Results must be documented and traceable to requirements.

Technical implementation of safety boundaries

Implementing safety boundaries requires integrating hardware and software controls with safety integrity. The following elements are typical in HRI systems.

Safety-rated controllers and I/O

Safety functions should be implemented in safety-rated PLCs or dedicated safety controllers, not in standard control logic alone. Safety inputs (e.g., light curtains, emergency stops) and outputs (e.g., safe torque off) must meet the required performance level (PL) or safety integrity level (SIL). This ensures that the system reliably stops or reduces risk when boundaries are breached.

Redundancy and diversity

Redundancy (multiple sensors) and diversity (different sensing principles) reduce common-cause failures. For example, combining lidar and vision for human detection increases robustness. Redundancy must be managed carefully to avoid spurious trips, which can lead to operators bypassing safety functions.
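
A conservative way to combine diverse detection channels is to treat the shared workspace as occupied unless every channel reports it clear, as in the sketch below; this errs on the side of stopping when channels disagree, which is exactly the spurious-trip trade-off mentioned above. Real implementations run on safety-rated hardware with validated diagnostics for each channel.

```python
def workspace_occupied(lidar_clear: bool, vision_clear: bool) -> bool:
    """Conservative voting over diverse sensing channels: the workspace is
    treated as occupied (triggering slow-down or stop) unless BOTH the lidar
    and the vision channel report it clear."""
    return not (lidar_clear and vision_clear)
```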

Real-time monitoring and logging

Real-time monitoring of safety parameters (e.g., speed, force, separation distance) is essential. Logs should capture safety events, mode changes, overrides, and faults. These logs support incident investigation and regulatory compliance. Under the AI Act, logging also supports traceability for AI components.
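
As a simple illustration, safety-relevant events can be written as structured, append-only records so they can be correlated during incident investigation; the field names and file-based sink below are placeholders for whatever logging infrastructure and retention policy the system actually uses.

```python
import json
import time

def log_safety_event(event_type: str, details: dict, path: str = "safety_events.jsonl") -> None:
    """Append one structured safety record (mode change, override, fault,
    boundary violation) as a JSON line for later investigation and traceability."""
    record = {
        "timestamp": time.time(),
        "event": event_type,     # e.g. "protective_stop", "override", "mode_change"
        "details": details,      # e.g. {"trigger": "separation_distance", "separation_m": 0.42}
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_safety_event("protective_stop", {"trigger": "separation_distance", "separation_m": 0.42})
```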

Cybersecurity controls

Cybersecurity is a safety issue. The Cyber Resilience Act and AI Act require robust security measures, including secure boot, access control, encryption, and vulnerability handling. A cyberattack that disables safety functions can lead to catastrophic outcomes. Security must be designed into the system and maintained through updates.

Update management

Software and AI updates can change behavior and risk. Update processes must include impact analysis on safety boundaries, regression testing, and controlled deployment. For high-risk AI systems, significant changes may require re-conformity assessment. Operators must be informed of changes that affect safety or supervision requirements.

Case studies: applying HRI safety boundaries in practice

Illustrative scenarios help clarify how regulatory requirements translate into design and operational choices.

Collaborative robot workstation in manufacturing

A collaborative robot arm assists with assembly. The risk assessment identifies pinch hazards and unexpected movements. The design implements power and force limiting so that any contact with the operator remains below the applicable force and pressure thresholds.
