
AI Inventories and Public Registers

The operationalization of the European Union’s Artificial Intelligence Act (AI Act) introduces a rigorous governance structure for high-risk artificial intelligence systems. Within this structure, maintaining an internal AI inventory and interacting with public-facing registers constitute a foundational compliance activity for public authorities and for private entities acting as public contractors. While often perceived as a bureaucratic exercise, these registers have strategic value beyond mere compliance: they serve as a critical tool for risk management, transparency assurance, and the facilitation of fundamental rights impact assessments. For professionals managing AI and data systems within European public institutions, understanding the distinction between the internal inventory required by Article 61 and the public register mandated by Article 71 is essential to navigating the regulatory landscape efficiently.

The Regulatory Foundation: Article 61 and Article 71

To understand how to maintain these records with minimal friction, one must first grasp the legal distinction between the two primary mechanisms established by the AI Act. The framework differentiates between the need for internal oversight (the inventory) and the need for public transparency (the register).

Article 61: The Internal AI Inventory

Article 61 of the AI Act mandates that providers and deployers of high-risk AI systems maintain an electronic list of those systems. This is an internal compliance document. Its primary purpose is to ensure that the institution has a complete overview of all AI systems in use that fall under the high-risk categories defined in Annex III. For providers, this inventory is the prerequisite for the conformity assessment and the application of the CE marking; for deployers (public administrations), it underpins the verification of supplier compliance.

Article 71: The Public AI Register

Article 71 establishes a publicly accessible online database for high-risk AI systems, including those used by public authorities. This is distinct from the internal inventory: it is a public transparency tool. Public authorities (deployers) are obligated to register high-risk AI systems in this database before putting them into operation. The database itself is set up and maintained by the European Commission, but data entry is the responsibility of the national public bodies using the systems.

Defining the Scope: What Goes Where?

A common source of administrative burden is the confusion regarding scope. Not every algorithm or digital tool requires registration. The AI Act focuses on “high-risk” systems as defined in Article 6 and Annex III.

High-Risk Systems

These are systems that pose significant risks to the health, safety, or fundamental rights of persons. For public institutions, this typically includes:

  • Critical Infrastructure: AI used in the management of water, energy, or transport networks.
  • Education and Vocational Training: Systems determining access to education or evaluating learning outcomes (e.g., proctoring software or grading algorithms).
  • Employment and Workers Management: AI used for recruitment, promotion, or termination decisions.
  • Access to Essential Services: Systems used by public bodies to evaluate creditworthiness or determine eligibility for social benefits.
  • Law Enforcement and Migration: Polygraphs and similar tools, individual risk assessment instruments, or systems used to examine asylum and visa applications.

These systems must be listed in the internal inventory and registered in the public database. However, low-risk or limited-risk systems (such as spam filters or simple chatbots) do not trigger the Article 61 or 71 obligations, though they may fall under transparency obligations in Article 50.

Operationalizing the Internal Inventory (Article 61)

The goal of the internal inventory is to create a “single source of truth” for the organization’s AI landscape. To maintain this with minimal bureaucracy, the process must be integrated into existing IT governance and procurement lifecycles rather than treated as a separate legal task.

Integration with Procurement

The most effective way to minimize administrative overhead is to introduce AI classification into the procurement process. When a public institution issues a tender for software or services, the tender documents should require the vendor to self-assess the risk level of the proposed solution based on Annex III.

Practical Tip: Include a mandatory clause in your Request for Proposal (RFP): “The supplier must declare if the solution qualifies as a high-risk AI system under the AI Act. If affirmative, the supplier must provide the technical documentation required for conformity assessment.”

This prevents the discovery of high-risk systems only after deployment, which creates significant retroactive compliance costs.

Data Sources for the Inventory

To avoid manual data entry, the inventory should be populated via automated feeds where possible. At a minimum, it should capture:

  1. System Identification: Name, version, and unique identifier.
  2. Provider Information: Name and contact of the developer.
  3. Deployer Information: The specific department using the system.
  4. Purpose: A clear description of the intended use.
  5. Risk Classification: The specific Annex III category justifying the high-risk status.
  6. Conformity Status: Whether the system has a CE mark or is undergoing assessment.

By linking this data to existing asset management systems, such as a configuration management database (CMDB), the burden on legal teams is reduced, and IT teams can manage the inventory as part of their standard operations.
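As a concrete illustration, the record structure might be sketched in Python as follows. The field names mirror the six items listed above but are illustrative choices for an internal tool, not a structure prescribed by the AI Act.

```python
from dataclasses import dataclass
from enum import Enum


class ConformityStatus(Enum):
    CE_MARKED = "ce_marked"
    UNDER_ASSESSMENT = "under_assessment"
    NOT_ASSESSED = "not_assessed"


@dataclass
class InventoryRecord:
    system_id: str              # 1. unique identifier, e.g. the CMDB asset ID
    name: str                   #    system name
    version: str                #    deployed version
    provider_name: str          # 2. developer of the system
    provider_contact: str
    deploying_department: str   # 3. the specific department using the system
    intended_purpose: str       # 4. clear description of the intended use
    annex_iii_category: str     # 5. e.g. "Annex III, point 5 (essential services)"
    conformity_status: ConformityStatus  # 6. CE-marked or under assessment


record = InventoryRecord(
    system_id="AI-2026-014",
    name="Benefit eligibility triage",
    version="2.3.1",
    provider_name="Acme Analytics",
    provider_contact="compliance@acme.example",
    deploying_department="Social Services",
    intended_purpose="Prioritise review of social benefit applications",
    annex_iii_category="Annex III, point 5",
    conformity_status=ConformityStatus.UNDER_ASSESSMENT,
)
```

Keeping the schema in code makes it straightforward to populate records from CMDB exports rather than by hand.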

Managing the Public Register (Article 71)

The public register is where the friction often increases due to the visibility of the data. Public institutions must balance transparency with the protection of sensitive information (e.g., trade secrets of the vendor or security details of the system).

What Information is Public?

The AI Act mandates that the public register contain specific information to ensure citizens are informed about the AI systems used by their government. This includes:

  • The name of the public authority deploying the system.
  • The purpose of the system.
  • The provider’s name.
  • The type of AI system and its risk category.
  • The date of deployment and the period of use.

Handling Sensitive Data

While transparency is key, Article 71 allows for derogations regarding the protection of sensitive information. Public institutions must establish a clear internal protocol for redacting information before it is uploaded to the public database. For example, specific technical parameters that could reveal security vulnerabilities or proprietary algorithms should be generalized. However, the core purpose and the identity of the deployer cannot be redacted, as this defeats the purpose of public accountability.
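A redaction protocol of this kind can be enforced in software rather than by manual review. The following Python sketch assumes internal records are held as dictionaries and uses a hypothetical allow-list of public fields; the authoritative field list comes from the Act and the Commission’s database schema, not from this example.

```python
# Hypothetical allow-list of fields cleared for publication.
PUBLIC_FIELDS = {
    "deploying_authority",
    "intended_purpose",
    "provider_name",
    "system_type",
    "risk_category",
    "deployment_date",
    "period_of_use",
}

def prepare_public_entry(internal_record: dict) -> dict:
    """Keep only the publicly mandated fields; technical parameters and
    security details are withheld by default."""
    entry = {k: v for k, v in internal_record.items() if k in PUBLIC_FIELDS}
    # The deployer identity and purpose must never be redacted.
    missing = {"deploying_authority", "intended_purpose"} - entry.keys()
    if missing:
        raise ValueError(f"Mandatory public fields missing: {missing}")
    return entry
```

An allow-list is deliberately chosen over a block-list: any newly added internal field stays unpublished until someone consciously classifies it as public.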

Frequency of Updates

The regulation requires that the register be updated “without delay” when a system is put into service, modified, or retired. To minimize bureaucracy, this update should be triggered automatically by the decommissioning or deployment workflows in the IT system. If a system is removed from the internal inventory, a flag should prompt the public register update.
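In practice, this can be a small event hook in the IT service management tooling. The sketch below assumes the deployment and decommissioning workflows can invoke a Python handler; the event names and queue are hypothetical.

```python
from datetime import datetime, timezone

# Pending updates to push to the national/public register.
register_update_queue: list[dict] = []

def on_lifecycle_event(system_id: str, event: str) -> None:
    """Queue a public-register update whenever a system is deployed,
    substantially modified, or retired."""
    if event in {"deployed", "modified", "retired"}:
        register_update_queue.append({
            "system_id": system_id,
            "event": event,
            "flagged_at": datetime.now(timezone.utc).isoformat(),
        })

# e.g. the decommissioning workflow calls:
on_lifecycle_event("AI-2026-014", "retired")
```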

Distinguishing EU-Level vs. National Implementation

While the AI Act is a Regulation (meaning it applies directly in all Member States), the practical mechanics of registration and oversight of the public register depend on national implementation.

The Role of National Competent Authorities (NCAs)

Each Member State designates a market surveillance authority. In the context of public registers, the NCAs are responsible for supervising the accuracy of the entries. In some countries, like France or Germany, existing data protection authorities (CNIL or BfDI) may take on this role alongside new digital governance bodies.

Public institutions must be aware that while the format of the register is harmonized at the EU level (via the Commission’s database), the oversight is national. This means that interpretations of what constitutes a “high-risk” system might vary slightly until the European AI Office issues further harmonized guidelines.

Comparative Approaches: The “Digital Officer” Model

Different European countries are adopting varying administrative models to handle these obligations:

  • Centralized Models (e.g., Estonia): Highly digitized public sectors often utilize a central Chief Data Officer or Digital Officer who maintains a unified register for all government AI. This minimizes bureaucracy by centralizing expertise.
  • Decentralized Models (e.g., Spain/Italy): In regions with strong local autonomy, individual ministries or regional governments maintain their own registers. This requires robust inter-agency coordination to ensure consistency.

For institutions in decentralized systems, the challenge is avoiding duplication of effort. It is advisable to adopt a federated approach where a central template is used, but data entry is distributed, with a central oversight body performing periodic audits.

Minimizing Bureaucracy: The “Compliance by Design” Approach

The perception of the AI inventory and register as a bureaucratic burden usually stems from treating them as “after-the-fact” documentation tasks. To achieve minimal bureaucracy, the process must be inverted: documentation must happen during development or procurement, not after.

Automating Documentation

Modern AI governance platforms can automatically ingest metadata from AI models. For public institutions using cloud-based AI services (SaaS), the vendor should be contractually obligated to provide the data required for the register in a machine-readable format (e.g., JSON or XML). This allows the public institution to upload the data directly into the national database without manual typing.
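As a sketch of that workflow, the snippet below ingests a vendor-supplied JSON file and maps it onto the institution’s own field names. The vendor file format and the mapping table are assumptions that would be agreed by contract, not a standardized schema.

```python
import json

# Hypothetical mapping from contractually agreed vendor keys to inventory fields.
FIELD_MAP = {
    "product_name": "name",
    "product_version": "version",
    "vendor": "provider_name",
    "use_case": "intended_purpose",
    "annex_iii_ref": "annex_iii_category",
}

def ingest_vendor_file(path: str) -> dict:
    """Load the vendor's machine-readable description and translate its
    keys into the inventory's own field names."""
    with open(path, encoding="utf-8") as f:
        vendor_data = json.load(f)
    return {ours: vendor_data[theirs] for theirs, ours in FIELD_MAP.items()}
```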

The Role of Templates

Standardizing the description of AI systems is crucial. Public institutions should develop internal templates for “AI Use Case Statements.” These templates force project managers to answer the specific questions required by the AI Act early in the project lifecycle. If the template is not completed, the project cannot proceed to the funding or implementation stage. This “gatekeeping” approach ensures the inventory is always complete.
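The gate itself can be automated. Below is a minimal sketch, assuming the template is captured as structured data; the required fields shown are illustrative, not an exhaustive reading of the Act.

```python
# Illustrative set of fields an "AI Use Case Statement" must complete.
REQUIRED_FIELDS = [
    "system_name", "intended_purpose", "affected_persons",
    "annex_iii_assessment", "data_sources", "human_oversight_measures",
]

def gate_check(statement: dict) -> list[str]:
    """Return the missing or empty fields; an empty result means the
    project may proceed to the funding stage."""
    return [f for f in REQUIRED_FIELDS if not statement.get(f)]

blockers = gate_check({"system_name": "Benefit triage", "intended_purpose": ""})
assert blockers  # incomplete statement: the project is held at the gate
```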

Interplay with GDPR

Public institutions are already familiar with Records of Processing Activities (RoPA) under GDPR Article 30. The AI inventory can be viewed as a specialized extension of the RoPA. By creating a unified data governance framework that links AI systems to the data they process, institutions can avoid maintaining two entirely separate silos of information. The AI inventory should reference the relevant Data Protection Impact Assessment (DPIA), as many high-risk AI systems will trigger a DPIA requirement.
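One lightweight way to express this linkage is to store cross-references alongside each inventory entry, as in the following sketch; the identifier formats are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GovernanceLinks:
    inventory_id: str       # AI inventory record
    ropa_id: str            # GDPR Art. 30 Record of Processing Activities
    dpia_id: Optional[str]  # set once the system has triggered a DPIA
    fria_id: Optional[str]  # set once Article 27 has required a FRIA

links = GovernanceLinks("AI-2026-014", "ROPA-0452",
                        dpia_id="DPIA-0091", fria_id=None)
```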

Risk Management and Fundamental Rights Impact Assessments (FRIA)

The inventory is not just a list; it is the input for risk management. Under Article 27, deployers of high-risk AI systems must conduct a Fundamental Rights Impact Assessment before putting the system into use.

Using the Inventory for FRIA

The inventory allows an institution to map out which systems require a FRIA. For example, if a municipality registers a system for allocating social housing, the inventory triggers the requirement to assess the impact on the fundamental rights to housing and non-discrimination. Without a comprehensive inventory, an institution may fail to conduct a mandatory FRIA, leading to regulatory sanctions.
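Deriving the FRIA work list can then be a simple query over the inventory. The rule in this sketch (every Annex III system without a completed FRIA on record is flagged) is a deliberate simplification of Article 27 for illustration.

```python
def systems_requiring_fria(inventory: list[dict]) -> list[dict]:
    """Flag Annex III systems that have no completed FRIA on record."""
    return [
        rec for rec in inventory
        if rec.get("annex_iii_category") and not rec.get("fria_completed")
    ]

inventory = [
    {"name": "Housing allocation scorer",
     "annex_iii_category": "Annex III, point 5", "fria_completed": False},
    {"name": "Spam filter", "annex_iii_category": None},
]
print(systems_requiring_fria(inventory))  # only the housing scorer is flagged
```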

Identifying Systemic Risks

By aggregating data from the inventory, public institutions can identify systemic risks. If a city uses ten different AI systems from the same provider, and that provider faces a compliance audit or a data breach, the inventory allows the institution to immediately identify all affected services. This turns the inventory from a compliance burden into a resilience tool.
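A short sketch of that aggregation, grouping inventory entries by provider so that every service affected by a supplier incident surfaces at once (the sample data is invented):

```python
from collections import defaultdict

def systems_by_provider(inventory: list[dict]) -> dict[str, list[str]]:
    """Group system names by their provider to expose concentration risk."""
    grouped: dict[str, list[str]] = defaultdict(list)
    for rec in inventory:
        grouped[rec["provider_name"]].append(rec["name"])
    return dict(grouped)

inventory = [
    {"name": "Benefit triage", "provider_name": "Acme Analytics"},
    {"name": "Permit OCR", "provider_name": "Acme Analytics"},
    {"name": "Traffic forecaster", "provider_name": "CivicML"},
]
# If Acme Analytics suffers a breach, both affected services surface at once.
print(systems_by_provider(inventory)["Acme Analytics"])
```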

Timeline and Transition

The AI Act applies in a staggered manner. It entered into force in mid-2024, but the obligations for high-risk systems and public registers are phased in.

Key Timelines for Public Institutions:

  • 6 Months (February 2025): Prohibitions on unacceptable-risk AI apply. While not directly related to the register, this requires institutions to ensure they are not using banned systems.
  • 24 Months (August 2026): The provisions regarding high-risk systems (Annex III) become applicable. Public institutions must have their inventories in place and registers populated for systems already in use.

It is crucial for institutions to start the audit process now. Waiting until the enforcement date creates a risk of “compliance debt,” where the administrative workload becomes unmanageable due to the volume of undocumented systems.

Strategic Interpretation for AI Practitioners

For the technical teams building or deploying these systems, the inventory and register are not merely legal artifacts; they are specifications for system architecture.

Technical Documentation Alignment

The information required for the internal inventory (Article 61) overlaps significantly with the technical documentation required for the conformity assessment (Annex IV). Practitioners should ensure that the engineering documentation—logging, monitoring, and explainability features—is designed to populate the inventory automatically. For instance, the “logging” capabilities of an AI system should be able to export a report that matches the inventory fields.
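A minimal sketch of such an export hook, assuming runtime logs are available as structured entries; the function and field names are illustrative, not a prescribed interface.

```python
import json
from datetime import datetime, timezone

def export_inventory_report(system_id: str, version: str,
                            run_logs: list[dict]) -> str:
    """Condense runtime logs into a report whose keys match the
    inventory fields, ready for automated ingestion."""
    report = {
        "system_id": system_id,
        "version": version,
        "first_deployed": min(entry["timestamp"] for entry in run_logs),
        "exported_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(report, indent=2)

print(export_inventory_report(
    "AI-2026-014", "2.3.1",
    [{"timestamp": "2026-03-01T09:00:00Z"},
     {"timestamp": "2026-03-02T10:30:00Z"}],
))
```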

Interpretation of “Substantial Modification”

A common point of confusion is when an update to an AI system triggers re-registration. The AI Act treats a change as a “substantial modification” when it was not foreseen in the initial conformity assessment and affects the system’s compliance with the Act’s requirements or alters its intended purpose. Practitioners must establish a documented threshold for what meets this bar: a minor bug fix does not require re-registration, but retraining the model in a way that significantly changes its error rate or bias profile does. This decision-making process should be documented within the inventory system itself.
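Such a threshold can be encoded directly into the inventory tooling so the decision is reproducible. In the sketch below, the two-percentage-point error-rate delta is an internal policy choice used for illustration, not a value taken from the AI Act.

```python
def is_substantial_modification(old_metrics: dict, new_metrics: dict,
                                purpose_changed: bool,
                                error_delta_threshold: float = 0.02) -> bool:
    """Apply the organization's documented re-registration threshold."""
    error_shift = abs(new_metrics["error_rate"] - old_metrics["error_rate"])
    return purpose_changed or error_shift >= error_delta_threshold

# A minor bug fix: same purpose, negligible metric shift -> no re-registration.
assert not is_substantial_modification(
    {"error_rate": 0.080}, {"error_rate": 0.081}, purpose_changed=False)
# Retraining that materially shifts the error rate -> re-registration.
assert is_substantial_modification(
    {"error_rate": 0.080}, {"error_rate": 0.120}, purpose_changed=False)
```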

Conclusion on Operational Strategy

The maintenance of AI inventories and public registers is a structural requirement that bridges the gap between technical development and legal accountability. By integrating these requirements into the procurement lifecycle, utilizing automated data feeds, and aligning them with existing GDPR workflows, public institutions can transform a perceived bureaucratic burden into a strategic asset for digital governance. The focus must remain on the quality of the data and the timeliness of the updates, ensuring that the public register serves its true purpose: fostering trust in the use of AI within the public sector.
