Foundation Models and GPAI: What Changes Under the EU AI Act
The regulatory landscape for artificial intelligence in Europe has fundamentally shifted with the adoption of Regulation (EU) 2024/1689, commonly known as the AI Act. For organizations developing or deploying general-purpose AI (GPAI) and foundation models, the Act introduces a new category of obligations that diverges significantly from the risk-based approach applied to traditional AI systems. While the Act is a harmonizing instrument at the EU level, its practical application will involve coordination with national authorities and the interplay of existing frameworks such as the GDPR, the Digital Services Act (DSA), and product safety legislation. Understanding how the Act defines, classifies, and regulates these powerful models is essential for compliance and strategic planning.
Defining the Scope: GPAI and Foundation Models
One of the most consequential aspects of the AI Act is the introduction of a dedicated regime for general-purpose AI models. The legislator chose to regulate the model itself, the pre-trained artefact, rather than only the specific AI system that incorporates it. This is a crucial distinction. The term “foundation model”, used in the European Parliament’s negotiating text, does not appear in the final Act. Instead, Article 3(63) defines a general-purpose AI model as one that is trained with a large amount of data using self-supervision at scale, displays significant generality, and is capable of competently performing a wide range of distinct tasks, regardless of the way the model is placed on the market.
The Act does not exclude open-source models outright; rather, it grants a partial exemption. Providers of models released under a free and open-source licence are relieved of the technical-documentation obligations, but only where the licence allows access, modification, and distribution of the model, and the parameters, including the weights, the architecture, and usage information, are made publicly available. This carve-out is intended to foster innovation and transparency in the open-source community, but it requires careful assessment: it does not cover the copyright policy or the training-data summary, and it falls away entirely where the model presents a systemic risk.
The Distinction Between GPAI Models and AI Systems
It is vital to distinguish between a model and a system. A GPAI model is the underlying engine, often developed by a large technology provider. An AI system is the product or service that integrates this model to perform a specific function, such as a chatbot, a recruitment tool, or a medical imaging diagnostic system. The AI Act places the heaviest compliance burden on the provider of the GPAI model, but it also imposes obligations on those who integrate such models into their own high-risk AI systems.
For most European companies, this means they will be deployers of GPAI models rather than providers. However, a company that substantially modifies a GPAI model, or fine-tunes it in a way that changes its intended purpose and introduces new risks, may be deemed a provider in its own right and assume the corresponding obligations.
Obligations for General-Purpose AI Model Providers
The obligations for providers of GPAI models are tiered. All providers must comply with a set of baseline requirements, while those whose models present systemic risks face additional, more stringent duties. The European AI Office, established within the European Commission, will be the primary enforcer for GPAI models at the EU level, coordinating with national authorities.
Baseline Obligations for All GPAI Providers
Regardless of systemic risk classification, providers of GPAI models must ensure compliance with several key obligations before placing them on the EU market. These are designed to ensure a high level of safety, transparency, and intellectual property (IP) protection.
- Technical Documentation: Providers must draw up comprehensive technical documentation that details the model’s capabilities, limitations, and the data used for training, testing, and validation. This must be made available to the AI Office and national competent authorities upon request.
- Information and Documentation for Downstream Providers: Providers must supply information and documentation to downstream providers (i.e., those integrating the model into their AI systems) to enable them to comply with their own obligations, particularly regarding high-risk AI systems.
- Copyright Compliance: A novel and specific requirement is the adoption of a policy to comply with Union copyright law. In particular, providers must identify and respect rights reservations expressed under the text and data mining (TDM) exception in Article 4(3) of the Copyright Directive (EU) 2019/790, including through state-of-the-art technologies (a minimal opt-out check is sketched after this list).
- Summaries of Training Data: Providers must publish a sufficiently detailed summary of the content used for training the model, following a template provided by the AI Office. This is intended to increase transparency and assist rights-holders in identifying potential infringements.
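The Act does not prescribe how rights reservations must be detected in practice; robots.txt directives are one widely used machine-readable signal. The sketch below checks a site’s robots.txt for disallow rules aimed at AI-training crawlers, on the assumption that such rules express a TDM reservation. The crawler names are illustrative, and a production pipeline would need broader coverage (metadata tags, licence terms, other opt-out conventions).

```python
from urllib.robotparser import RobotFileParser

# Illustrative user-agent tokens associated with AI-training crawlers;
# the Act does not mandate any particular token or detection mechanism.
TRAINING_AGENTS = ["GPTBot", "CCBot", "ExampleTrainingBot"]

def tdm_opt_out(site: str) -> bool:
    """Return True if the site's robots.txt disallows any of the training
    crawlers from its root, treated here as a machine-readable rights
    reservation under Article 4(3) of Directive (EU) 2019/790."""
    parser = RobotFileParser()
    parser.set_url(site.rstrip("/") + "/robots.txt")
    parser.read()  # network fetch; add error handling in production
    return any(not parser.can_fetch(agent, site + "/") for agent in TRAINING_AGENTS)

if __name__ == "__main__":
    site = "https://example.com"
    if tdm_opt_out(site):
        print(f"Skipping {site}: rights reservation detected")
    else:
        print(f"No reservation detected for {site}")
```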
These obligations place a significant administrative and technical burden on model developers, requiring them to document their development processes with a level of detail that may be novel for many organizations.
Identifying and Mitigating Systemic Risks
The concept of systemic risk is central to the Act’s approach to GPAI. A systemic risk is one that is specific to the high-impact capabilities of a GPAI model and has the potential to cause large-scale harm to public health, safety, security, fundamental rights, or society at large. Under Article 51(2), a model is presumed to have high-impact capabilities, and therefore to present a systemic risk, when the cumulative compute used for its training exceeds 10^25 floating-point operations (FLOPs); the Commission may adjust this threshold over time and may also designate individual models on other grounds.
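Because the presumption turns on cumulative training compute, a first-pass screen is simple arithmetic. The sketch below uses the widely cited 6 × parameters × tokens heuristic for dense transformer training, an estimation convention from the scaling-law literature, not a methodology the Act prescribes.

```python
# Presumption threshold from Article 51(2) of Regulation (EU) 2024/1689.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough cumulative training compute for a dense transformer using the
    common 6 * N * D heuristic (forward plus backward pass). A rule of
    thumb, not a calculation method prescribed by the Act."""
    return 6.0 * n_params * n_tokens

# Example: a hypothetical 70B-parameter model trained on 15T tokens.
flops = estimate_training_flops(70e9, 15e12)
print(f"Estimated compute: {flops:.2e} FLOPs")  # ~6.30e+24, below threshold
print(f"Presumption triggered: {flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS}")
```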
Providers must assess, throughout the model’s lifecycle, whether their model presents a systemic risk. This assessment is not a one-off event. A provider whose model meets, or will foreseeably meet, the systemic-risk conditions must notify the Commission without delay, and in any event within two weeks.
Key Interpretation: The determination of systemic risk is not solely a matter of crossing the compute threshold or counting parameters. It hinges on the model’s capabilities and the potential for misuse or unintended effects at scale, which requires a forward-looking, qualitative assessment.
Additional Obligations for Models with Systemic Risk
Providers of GPAI models identified as presenting systemic risks must go beyond the baseline obligations. They are subject to a set of enhanced requirements aimed at proactively managing and mitigating these risks.
- Risk Management System: A dedicated risk management system must be established, with a particular focus on identifying and mitigating potential systemic risks. This includes conducting model evaluations and, where appropriate, adversarial testing (red-teaming); a minimal harness is sketched after this list.
- Incident Reporting: Providers must track, document, and report serious incidents and possible corrective measures to the AI Office and, where relevant, national competent authorities without undue delay. This goes beyond familiar product-safety reporting regimes and requires robust internal detection and reporting mechanisms.
- Cybersecurity Protection: Given the potential for model theft or misuse, providers must implement state-of-the-art measures to ensure an adequate level of cybersecurity protection for the model, its parameters, and its physical infrastructure.
- Cooperation with the AI Office: The AI Office has the power to request access to the model for evaluation and to mandate specific risk mitigation measures. Providers are legally obliged to cooperate.
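To make the adversarial-testing duty concrete, the following sketch shows the bare shape of a red-teaming harness. The `generate` callable stands in for whatever inference API the provider exposes, and the refusal heuristic is deliberately naive; real evaluations rely on curated prompt suites and trained safety classifiers.

```python
# Minimal sketch of an adversarial-testing (red-teaming) harness.
# Nothing here is mandated by the Act; it only illustrates the shape
# of the evidence a risk management system might record.

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "unable to assist")

def is_refusal(output: str) -> bool:
    """Naive check for whether the model declined the request."""
    lowered = output.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def red_team(generate, adversarial_prompts: list[str]) -> dict:
    """Run adversarial prompts and summarize which ones the model
    complied with, for inclusion in risk-management documentation."""
    complied = [p for p in adversarial_prompts if not is_refusal(generate(p))]
    return {
        "total": len(adversarial_prompts),
        "complied": len(complied),
        "compliance_rate": len(complied) / max(len(adversarial_prompts), 1),
        "flagged_prompts": complied,
    }
```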
The enforcement of these obligations will be a major test for the new European AI Office, which must balance the need for oversight with the preservation of Europe’s competitiveness in AI development.
Integration of GPAI into High-Risk AI Systems
While the provider of the GPAI model has specific duties, the obligations for high-risk AI systems remain largely unchanged. However, the integration of a GPAI model into a high-risk system introduces new challenges for the provider of that system. The provider of the high-risk AI system must ensure that the system as a whole complies with the Act’s requirements.
This includes ensuring that the GPAI model it uses is compliant, particularly regarding the information required for the technical documentation of the high-risk system. The provider of the high-risk system must also assess how the GPAI model’s inherent characteristics, such as its potential for hallucinations or emergent behaviors, affect the system’s safety and its impact on fundamental rights.
For example, a company using a GPAI model for a high-risk recruitment tool must demonstrate that it has conducted its own conformity assessment, taking into account the specific risks posed by the underlying model. This may require additional testing and validation that goes beyond the documentation provided by the GPAI model provider.
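As an illustration of what such additional validation might look like, the sketch below measures how often a model’s answers appear verbatim in supplied reference text, a crude proxy for hallucination testing. The `generate` callable is hypothetical, and real conformity testing would use domain-specific metrics and human review.

```python
def grounding_rate(generate, test_cases: list[tuple[str, str]]) -> float:
    """Fraction of cases in which the model's answer appears in the
    supplied reference text. A crude grounding proxy: `generate` is a
    placeholder for the system's inference call, not a real API."""
    grounded = sum(
        1
        for prompt, reference in test_cases
        if (ans := generate(prompt).strip().lower()) and ans in reference.lower()
    )
    return grounded / max(len(test_cases), 1)

# Hypothetical usage: screening a recruitment tool's answers against CVs.
# cases = [("Years of Python experience for candidate A?", cv_text_a), ...]
# print(f"Grounding rate: {grounding_rate(model_answer, cases):.1%}")
```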
Practical Steps for Organizations
Organizations operating in Europe must take a proactive approach to the AI Act’s GPAI regime. The timeline is staggered: the rules for GPAI models, including the systemic-risk obligations, apply 12 months after entry into force (from 2 August 2025), while providers of models already placed on the market before that date have until 2 August 2027 to bring them into compliance. However, the preparatory work must begin now.
For Providers of GPAI Models
Organizations developing or placing GPAI models on the EU market should immediately begin to:
- Map their model portfolio: Identify which models fall under the definition of GPAI and assess their potential to be classified as systemic risks.
- Establish governance for compliance: Designate responsibility for AI Act compliance at a senior level. This is not just a task for the legal department; it requires close collaboration between legal, technical, and risk management teams.
- Develop documentation processes: Start building the systems needed to generate the required technical documentation and training data summaries; a minimal record sketch follows this list. This will likely involve creating new internal data collection and reporting mechanisms.
- Review copyright policies: Scrutinize current data acquisition and model training practices to ensure they align with the new copyright requirements. This may involve negotiating new licenses or implementing more robust filtering systems.
- Engage with the AI Office: Monitor the guidance and implementing acts to be issued by the AI Office. Participate in consultations to shape the practical application of the rules.
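As a starting point for the documentation workstream flagged above, here is a minimal sketch of a structured model record that can be exported on request. The field names are our own, loosely inspired by the kinds of information Annex XI calls for, not a format defined by the Act.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelRecord:
    """Illustrative record of Annex XI-style model documentation;
    the field names are illustrative, not the Act's own schema."""
    model_name: str
    version: str
    architecture: str
    parameter_count: int
    training_compute_flops: float
    intended_tasks: list[str] = field(default_factory=list)
    training_data_sources: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)
    evaluation_results: dict[str, float] = field(default_factory=dict)

    def export(self, path: str) -> None:
        """Serialize so the record can be produced on request."""
        with open(path, "w", encoding="utf-8") as fh:
            json.dump(asdict(self), fh, indent=2)

record = ModelRecord(
    model_name="example-lm", version="1.2.0",
    architecture="decoder-only transformer",
    parameter_count=7_000_000_000, training_compute_flops=8.4e23,
    intended_tasks=["text generation", "summarization"],
)
record.export("model_record.json")
```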
For Deployers and Integrators
Companies that use GPAI models as part of their products or internal processes must:
- Conduct due diligence on providers: When selecting a GPAI model to integrate, especially for high-risk applications, assess the provider’s compliance with the AI Act, and request the necessary documentation and information (see the checklist sketch after this list).
- Update risk management frameworks: Incorporate the risks associated with GPAI models (e.g., accuracy, bias, potential for misuse) into existing risk management and compliance frameworks.
- Prepare for high-risk system obligations: If deploying a high-risk AI system, ensure that the conformity assessment process accounts for the specific characteristics of the integrated GPAI model.
- Monitor for incidents: Implement procedures to detect and report incidents related to the use of GPAI models, in line with the Act’s requirements.
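A due-diligence step like the one described in the first item can be expressed as a simple artefact checklist keyed to the provider obligations discussed earlier. The keys and file names below are illustrative, not a format the Act defines.

```python
# Sketch of a due-diligence gate for onboarding a GPAI model. The
# artefact keys mirror the provider obligations discussed above.

REQUIRED_ARTEFACTS = {
    "technical_documentation": "Annex XI-style model documentation, Art. 53(1)(a)",
    "downstream_integration_info": "information for integrators, Art. 53(1)(b)",
    "copyright_policy": "copyright compliance policy, Art. 53(1)(c)",
    "training_data_summary": "public training-content summary, Art. 53(1)(d)",
}

def due_diligence_gaps(received: dict[str, str]) -> list[str]:
    """Return the required artefacts the candidate provider has not supplied."""
    return [key for key in REQUIRED_ARTEFACTS if not received.get(key)]

supplied = {
    "technical_documentation": "tech_doc_v3.pdf",
    "copyright_policy": "copyright_policy_2025.pdf",
}
missing = due_diligence_gaps(supplied)
if missing:
    print("Onboarding blocked; missing artefacts:")
    for key in missing:
        print(f"  - {key}: {REQUIRED_ARTEFACTS[key]}")
```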
National Implementation and Enforcement Landscape
While the AI Act is a Regulation, directly applicable in all Member States, its enforcement will be a shared responsibility. The AI Office will oversee GPAI models, but national competent authorities will supervise the application of the rules for AI systems in specific sectors (e.g., healthcare, finance, transport). This creates a complex, multi-level governance structure.
Each Member State is required to designate or establish a national competent authority and a market surveillance authority. The interaction between these national bodies and the AI Office will be critical. For example, if a national authority detects a systemic risk originating from a GPAI model in a specific sector, it will need to coordinate with the AI Office, which holds the primary jurisdiction over the model provider.
There will likely be variations in how national authorities approach enforcement. Some may prioritize consumer protection and focus on deceptive or harmful applications, while others may emphasize the interests of businesses and innovation. Companies operating across multiple European markets should prepare for potential differences in interpretation and enforcement intensity, although the AI Office is tasked with ensuring a harmonized approach for GPAI models.
Interaction with Other EU Legislation
The AI Act does not operate in a vacuum. Its provisions on GPAI and foundation models will intersect with other key EU regulations, creating a web of compliance obligations.
The General Data Protection Regulation (GDPR) remains paramount for any processing of personal data. The AI Act’s requirements for data quality and bias mitigation complement, but do not replace, the GDPR’s principles of lawfulness, fairness, and transparency. A model trained on personal data in compliance with the AI Act must still have a valid legal basis under the GDPR.
The Digital Services Act (DSA) is particularly relevant for very large online platforms (VLOPs) and very large online search engines (VLOSEs) that deploy GPAI systems. The DSA already imposes obligations related to mitigating systemic risks, including those stemming from AI systems. The AI Act complements this by setting requirements for the models themselves. Providers of GPAI models used by VLOPs/VLOSEs will need to ensure they can support those platforms in meeting their DSA obligations.
Finally, the Product Liability Directive, recently revised as Directive (EU) 2024/2853, will govern civil liability for damage caused by defective products, including AI systems. The AI Act’s conformity assessment procedures will be a key factor in determining whether a product is considered defective in liability claims. The strict obligations for GPAI providers, especially those whose models present systemic risks, will create a clear standard of care that will be referenced in liability disputes.
Looking Ahead: The Role of Standards and the Code of Practice
The practical implementation of the GPAI regime will depend heavily on the development of harmonized standards and a Code of Practice. The European standardization organizations (ESOs) have been mandated to develop standards supporting the AI Act. These standards will provide technical specifications and conformity assessment procedures that, if followed, grant a presumption of conformity with the legal requirements.
For GPAI models, standards will be crucial in areas such as:
- Defining the methodology for systemic risk assessment.
- Specifying the content and format of technical documentation.
- Establishing best practices for copyright compliance in the context of large-scale data scraping.
- Guiding the evaluation and red-teaming of high-impact capabilities.
In parallel, the AI Office is tasked with encouraging and facilitating the creation of a Code of Practice for GPAI models. This Code of Practice, developed in consultation with stakeholders, will serve as a practical tool for providers to demonstrate compliance, particularly for the obligations related to systemic risks. It is expected to become a central reference point for the industry, potentially shaping the market standard for responsible GPAI development in Europe and beyond.
For organizations working with advanced AI, the path forward requires a dual focus: deep technical understanding of the models they build or use, and a sophisticated grasp of the evolving legal framework. The EU AI Act’s GPAI regime is not merely a compliance checklist; it is a new regulatory architecture for the most powerful technologies of our time.
