Ahead of the EU AI Act’s (the “Act”) General Purpose AI (“GPAI”) model requirements coming into force on 2 August 2025, EU authorities have released further guidance and Codes of Practice detailing how these rules should be interpreted and applied. In particular:
- GPAI Model Provider Guidance: The Commission has published additional guidance targeted specifically at GPAI model providers. While much of the guidance reiterates information already established in the Act, its recitals, and previous documentation, it does include some useful clarifications.
- Codes of Practice: Alongside the guidance, a body of independent experts also unveiled three long-anticipated voluntary Codes of Practice (the “Codes”) for GPAI models, which address key requirements on transparency, copyright compliance, and model safety and security respectively.
These documents apply primarily to entities developing GPAI models – whether directly, through third-party contractors, or by making “substantial modifications” to existing models (as detailed further below). In practice, very few entities beyond large technology companies are actively involved in GPAI model development. Entities merely using GPAI models, or integrating them into downstream AI systems, fall outside the scope of the Act’s GPAI-specific obligations. Similarly, entities releasing existing models through open-weight distributions likely remain outside the GPAI requirements, provided the release does not entail making “substantial modifications” to the underlying models.
Notably concise – two of the Codes span fewer than 10 pages each – these documents underline the substantial work still ahead for the EU in refining its comprehensive AI regulatory strategy. Nevertheless, amidst ongoing discussions regarding the future trajectory of the Act, the release of these Codes and guidance serves as a significant indicator of the EU’s evolving approach and likely regulatory priorities moving forward.
What are GPAI Models?
GPAI Models are AI models that display significant generality, are capable of competently performing a wide range of distinct tasks, and can be integrated into a variety of downstream systems or applications.
The EU AI Act’s GPAI model requirements are set out in Chapter V, and they come into effect on 2 August 2025 (except for models placed on the market before then, which have until 2 August 2027 to comply). The exact requirements vary depending on whether the GPAI model is provided under a closed or open licence. The Act also imposes additional, stringent requirements on a limited universe of GPAI models with “systemic risk”, a category delineated by reference to the magnitude of computing power used to train the model.
The GPAI Model Provider Guidance
The Commission’s GPAI model provider guidance elaborates on how key terms within the Act should be interpreted, providing illustrative examples. Primarily addressing clear-cut scenarios, the guidance remains somewhat limited on nuanced or complex cases. Nonetheless, there are a few areas where the Commission provides some helpful additional colour. For example:
- Key Definitions: The guidance contains additional information on key definitions, including “GPAI model”, “GPAI model with systemic risk”, “placing [AI models] on the market”, and the scope of the (limited) exemption for models released under “free and open-source licences”.
- Downstream Modifier: Downstream users making “substantial modifications” to GPAI models assume provider obligations for those modified models. The Commission introduces an indicative criterion: a modification is substantial where the training compute used for the modification exceeds one-third of the original model’s training compute. Where the original compute value is unknown, the fallback is one-third of the classification threshold for GPAI models (currently 10^23 FLOP) or, for GPAI models with systemic risk, one-third of that threshold (currently 10^25 FLOP). The sketch after this list illustrates the arithmetic.
- Additional Compliance Period: For GPAI models placed on the market before 2 August 2025, providers have until 2 August 2027 to comply fully with GPAI requirements. Acknowledging practical difficulties in retrospectively enforcing compliance, the guidance explicitly states that providers are not required to retrain or unlearn models if such actions are impractical, disproportionately burdensome, or if relevant training data information is unavailable.
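To make the indicative criterion concrete, below is a minimal Python sketch of the one-third compute comparison. The FLOP thresholds are those cited in the guidance; the function name, parameter names, and the example figures are illustrative only, not part of the Commission’s materials.

```python
# Indicative "one-third of training compute" criterion for substantial
# modification, per the Commission's GPAI guidance. Names are illustrative.

GPAI_THRESHOLD_FLOP = 1e23      # current indicative threshold for GPAI models
SYSTEMIC_THRESHOLD_FLOP = 1e25  # current threshold for systemic-risk models

def is_substantial_modification(modification_compute_flop: float,
                                original_compute_flop: float | None = None,
                                systemic_risk: bool = False) -> bool:
    """Return True if a modification is indicatively 'substantial'."""
    if original_compute_flop is not None:
        # Compare against one-third of the original model's training compute.
        return modification_compute_flop > original_compute_flop / 3
    # Original compute unknown: fall back to one-third of the relevant
    # classification threshold instead.
    baseline = SYSTEMIC_THRESHOLD_FLOP if systemic_risk else GPAI_THRESHOLD_FLOP
    return modification_compute_flop > baseline / 3

# Example: fine-tuning with 5e22 FLOP on a model of unknown training compute
# exceeds one-third of the 1e23 FLOP GPAI threshold (~3.3e22), so it would
# indicatively count as a substantial modification.
print(is_substantial_modification(5e22))  # True
```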
The Codes of Practice
The Codes of Practice are voluntary measures intended to assist entities in operationalising the GPAI model requirements in Chapter V of the EU AI Act. The drafts will now be reviewed by Member States and the Commission in the coming weeks.
Like many global regulators, the EU recognises the delicate balance between fostering economic and societal benefits from AI and mitigating associated risks. As such, the Act was deliberately published with limited details on the GPAI requirements (they account for merely 9 of the Act’s 144 pages) to give the EU time to continue refining its regulatory approach ahead of 2 August 2025. The Codes are therefore intended to clarify the regulatory expectations for these models; importantly, compliance with these Codes does not conclusively demonstrate compliance with the Act, although adherence is likely to create a rebuttable presumption of compliance.
1. Transparency Code: Annex XII of the Act lists the information that GPAI model providers must disclose to downstream users of the model. The Transparency Code essentially provides a template table of these information categories; it does not elaborate on the types of information, or the level of detail, required in the table. The Code also flags that:
- the completed table and any other requested information should be provided to relevant EU authorities upon request;
- downstream users of the model should receive relevant information from the table, subject to confidentiality requirements; and
- the model provider should publish the name of a contact person on its website.
2. Copyright Code: Article 53(1)(c) of the Act requires GPAI model providers to implement a policy ensuring compliance with Union copyright and related rights. Although the Copyright Code was expected to clarify this requirement, it remains notably lightweight – unsurprising given the contentious nature of copyright issues surrounding AI, with many related cases currently before the courts. Instead, the Code emphasises clear compliance areas, such as:
- Ensuring that web crawlers used by providers for data mining or model training do not circumvent technical measures that websites use to protect data from unlawful access (including subscription and paywall protections), and that they adhere to robots.txt instructions (see the sketch after this list).
- Excluding from crawling any websites recognised by courts or public authorities as persistently infringing copyright on a commercial scale.
- Implementing appropriate safeguards to prevent AI models from generating outputs that merely reproduce copyrighted material.
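As an illustration of the robots.txt point above, the following minimal Python sketch uses the standard library’s urllib.robotparser to check whether a given user agent may fetch a URL before crawling it. The crawler name and URLs are hypothetical; this shows one common mechanism for honouring robots.txt, not a compliance standard endorsed by the Code.

```python
# Minimal robots.txt check before crawling a page for training data.
# The user-agent string and URLs below are hypothetical examples.
from urllib.robotparser import RobotFileParser

USER_AGENT = "ExampleTrainingBot"  # hypothetical crawler identifier

def may_fetch(url: str, robots_url: str) -> bool:
    """Return True if robots.txt permits USER_AGENT to fetch the URL."""
    parser = RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # download and parse the site's robots.txt
    return parser.can_fetch(USER_AGENT, url)

if may_fetch("https://example.com/articles/1",
             "https://example.com/robots.txt"):
    pass  # proceed with the request; otherwise skip the page
```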
3. Safety & Security Code: The Safety and Security Code provides detailed guidance for GPAI model providers to systematically manage and mitigate systemic risks, elaborating on the requirements in Article 55 of the Act.
- Providers are expected to adopt a comprehensive Safety and Security Framework that involves creating, implementing, and regularly updating processes for assessing systemic risks throughout the model lifecycle. This includes clearly defined responsibilities for managing these risks, conducting rigorous model evaluations, and implementing appropriate safety and security mitigations.
- The Code emphasises ongoing assessment and mitigation efforts, collaboration with stakeholders, and reporting obligations, notably requiring providers to document and communicate serious incidents to relevant authorities.
- Importantly, the Code contains detailed guidance on identifying systemic risks – including what constitutes a systemic risk – and on processes providers may wish to implement to determine an acceptable level of risk.
Key Takeaways
Despite ongoing uncertainty about whether aspects of the Act may be revised in the Commission’s anticipated omnibus package of digital simplification rules (expected in late 2025), the published guidance and Codes clearly indicate the EU’s current intention to continue implementing and enforcing the Act according to its original schedule.
However, the concise nature of the documents highlights that there are many unresolved questions around the EU’s approach to these models, particularly regarding copyright restrictions. The guidance and Codes also leave open questions on specific applications, such as open-weight releases of existing models.
Consequently, the guidance and Codes are more useful as indicators of the EU’s intended direction than as a source of comprehensive or immediately actionable clarity. It is clear that the practical interpretation and enforcement of the Act’s requirements will continue to evolve significantly over the coming months and years.
To subscribe to the Data Blog, please click here.
The Debevoise STAAR (Suite of Tools for Assessing AI Risk) is a monthly subscription service that provides Debevoise clients with an online suite of tools to help them fast-track their AI adoption. Please contact us at STAARinfo@debevoise.com for more information.
The cover art used in this blog post was generated by ChatGPT.