On 2 August 2025, the second wave of requirements under the EU AI Act (the “Act”) came into force, following the first implementation phase six months earlier. This latest set of requirements primarily covers General Purpose AI (“GPAI”) model providers and also sets out operational requirements for EU and Member State oversight and enforcement bodies.
While these new obligations are unlikely to be directly relevant for most clients, they represent significant milestones in shaping the EU’s evolving interpretation of and approach to the Act. Here are five key things for businesses to know.
1. The GPAI model rules are in force. GPAI models are AI models that display significant generality, are capable of competently performing a wide range of distinct tasks, and can be integrated into a variety of downstream systems or applications. The GPAI model requirements – set out in Chapter V – are now in force for models “placed on the market” on or after 2 August 2025. For models placed on the market before then, developers have until 2 August 2027 to comply.
The exact requirements vary depending on whether the GPAI model is provided under a closed or open licence, but broadly the Act requires GPAI model developers to: (1) prepare technical documentation, including on the model’s training data, to be shared with EU regulators on request; (2) prepare certain information and documentation about the model for downstream providers that integrate the model into AI systems; (3) implement a policy to comply with EU copyright law; and (4) publish a summary of the content used to train the model. The Act also imposes additional, stringent requirements on a limited universe of GPAI models deemed to pose “systemic risk”, a category delineated by reference to the amount of computing power used to train the model.
The EU recently released GPAI model guidelines and a Code of Practice detailing how these rules should be interpreted and applied. See our blog post for more information.
2. But the Commission’s enforcement powers do not apply until 2 August 2026. The European Commission’s GPAI model enforcement powers under the Act do not come into force until 2 August 2026. These include the powers to request information about GPAI models or conduct evaluations of them, to require GPAI model providers to take measures to comply with the Act, to restrict or withdraw a GPAI model from the EU market, and to impose fines of up to the higher of 3% of global annual turnover or EUR 15 million.
3. EU & Member State regulators should be operational, but many lag behind schedule. Several regulatory oversight and enforcement bodies are now required to be fully operational. At the EU level, these include the AI Office, the European Artificial Intelligence Board, and the Scientific Panel of Experts. Additionally, Member States are required to have designated their National Competent Authorities and establish the various bodies that will be involved in administering overseeing the “high-risk AI system” conformity assessments from 2 August 2026. While substantial progress has been achieved at the EU level in establishing and operationalising these oversight bodies, Member States have generally lagged behind. This delay further compounds the concerns over the potential discrepancies in the Act’s application between different Member States.
4. For most businesses, the key effective date is 2 August 2026, when the high-risk requirements take effect. For most businesses, the main source of exposure to the Act will likely be the “high-risk” AI system requirements (to the extent applicable), which come into force on 2 August 2026. However, substantial uncertainty remains regarding both the precise scope of the “high-risk” categories and the content of the associated compliance obligations. Detailed guidance was expected well ahead of this deadline, but although the Commission has launched a public consultation on these rules, the process is already delayed, and recent indications suggest that official guidance may only become available after the compliance requirements have come into force. This raises questions about the practical implications for businesses, particularly those that would need to make significant operational adjustments to achieve full compliance with the Act.
5. But the Act may change in the interim. Amendments to the EU AI Act are anticipated as part of the European Commission’s digital simplification package, expected by the end of this year. Although it seems clear that adjustments will be made, their scope and extent remain uncertain, including whether the amendments will be limited to minor modifications that better align the Act with existing EU legislation, targeted adjustments to ease compliance burdens for SMEs, or more extensive revisions.
Given this uncertainty, businesses with high-risk AI systems may consider continuing to map their potential EU AI Act touchpoints, while holding off on making any substantial governance changes for those systems, pending further clarity over the direction of travel. Premature action in this evolving regulatory landscape could lead to unnecessary expenditures of time and resources, particularly if significant changes to the Act materialise.