The Council of the EU and the European Parliament have reached an agreement on delays and amendments to the EU AI Act, as part of the EU’s ongoing digital simplification programme. Most notably, compliance obligations for stand-alone high-risk AI systems will now come into effect on 2 December 2027 rather than 2 August 2026, as originally planned.

Despite the attention surrounding the proposal, the agreed amendments are relatively modest. For most businesses, the main practical effect will be additional time to prepare for the Act’s high-risk and transparency obligations, rather than a material change to substantive obligations. For background, see our summary of the EU AI Act’s key requirements and our overview of the first wave and second wave of provisions that are already in force.

While the Council and Parliament still need to complete the formal adoption process before the amendments become law (which is expected to happen in the coming weeks), the key changes agreed include:

  • High-risk AI systems: There are several changes:
      ◦ The obligations for most stand-alone high-risk AI systems – including systems used in employment and recruitment, biometrics, critical infrastructure, consumer credit, and life and health insurance – will now come into effect on 2 December 2027.
      ◦ High-risk AI systems embedded in certain regulated products – including medical devices, personal protective equipment, vehicles and watercraft – will be subject to the requirements from 2 August 2028.
      ◦ A new targeted exemption will apply to high-risk AI systems in certain machinery products, where the EU AI Act requirements overlap with existing sector-specific health and safety requirements.
      ◦ The agreement reinstates the requirement to register AI systems in the EU database where a provider considers that a system falls within a high-risk category but qualifies for an exemption from the high-risk requirements.

  • Transparency requirements: Providers of AI systems that generate synthetic content will have until 2 December 2026 to ensure outputs are marked in a machine-readable format and detectable as AI-generated or manipulated.
  • Sexually explicit deepfakes and CSAM: A new prohibited AI practice will cover systems that generate non-consensual sexual or intimate content, or child sexual abuse material. This prohibition will apply from 2 December 2026.
  • Bias detection: Businesses may process special category personal data where strictly necessary to detect and correct bias in AI systems, provided appropriate safeguards are implemented. This should help reduce tension between the EU AI Act’s bias-monitoring expectations and the GDPR.
  • AI literacy: The AI literacy obligation remains, but appears to have been softened. The focus is now on businesses taking steps to support AI literacy, with assistance from the Commission and Member States.
  • SMEs and small mid-caps: Some exemptions and reduced obligations for SMEs will be extended to small mid-cap companies, including simplified quality management system requirements and reduced fees and fines.

What should businesses do now?

For most businesses, the amendments primarily buy time. The omnibus proposal is not a major reset of the EU AI Act. It is better understood as a targeted simplification package that gives businesses longer to prepare, while also giving EU institutions more time to produce the guidance and standards needed for meaningful compliance.

The delay to the high-risk AI system requirements is particularly important, as many businesses have been waiting for further guidance and standards before they can, in practice, build EU AI Act-compliant governance frameworks.

In the meantime, businesses may want to:

  1. update AI policies and risk assessment procedures to capture the new prohibited category for sexually explicit deepfakes and CSAM, where relevant; and
  2. refresh EU AI Act implementation trackers to reflect the new deadlines for high-risk and transparency obligations.

****

To subscribe to the Data Blog, please click here.

The cover art used in this blog post was generated by Gemini 3 Pro.

The Debevoise STAAR (Suite of Tools for Assessing AI Risk) is a monthly subscription service that provides Debevoise clients with an online suite of tools to help them responsibly fast-track their AI adoption. Please contact us at STAARinfo@debevoise.com for more information.


Author

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Author

Robert Maddox is a partner in Debevoise & Plimpton LLP’s Data Strategy & Security practice, based in London. In 2021 he was named to Global Data Review’s “40 Under 40” and is described as “a rising star” in cyber law by The Legal 500 US (2022). His practice focuses on cybersecurity incident preparation and response, internal investigations and regulatory defence. Mr. Maddox also advises on data strategy and compliance in the context of emerging technologies, including AI, and operational resilience matters. He can be reached at rmaddox@debevoise.com.

Author

Martha Hirst is an associate in Debevoise’s Litigation Department, based in the London office. She is a member of the firm’s White Collar & Regulatory Defense Group and the Data Strategy & Security practice. She can be reached at mhirst@debevoise.com.