The first wave of the EU AI Act’s requirements came into force on 2 February 2025, namely:

  • Prohibited AI: the ban on the use and distribution of prohibited AI systems, and
  • AI Literacy: the requirement to ensure staff using and operating AI possess sufficient AI literacy.

All businesses caught by the EU AI Act’s jurisdictional scope – which is potentially very broad and may even exceed the scope of the GDPR – are now required to comply with these obligations.

As we have previously discussed, the EU AI Act (the “Act”) is the EU’s flagship piece of AI regulation and the self-proclaimed world’s first comprehensive AI law. It adopts a risk-based approach, imposing new regulatory requirements on AI systems that fall within four categories: (i) “prohibited” or “unacceptable risk” systems; (ii) “high risk” systems; (iii) systems that trigger transparency obligations; and (iv) general-purpose AI. While the Act officially entered into force on 2 August 2024, its provisions take effect in stages over a 36-month period, starting with the ban on “unacceptable risk” AI systems and the AI literacy requirements, both of which took effect on 2 February 2025.

This blog post discusses the requirements that are now in force, and what they mean for developers (or “providers”) and users (or “deployers”) of AI systems.

Prohibited AI

Which AI Systems are Banned: Under the Act, the following practices are considered to present an “unacceptable risk” to individuals’ rights and freedoms, and are therefore prohibited:

  1. AI systems that deploy subliminal, manipulative or deceptive techniques with the objective or effect of distorting the behaviour of a person or group of people;
  2. AI systems that exploit individuals’ vulnerabilities (e.g., due to age, disability, or social or economic situation) to distort their behaviour;
  3. AI systems that use biometric categorisation to infer certain sensitive characteristics (such as race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation);
  4. AI systems that infer the emotions of people in the workplace or educational institutions based on their biometric data (except for medical or safety reasons);
  5. AI systems that evaluate or classify individuals based on their social behaviour or personality traits, leading to unfair or detrimental treatment;
  6. AI systems that involve the untargeted scraping of facial images from the internet or CCTV footage to create or expand facial recognition databases; and
  7. AI systems that assess or predict the risk of an individual committing a criminal offence based solely on profiling or personality traits (except where the output is used to support a human-based assessment).

Additionally, specific rules and exceptions apply to law enforcement’s use of AI, including the use of real-time remote biometric identification systems in publicly accessible spaces.

Notably, while the scope of some of the prohibitions is relatively self-evident, it is less clear which use cases others (e.g., #1 and #2) may capture. The European Commission has since published (non-binding) guidance on the prohibitions, though the exact scope of these requirements will likely be refined through the practical implementation of the Act over the coming years. Nonetheless, we expect that the following categories of use cases will, at a minimum, receive significant scrutiny:

  • using AI to generate and insert barely perceptible images or sounds into audio or video content to influence people’s purchasing decisions;
  • using AI in job interviews to analyse individuals’ facial expressions or body language to infer characteristics like trustworthiness or leadership; and
  • using AI to monitor customer service agents’ vocal tone, rather than the content of the audio, to evaluate their performance when handling customer complaints.

Who has to comply: The Act’s ban has a broad territorial scope – it applies to all providers, deployers, importers and distributors of AI systems that are established or operate in the EU, as well as to providers and deployers (regardless of their place of establishment) whose AI systems affect users in the EU or whose output is used in the EU.

Penalties: In future, breaching these prohibitions could result in administrative fines of up to €35m or up to 7% of total worldwide annual turnover for the preceding financial year (whichever is higher). However, national competent authorities will not be empowered to impose such penalties until 2 August 2025, i.e., in six months’ time.

AI Literacy

What is the requirement: Providers and deployers of AI systems must ensure, “to their best extent”, a sufficient level of AI literacy amongst staff who operate and use AI systems within the business. The requirement appears to focus on ensuring staff can understand and identify AI-related risks (including having a basic understanding of the Act), rather than on up-skilling staff to use the technology to its greatest potential, as has been the focus in other jurisdictions such as the UK.

How will it apply in practice: Currently, there is little further information on what is expected in practice, or on how this requirement will be applied or enforced. The AI Office is expected to publish formal guidance and best practices soon, and has already published a repository showing how certain businesses participating in the AI Pact initiative are approaching the requirement, to assist with industry benchmarking. Nonetheless, different levels of training and education will likely be required for (1) those involved in overseeing the practical development and implementation of AI within the business, (2) those involved in AI governance oversight, and (3) staff who use the technology in connection with their roles. Additional measures will likely be required for those involved in higher-risk AI uses, including where an AI system is used in connection with vulnerable or at-risk individuals.

How to Respond

Businesses subject to the EU AI Act should review their AI systems, and the uses of those systems, to ensure that they are not engaged in any prohibited AI practices. They should also train staff involved in AI development or use on AI risks and responsibilities. Given the current lack of clarity over the scope of these requirements, there are only a few concrete compliance steps that businesses must take now. But companies should monitor for further guidance from the AI Office, which should be published soon, and revisit their AI governance policies and procedures accordingly.

Businesses should be mindful of the following upcoming key developments in the roll-out of the EU AI Act:

  • On 2 August 2025, new obligations on providers of general-purpose AI models will take effect.
  • On 2 August 2026, most of the obligations relating to “high-risk” AI systems, as well as the transparency requirements, will take effect, with limited exceptions. For more discussion of the content of these obligations, see here.

Finally, the lack of clarity over the EU AI Act’s requirements, combined with the fluid and divergent AI regulatory landscapes in other jurisdictions, means that companies should focus their AI compliance and governance efforts on reducing the risks associated with their high-value applications of AI. That is a better use of resources than trying to achieve compliance with draft laws or regulations that are years away from being enforceable and may change significantly between now and then.

The cover art used in this blog post was generated by DALL-E.

Authors

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Matthew Kelly is a litigation counsel based in the firm’s New York office and a member of the Data Strategy & Security Group. His practice focuses on advising the firm’s growing number of clients on matters related to AI governance, compliance and risk management, and on data privacy. He can be reached at makelly@debevoise.com.

Martha Hirst is an associate in Debevoise’s Litigation Department based in the London office. She is a member of the firm’s White Collar & Regulatory Defense Group, and the Data Strategy & Security practice. She can be reached at mhirst@debevoise.com.

Samuel Thomson is a trainee associate in the Debevoise London office.