Despite much fanfare, and a process that seems to edge ever nearer to completion, the EU AI Act still has not been formally adopted.

The Act still has to undergo a final European Council vote before it can be published in the Official Journal; it will then enter into force 20 days after publication, which is widely expected to occur sometime in Q2 2024. However, the key effective dates for the Act (generally staggered in 6- to 12-month increments) will not be known with precision until it is published in the Official Journal. Even then, many of the Act’s substantive mechanics and requirements will remain uncertain: as is often the case with EU legislation, core elements of the Act are not set out in the law itself, and instead are delegated for later determination and implementation by authorized bodies.

So, what should businesses do while we wait?

There is a tremendous amount of pressure on legal and compliance departments to act now, driven in part by their experience with the multi-year implementation process that many businesses required for the GDPR. But the scope of the EU AI Act is meaningfully different, and the Act may end up having much less of an impact on many businesses than the GDPR did. As such, there is a real risk that trying now to anticipate and satisfy the full measure of the Act’s eventual requirements will result in wasted resources and lost opportunities. Businesses may find that they will spend significantly more as first movers than they would as timely followers.

Similarly, at this point, there may be limited utility in trying to determine whether your business’s use of AI even falls within the territorial scope of the Act, an analysis that is likely to change over the next several months as businesses’ use of AI continues to evolve.

With that in mind, and for the time being, all businesses should consider prioritizing two key actions.

First, consider whether any current or planned AI systems involve “prohibited” AI practices defined by the Act, and (if so) implement a plan for ending them.  Excepting certain law enforcement-specific AI systems, the prohibited practices include:

  • AI Systems that use biometric data for either (a) emotion recognition in the workplace or (b) categorizing individuals as members of enumerated protected classes.
  • AI Systems that distort behaviour through either (a) subliminal, manipulative, or deceptive techniques, or (b) exploitation of vulnerabilities due to a person’s age, disability, or social or economic situation.
  • AI Systems that create social scores that lead to unrelated or unjustified/disproportionate detrimental or unfavourable treatment.
  • AI Systems that assess or predict criminal conduct based solely on profiling or on the person’s personality traits or characteristics.
  • AI Systems that conduct untargeted data scraping of the internet or CCTV for the purposes of expanding facial recognition databases.

These practices will be banned outright six months after the EU AI Act enters into force (i.e., likely within 2024), and violations of these prohibitions are likely to be early and easy enforcement priorities for regulators. Just as importantly, these prohibited practices are likely to be high risk in almost any regulated jurisdiction, meaning it may be worth considering ending these uses of AI irrespective of whether they ultimately fall within the scope of the EU AI Act.

Second, businesses that have not done so already should spend time now developing a controls framework for AI that focuses on managing operational risk and that prioritizes safe, secure, and high-value uses of AI. This should include a system for identifying uses of AI, assessing the risks of those uses, and documenting approvals and risk acceptance for any uses that go into production. The goal should be a controls framework that works for your business and your existing policies and procedures, not just one that meets expected future regulatory requirements; this can take time to get right, which is another reason to start now.



The cover art used in this blog post was generated by DALL-E.


Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at


Matthew Kelly is a litigation counsel based in the firm’s New York office and a member of the Data Strategy & Security Group. His practice focuses on advising the firm’s growing number of clients on matters related to AI governance, compliance and risk management, and on data privacy. He can be reached at


Martha Hirst is an associate in Debevoise's Litigation Department based in the London office. She is a member of the firm’s White Collar & Regulatory Defense Group, and the Data Strategy & Security practice. She can be reached at