The EU AI Act (the “Act”) has made it through the EU’s legislative process and has passed into law today; it will enter into force on 1 August 2024. Most of the substantive requirements will come into force two years later, from 2 August 2026, with the main exception being “Prohibited” AI systems, which will be banned from 2 February 2025.

Despite initial expectations of a sweeping and all-encompassing regulation, the final version of the Act reveals a narrower scope than some initially anticipated.

  • What we know about the Act: As expected, the Act’s text has not materially changed since the final draft was released in February 2024. It defines “AI” in broad terms and has a wide territorial scope. However, the Act only imposes regulatory requirements on AI systems that fall within four risk-based categories: (1) Prohibited Risk AI systems; (2) High Risk AI systems; (3) AI systems that trigger transparency obligations; and (4) general purpose AI systems, including those presenting “systemic risk” (“GPAI”). The most onerous requirements apply to GPAI developers and to a limited number of Prohibited and High Risk AI use cases. Consequently, outside of a small number of sectors whose core business operations are the subject of targeted regulation in the Act (specifically generative AI model development, life and health insurance, consumer lending, hiring and employment, law enforcement and defence contractors, and education), it seems possible that the EU AI Act will not have an immediate material impact on the AI plans and governance strategies of many companies.
  • What is still unclear: The Act only contains high-level descriptions of the requirements that will be imposed on AI systems that fall within its four risk categories. Many of the Act’s substantive mechanics and requirements will be fleshed out in various pieces of secondary legislation and additional guidance over the next 24 months. Consequently, as discussed in our previous blog post, for many of the obligations, there is currently not enough detail on the scope and content of the specific requirements for businesses to have a programme that can reasonably assure compliance with the Act. Trying to fully build out such a programme now could result in wasted resources and lost opportunities, and some businesses may find that they will spend significantly more as first movers than they would as timely followers.

Despite the remaining uncertainty, because the Act does cover certain uses of AI in the workplace that are likely to be widespread, and given the time it can take to develop an effective controls framework that meshes with existing policies and procedures, businesses should consider establishing a governance framework to ensure they are able to identify and appropriately govern covered uses of AI. This includes identifying whether any current or planned AI systems that fall within the scope of the Act involve “prohibited” AI practices as defined by the Act and, if so, implementing a plan for ending them within the next six months.

This blog post provides further information on the final content of the EU AI Act, and steps businesses may want to consider taking now to prepare for compliance. We will be publishing further blog posts analysing certain key aspects of the Act in due course.

What AI Systems are covered by the Act?

The EU AI Act casts a wide net in defining the scope of AI systems that may be subject to the Act.

AI Definition & Exceptions

The Act adopts the OECD’s definition of AI:

A machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

However, certain exemptions apply. Most notably, the Act does not apply to AI systems dedicated solely to scientific research and development, or to any research, testing and development of AI systems or models prior to being placed on the market or into service.

Territorial Scope

The Act has notably wide extraterritorial reach. In addition to AI providers, deployers, importers and distributors that are established or operate in the Union, the Act also covers AI providers and deployers, regardless of where they are established, provided their AI systems affect users within the EU or the output of the AI system is used within the EU. Consequently, on a plain reading, the Act appears to apply significantly more broadly than other pieces of EU legislation, such as the GDPR.

Material Scope

The Act’s broad territorial reach is tempered by a narrower material scope. AI systems caught by the Act will only be subject to restrictions if they fall into one of its four categories: (1) Prohibited Risk AI systems; (2) High Risk AI systems; (3) AI systems that trigger transparency obligations; and (4) general purpose AI systems, including those presenting “systemic risk”.

Prohibited Risk: A very narrow list of AI systems will be banned.

The EU AI Act contains a narrow list of AI systems that will be banned within the EU from 2 February 2025. Setting aside certain law enforcement-specific prohibitions, the prohibited practices include:

  • AI systems that use biometric data for either (a) emotion recognition in the workplace or in educational institutions, or (b) categorizing individuals as members of enumerated protected classes.
  • AI systems that materially distort behaviour through either (a) subliminal, manipulative or deceptive techniques, or (b) exploitation of vulnerabilities due to a person’s age, disability, or social or economic situation.
  • AI systems that create social scores leading to detrimental or unfavourable treatment that is unjustified or disproportionate, or that arises in contexts unrelated to those in which the underlying data was collected.
  • AI systems that assess or predict the risk of a person committing a criminal offence based solely on profiling or on the person’s personality traits or characteristics.
  • AI systems that conduct untargeted scraping of facial images from the internet or CCTV footage for the purpose of expanding facial recognition databases.

Violations of these prohibitions will likely be early enforcement priorities for regulators.  Therefore, notwithstanding their potentially limited application to most private entities, businesses should ensure that the Prohibited Risk categories are included within any AI risk assessment processes, so that any such AI systems that are within the scope of the Act are identified early on in the AI procurement or development process and removed.

High Risk: A small number of uses will be subject to additional compliance requirements.

What are the High Risk AI Systems?

There are two categories of High Risk AI systems:

  • EU Harmonisation Legislation: Certain AI systems that are in themselves, or are used as a safety component in, a product subject to certain EU harmonisation legislation are automatically considered to be High Risk. This is a very specific list that covers a range of different products including machinery, lifts, radio equipment, watercraft, aircraft, motor vehicles, medical devices and personal protective equipment.
  • Significant Risk of Harm to Individuals: The EU AI Act also sets out an exhaustive list of other AI systems that are automatically considered to be High Risk unless otherwise demonstrated (see below). It is worth noting that although the Act refers to them as AI systems, these categories are defined by reference to both the capabilities of the specific AI tool in question and its intended use case (i.e., the classification is not system-specific). The list includes:
      • Consumer Credit: AI systems used to assess individuals’ creditworthiness or establish their credit scores (except for detecting financial fraud).
      • Underwriting: AI systems used for the risk assessment and pricing of individuals’ life or health insurance.
      • Education & Vocational Training: AI systems used to determine access or admission to education, evaluate education levels, or evaluate learning outcomes.
      • Employment: AI systems used for recruitment, promotions, terminations, evaluations and task allocations.
      • Biometrics: AI systems used for very specific types of remote biometric identification, biometric categorization, or emotion recognition outside of the workplace.
      • Critical Infrastructure: AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating and electricity.
      • Public Authorities: Various AI systems used by or in connection with law enforcement (in the detection, investigation or prosecution of criminal offences), migration, asylum and border control management, the administration of justice and democratic processes, and assessing eligibility for public assistance benefits and services.

The potential applicability of the High Risk categories, and the impact of their related compliance requirements, will therefore vary significantly depending on the industry in which a business operates. However, it is worth noting that the EU Commission has the power, and a significant amount of discretion, to extend this list in the future, so the impacts could evolve.

Are there any exceptions?

Yes: for the AI systems in the second category above, the High Risk obligations will not apply if the AI system does not undertake profiling of individuals and is otherwise assessed as not posing a significant risk of harm to individuals in practice. This may be the case where the AI system is intended:

  • To perform a narrow procedural task;
  • To improve the result of a previously completed human activity;
  • To complement a previously completed human decision, rather than replace or influence that decision without proper human review; or
  • To perform only a preparatory task for an otherwise high-risk use case.

However, even where one of these exemptions applies, the AI system will still have to be registered in a separate EU database.

What are the requirements for High Risk AI?

The EU AI Act contains a long list of additional governance, compliance and documentation requirements that High Risk AI systems will have to comply with, such as risk management systems, data governance mechanisms, serious incident recordkeeping, conformity assessments, and human oversight requirements. Some of these requirements will apply to the developer, while others will apply downstream to distributors, importers and deployers of the AI systems respectively.

However, the Act contains only very high-level descriptions of what these requirements will include in practice, and for many obligations, there is currently insufficient information on the scope and content of these requirements to be able to take any meaningful steps towards compliance. Instead, the details will be fleshed out in several pieces of secondary legislation and subsequent guidance over the next 24 months. Businesses with High Risk AI systems therefore will need to monitor for this additional information before they can start tailoring their AI compliance programmes to meet the Act’s requirements.

Certain AI Systems Have Transparency Obligations, Even if not High Risk

The EU AI Act imposes transparency obligations on certain AI systems that interact directly with individuals, or generate content that could give rise to risks of impersonation or deception. In particular:

  • Providers of AI systems that are intended to interact directly with individuals must ensure that the systems are designed and developed in a way that informs individuals that they are interacting with an AI system (unless this is obvious).
  • Providers of certain AI systems generating synthetic audio, image, video or text content shall ensure that the outputs are marked in a machine-readable format and are detectable as artificially generated or manipulated (see the illustrative sketch after this list).
  • Deployers of emotion recognition or biometric categorization systems shall provide notice to the affected individuals, and process any personal data captured in accordance with the GDPR.
  • Deployers of certain AI systems generating synthetic audio, image or video content shall disclose that the content has been artificially generated or manipulated.
  • Deployers of certain AI systems generating or manipulating text content that is published to inform the public on matters of public interest shall disclose that the text has been artificially generated or manipulated.
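
The Act does not itself prescribe how the machine-readable marking of synthetic content must be implemented; technical standards and guidance are still to come, and approaches such as watermarking or provenance metadata (for example, C2PA-style manifests) are all possibilities. Purely as an illustrative sketch, assuming a simple JSON “sidecar” approach of our own devising (none of the field names below come from the Act), a provider might flag generated content along the following lines:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def write_provenance_sidecar(content_path: str, model_name: str) -> Path:
    """Write a JSON sidecar file flagging a piece of content as AI-generated.

    Illustrative only: the field names and the sidecar approach are our own
    assumptions, not a format mandated by the EU AI Act or any standard.
    """
    content = Path(content_path).read_bytes()
    record = {
        "ai_generated": True,                                   # machine-readable flag
        "generator": model_name,                                # which model produced it
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content).hexdigest(),  # ties the label to the file
    }
    sidecar = Path(content_path).with_suffix(".provenance.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar
```

For example, calling write_provenance_sidecar("banner.png", "image-gen-model-v2") would write banner.provenance.json next to the generated file; in practice, embedded watermarks or an established provenance standard may be preferable to a detachable sidecar file.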

As with High Risk requirements, further details will be fleshed out in future codes of practice and guidance documentation over the coming months.

Additional Requirements for General Purpose AI Systems (“GPAI”)

Finally, the EU AI Act contains additional requirements for GPAI, or general purpose AI models that are designed to competently perform a wide range of distinct tasks, think abstractly and adapt to new situations. The exact requirements vary depending on whether the GPAI is provided through a closed or open licence, but generally GPAI providers will be required to:

  • Produce certain technical documentation, including details of the training and testing process and evaluation results;
  • Provide certain information and documentation to downstream providers that intend to integrate the GPAI model into their own AI system;
  • Establish a policy to comply with the EU Copyright Directive; and
  • Publish certain information on the GPAI model’s training data.

The Act also imposes more stringent requirements on a limited universe of GPAI models presenting “systemic risk”, a category which is delineated primarily by reference to the amount of computing power used to train the model. Based on our current understanding of this test, it seems likely that very few, if any, current GPAIs will meet these thresholds. Nonetheless, the obligations for providers of such models include: performing model evaluation and adversarial testing, assessing and mitigating possible systemic risks, tracking, documenting and reporting serious incidents, and ensuring adequate cybersecurity protections. The AI Office will publish codes of practice containing further details.
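
To make the compute-based test more concrete: the Act presumes that a GPAI model presents systemic risk where the cumulative compute used for its training exceeds 10^25 floating-point operations, a threshold the Commission may adjust over time. A minimal sketch of that presumption, with an illustrative input of our own choosing, might look as follows:

```python
# The Act's initial presumption threshold for "systemic risk" GPAI models,
# expressed as cumulative training compute (floating-point operations).
# The Commission may revise this figure over time.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25


def presumed_systemic_risk(cumulative_training_flops: float) -> bool:
    """Return True if a GPAI model is presumed to present systemic risk
    under the compute-based test (other designation routes are ignored here)."""
    return cumulative_training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD


# Hypothetical example: a model trained with roughly 3e24 FLOPs falls
# below the presumption threshold.
print(presumed_systemic_risk(3e24))  # False
```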

How to Prepare.

While many of the details of the EU AI Act’s requirements will be fleshed out in secondary legislation and guidance over the next 24 months, given the increasing pressures on businesses to act now, and the time it can take to develop and implement effective governance frameworks, there are steps that businesses may want to consider taking to prepare.

At this point, there may be limited utility in trying to determine that your business’s use of AI falls outside the territorial scope of the Act. For most companies, this analysis is likely to change over the next several months as their use of AI continues to evolve. Moreover, additional guidance is likely to be published on how the Act’s territorial scope is intended to apply in practice. Instead, businesses may want to first focus on whether their AI systems fall within the material scope of the Act, and then return to the territorial scope analysis for any potentially covered AI systems as needed.

Consequently, as an initial step, businesses should start considering whether any current or planned AI systems fall into any of the four risk categories identified above, document this assessment, and create an inventory. As a practical matter, documentation may be important in some cases to show which AI systems are – and are not – covered by the Act’s scope.
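
As a purely illustrative sketch of what an inventory entry might capture (the fields, categories and example values below are our own assumptions, not a record format required by the Act):

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class ActRiskCategory(Enum):
    """The four risk categories discussed above, plus an out-of-scope bucket."""
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    TRANSPARENCY = "transparency_obligations"
    GPAI = "general_purpose_ai"
    OUT_OF_SCOPE = "out_of_scope"


@dataclass
class AISystemInventoryEntry:
    """One documented assessment of an existing or planned AI system."""
    name: str
    business_owner: str
    intended_use: str
    risk_category: ActRiskCategory
    assessment_rationale: str          # why the category was (or was not) triggered
    assessed_on: date
    in_production: bool = False
    follow_up_actions: list[str] = field(default_factory=list)


# Hypothetical example entry (values invented for illustration).
entry = AISystemInventoryEntry(
    name="CV screening assistant",
    business_owner="HR",
    intended_use="Shortlisting applicants for interview",
    risk_category=ActRiskCategory.HIGH_RISK,  # recruitment is a listed High Risk use
    assessment_rationale="Employment/recruitment use falls within the Act's High Risk list",
    assessed_on=date(2024, 9, 1),
    follow_up_actions=["Monitor secondary legislation on High Risk requirements"],
)
```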

Given the relatively short timeframe before the Prohibited Risk restrictions come into force, businesses should initially focus on identifying any “prohibited” AI practices and, if any are found, implement a plan for ending them. As these prohibited practices are likely to be considered high risk in almost any regulated jurisdiction, it may be worth considering phasing out these uses of AI irrespective of whether businesses ultimately determine that they fall within the territorial scope of the Act.

Businesses may then want to consider whether their AI systems fall into any of the other risk categories and, if so, consider how to approach governance and compliance oversight of those AI systems.

Finally, businesses that have not done so already should spend time now developing a controls framework for AI that focuses on managing operational risk and that prioritizes safe, secure, and high-value uses of AI. This could include a system for identifying existing and proposed uses of AI, assessing the risks of those uses, and a process for documenting approvals and risk-accepting any uses (or categories of uses) that go into production. The goal should be a controls framework that works well for the particular business and its existing policies and procedures, and not just one that, on paper, meets expected future regulatory requirements; getting this right can take time.
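
For the approval and risk-acceptance step described above, a minimal sketch (again, the structure and field names are our own illustration rather than anything mandated by the Act) might look like this:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class UseCaseApproval:
    """Documented approval for an AI use case moving into production."""
    use_case: str
    approver: str                        # e.g. an AI governance committee
    approved_on: date
    residual_risks_accepted: list[str]   # risks knowingly accepted at approval
    conditions: list[str]                # e.g. human review, periodic re-assessment


# Hypothetical example (values invented for illustration).
approval = UseCaseApproval(
    use_case="Internal chatbot for HR policy questions",
    approver="AI Governance Committee",
    approved_on=date(2024, 10, 15),
    residual_risks_accepted=["Occasional inaccurate answers"],
    conditions=["Answers reviewed by HR before use in any employment decision"],
)
```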

****

The authors would like to thank Debevoise Summer Law Clerk Gonzalo Nuñez for his work on this Debevoise Data Blog.

 

To subscribe to the Data Blog, please click here.

The cover art used in this blog post was generated by DALL-E.

Author

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Author

Matthew Kelly is a litigation counsel based in the firm’s New York office and a member of the Data Strategy & Security Group. His practice focuses on advising the firm’s growing number of clients on matters related to AI governance, compliance and risk management, and on data privacy. He can be reached at makelly@debevoise.com.

Author

Robert Maddox is International Counsel and a member of Debevoise & Plimpton LLP’s Data Strategy & Security practice and White Collar & Regulatory Defense Group in London. His work focuses on cybersecurity incident preparation and response, data protection and strategy, internal investigations, compliance reviews, and regulatory defense. In 2021, Robert was named to Global Data Review’s “40 Under 40”. He is described as “a rising star” in cyber law by The Legal 500 US (2022). He can be reached at rmaddox@debevoise.com.

Author

Martha Hirst is an associate in Debevoise's Litigation Department based in the London office. She is a member of the firm’s White Collar & Regulatory Defense Group, and the Data Strategy & Security practice. She can be reached at mhirst@debevoise.com.