When drafting policies on the use of artificial intelligence, one challenge that many businesses face is how to define AI and, relatedly, whether their AI governance and compliance programs should apply to models that do not meet that definition.

Choosing a Regulatory Definition of AI

One common approach is to adopt the definition used in a regulation or official government guidance that applies to the company’s use of AI, such as the Biden Executive Order, which defines “Artificial Intelligence” or “AI” as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”

The advantage of this approach is that it usually insulates the company from regulatory scrutiny over the wording of its AI definition. But official definitions of AI are often the product of agency compromises, which can lead to ambiguities. For example, it is not entirely clear whether complex algorithms that make predictions about who is likely to repay a loan, but do not use machine learning, would be covered by the above definition.

Risks of Using Ambiguous Regulatory AI Definitions

Uncertainties as to what is covered by a particular definition of AI may benefit regulators who wish to maintain flexibility as to the scope of certain obligations, but they can create significant compliance problems for companies trying to set out clear rules for responsible AI use in their organizations.

If an ambiguous definition of AI is interpreted too narrowly, such that algorithms that do not use machine learning are scoped out, some higher-risk models that are subject to regulatory obligations may not receive appropriate compliance scrutiny because they are mistakenly viewed as outside the policy, leading to increased legal, operational, and reputational risks.

By contrast, if an ambiguous definition of AI is interpreted too expansively, some lower-risk models that are not covered by regulatory obligations may be subject to unnecessary compliance burdens because they are mistakenly viewed as covered, which can slow innovation. It can also leave higher-risk models without appropriate scrutiny because resources are misallocated to lower-risk models, again resulting in legal, operational, and reputational risks.

One additional consideration in scoping an AI definition broadly is that labeling certain models as “AI” for compliance purposes, when they are not in fact AI, may carry some AI washing risk, as both the SEC and the FTC have warned companies against characterizing models as AI when they are not.

Other Options for Defining AI

Some alternative approaches that companies have adopted to address these challenges include:

  • Adding Examples to Official Definitions – Some firms use official definitions of AI for their internal policies but then also list several examples of models that are (and are not) covered by the definition in the policy, periodically updating those examples based on edge cases that arise and are resolved.
  • Using Clear, Simple Definitions – Some businesses have rejected official definitions and instead adopt very simple, clear definitions of AI, such as “any use of generative AI or machine learning.” These definitions are also usually followed by examples of which models are and are not covered.
  • Scoping the Policies to Match Regulatory Obligations – Another approach is to match policies directly to regulatory obligations, explicitly recognizing that the policy reaches beyond AI. For example, rather than expanding the definition of AI to cover high-risk models that do not involve machine learning, some AI policies state that the obligations in the policy apply to models that are subject to automated decision-making regulations, even if those models do not meet the definition of AI. This is common in the life insurance industry, where firms are expanding the scope of their AI policies to cover the use of external data with underwriting models, even if no machine learning is involved, because of the increasing regulatory scrutiny of those algorithms. The same holds true for hiring algorithms that are used to screen resumes but do not involve generative AI or machine learning.
  • Separating Generative AI from Traditional AI – One additional approach is to have a separate policy that covers generative AI because of the risks associated with the use of those models (e.g., hallucinations, poor quality control, drift, loss of IP rights) that may not apply to certain use cases involving traditional AI models.

Tips for Deciding How to Define AI

Given these challenges, companies that are drafting AI definitions for internal policies or procedures should consider the following:

  • Is the current definition of AI clear as to which models currently in use (or likely to be used in the near future) are covered?
  • Are there high-risk models that are subject to regulatory obligations but are not covered by the AI compliance program? If so, are the risks associated with those models adequately assessed and reduced by another governance and compliance structure in the company? If not, should those models be covered by the AI program, or should some other compliance program be created for them?

The cover art used in this blog post was generated by DALL-E.

The Debevoise AI Regulatory Tracker (DART) is now available for clients to help them quickly assess and comply with their current and anticipated AI-related legal obligations, including municipal, state, federal, and international requirements.

Authors

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy, and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Matthew Kelly is a litigation counsel based in the firm’s New York office and a member of the Data Strategy & Security Group. His practice focuses on advising the firm’s growing number of clients on matters related to AI governance, compliance and risk management, and on data privacy. He can be reached at makelly@debevoise.com.

Melyssa Eigen is an associate in the Litigation Department. She can be reached at meigen@debevoise.com.

Ned Terrace is an associate in the Litigation Department. He can be reached at jkterrac@debevoise.com.