When we help clients draft their AI policies, we aim to ensure that employees understand the terms that we use. For example, we often define “AI” simply as generative AI, including the outputs of tools like ChatGPT, Gemini and Claude.
Reviewing and approving a new AI System is hard enough without the added difficulty of not knowing exactly what risks you are assessing. Is it a particular AI tool, the data, the specific use, the users, or some combination of those elements? And official definitions are not helpful. Laws like the Colorado AI Act and the EU AI Act define “AI Systems” in broadly similar terms such as:
a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Definitions like these are not useful for people who need to make actual decisions about AI in the workplace. So we are sharing the practical definitions that we use when we assist our clients with AI adoption.
Definitions
1. Tool is the software delivery method or the AI model type. Examples include Gemini 3.0, Claude 4.5 Opus, Enterprise ChatGPT 5.1 Pro, Westlaw Precision, Harvey, Legora, and Hebbia.
2. Data is the information used to train or operate the model, including reference data and runtime inputs. Examples include publicly available data, licensed data like LexisNexis, firm data, employee personal data, client confidential data, and MNPI.
3. Use is what we are doing with AI (also referred to as the task, purpose, or function). Examples include resume screening, generating images for the firm’s website, customer service chatbots, final document proofreading, and real-time translation of a Zoom meeting.
4. Users are the people who operate the AI Tool. Examples include the in-house legal department, advanced coders, HR professionals, all lawyers at a law firm, all employees, customers with an online account, and the general public.
5. Consumers are the people who will have access to the output from the tool, and who may (but need not) be the same as the Users. Examples include clients of law firms that use AI for legal work and customers who view a company’s AI-generated content on social media.
6. AI System = AI Application = Tool + Data = (1+2). We think this formulation is largely consistent with the more complex definitions of AI System and AI Application provided by the NIST, OECD, and EU AI Act frameworks.
7. Use Case = Tool + Data + Use + Users + Consumers = (1+2+3+4+5).
8. Mitigants are measures or controls designed to reduce the risks associated with an AI Use Case without significantly reducing the Use Case’s value. We refer to these mitigants collectively as Mitigation. Examples include human review by subject matter experts (SMEs), access controls, logging and monitoring, training, and technical safeguards such as data retention controls, content filters, or prompt restrictions.
9. AI Solution = Use Case + Mitigants = (7+8).
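For readers who think in code, the definitions above compose naturally, since a Use Case is an AI System plus its Use, Users, and Consumers, and an AI Solution is a Use Case plus its Mitigants. This is a minimal sketch of that composition; all class and field names are our own illustrative choices, not drawn from any statute or standard.

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    tool: str             # (1) the software delivery method or AI model type
    data: list[str]       # (2) training, reference, and runtime data

@dataclass
class UseCase(AISystem):
    use: str              # (3) the task, purpose, or function
    users: list[str]      # (4) the people operating the tool
    consumers: list[str]  # (5) the people with access to the output

@dataclass
class AISolution:
    use_case: UseCase                                    # (7)
    mitigants: list[str] = field(default_factory=list)   # (8) risk controls

# Illustrative instance using the cover-art example from this post
cover_art = AISolution(
    use_case=UseCase(
        tool="Gemini 3 Nano Banana Pro",
        data=["draft blog post", "prompt"],
        use="creating cover art for this blog post",
        users=["Avi Gesser"],
        consumers=["blog readers"],
    ),
    mitigants=["human review of the image by SMEs"],
)
```

The nesting mirrors the arithmetic in the definitions: strip the mitigants from an AI Solution and you are left with a Use Case; strip the use, users, and consumers from a Use Case and you are left with an AI System.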
Example: Creating the Cover Art for this Blog Post
- Tool = Gemini 3 Nano Banana Pro.
- Data = an early draft of this blog post + an instruction to create a cover image for the blog post with a retro-sci-fi theme that will blow people’s minds.
- Use or Task = creating cover art for this blog post.
- User = Avi Gesser, who created the cover art image.
- Consumers = you.
- AI System or AI Application = Nano Banana Pro + draft blog post + prompt.
- Use Case = Nano Banana Pro + draft blog post + prompt + task + Avi Gesser + you.
- Mitigation = Achutha Raman and Diane Bernabei, as human-in-the-loop SMEs, reviewed the image closely to make sure that it is clear, accurate, and appropriate, that it does not omit anything important, and that the words have no typos or other errors.
- AI Solution = Nano Banana Pro + draft blog post + prompt + task + Avi Gesser + you + Achutha Raman and Diane Bernabei’s check of the output.
We recognize that these definitions are not perfect, and there are certainly choices we’ve made where reasonable minds may have different views (e.g., whether training data should go in the Data element or the Tool element). This blog post is meant to be a starting point for a dialogue among AI practitioners on workable terms; we are continually refining these and other definitions and welcome any suggestions for improvement. Successful AI adoption is extremely challenging, but having a simple, shared vocabulary for what we are all trying to do should not be.
* * *
To subscribe to the Debevoise Data Blog, please click here.
The Debevoise STAAR (Suite of Tools for Assessing AI Risk) is a monthly subscription service that provides Debevoise clients with an online suite of tools to help them with their AI adoption. Please contact us at STAARinfo@debevoise.com for more information.
The amazing cover art for this blog was generated by Gemini 3 Nano Banana Pro.