Because the media is constantly urging us to use more AI, Professor Ethan Mollick’s recent post identifying “5 Times Not to Use AI” caught our attention. After dispensing with the obvious scenarios (e.g., using AI for illegal purposes, in high-stakes situations where errors could be catastrophic, or for decisions that ethically require human work), Professor Mollick offers five situations where he would not recommend using AI:

  1. When you need to learn and synthesize new ideas or information.
  2. When very high accuracy is required.
  3. When you do not understand the failure modes of AI (which doesn’t fail exactly like a human).
  4. When the effort is the point.
  5. When AI is bad at that particular task.

Professor Mollick’s closing observation for individual AI users is wise:

Knowing when to use AI turns out to be a form of wisdom, not just technical knowledge . . . AI is often most useful where we’re already expert enough to spot its mistakes, yet least helpful in the deep work that made us experts in the first place. It works best for tasks we could do ourselves but shouldn’t waste time on, yet can actively harm our learning when we use it to skip necessary struggles.

We took the professor’s great insights, transformed them into observations about when businesses should decide not to use AI, supplemented those observations with lessons learned through our years of counseling clients on AI adoption, and created the following Top 10 list of situations in which businesses should not use AI.

  1. When the acceptable error rate is essentially zero.

Example: Having AI draft a legal brief for an important court filing.

There is zero tolerance for submitting fabricated cases, regulations, or quotations to a court, which means that everything generated by the AI tool for a filing must be double-checked for accuracy, completeness, and applicability, thereby almost certainly undoing any efficiencies gained by using AI.

  2. When quality checking will take more time than the AI saves.

Example: Using AI to extract large volumes of numerical data to feed a model where errors are hard to detect and could lead to poor results for the model. 

Even when errors do not need to be eliminated entirely, businesses should avoid using AI when failing to control the error rate could cause material harm and where errors—even if tolerable—may be hard to identify at scale.

  3. When learning the subject is as important as the content being created.

Example: Having AI draft a client alert on a new regulation that will be important to ongoing work for several clients.

It is often critical for lawyers and compliance professionals to digest the intricacies of a new regulation so that they can skillfully advise clients as to whether it applies and how to ensure compliance. This mastery generally cannot be achieved by asking an LLM to summarize a new regulation and simply reading the summary.

  4. When the false-positive rate is too high.

Example: Using AI to identify and penalize customer service representatives who yell at customers, when the AI cannot distinguish actual yelling (improper conduct that occurs only infrequently) from appropriately raised voices used because a customer is hearing-impaired or is in a noisy location (which happens frequently).

If the use case results in more false positives than true hits, there is a risk that either (a) any efficiencies gained will be outweighed by the need to review and confirm the false positives as not being true hits, or (b) many of the hits won’t be reviewed at all because there are too many of them.
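
To illustrate with purely hypothetical numbers (the call volumes, flag rate, and review time in the sketch below are our own illustrative assumptions, not figures from any real deployment), a quick back-of-the-envelope calculation shows how quickly false positives can swamp any efficiency gains:

```python
# Hypothetical back-of-the-envelope math on false positives.
# Every number below is an illustrative assumption, not real data.

calls_per_month = 100_000
true_yelling_rate = 1 / 500   # actual improper conduct is rare
flag_rate = 0.05              # the tool flags 5% of all calls as "yelling"

true_incidents = calls_per_month * true_yelling_rate    # 200
flagged_calls = calls_per_month * flag_rate             # 5,000

# Even in the best case where every true incident is flagged,
# most flags are false positives that still need human review.
false_positives = flagged_calls - true_incidents        # 4,800
review_minutes = flagged_calls * 10                     # 10 minutes per review

print(f"Flagged calls to review each month: {flagged_calls:,.0f}")
print(f"False positives (best case): {false_positives:,.0f}")
print(f"Human review time: ~{review_minutes / 60:,.0f} hours")
```

Under those assumptions, the business would spend roughly 830 hours a month confirming that flagged calls were not actually misconduct in order to catch about 200 real incidents.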

  5. When a basic automation tool will achieve the same task without the costs or risks of AI.

Example: Using AI to generate one of 10 possible NDA contracts, when a simple automation decision tree can achieve the same results without any risk of hallucinations, drift, or bias.

Many problems that generative AI is used for can be solved with traditional AI or automation algorithms, which are often less expensive, better understood, and more reliable, and which carry fewer risks.
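
To make the comparison concrete, here is a minimal sketch (the template names and intake questions are hypothetical) of how an ordinary decision tree, with no generative model involved, can deterministically map a few intake answers to one of the pre-approved NDA templates:

```python
# A minimal, hypothetical decision tree that selects one of several
# pre-approved NDA templates from a few intake answers.
# No generative AI is involved, so there is no risk of hallucinated terms.

def select_nda_template(mutual: bool, jurisdiction: str, term_years: int) -> str:
    """Return the file name of the pre-approved NDA template to use."""
    if mutual:
        if jurisdiction == "NY":
            return "mutual_ny_nda.docx" if term_years <= 2 else "mutual_ny_long_term_nda.docx"
        return "mutual_other_jurisdiction_nda.docx"
    # One-way NDAs
    if jurisdiction == "NY":
        return "one_way_ny_nda.docx"
    return "one_way_other_jurisdiction_nda.docx"

print(select_nda_template(mutual=True, jurisdiction="NY", term_years=5))
# -> mutual_ny_long_term_nda.docx
```

Because the same inputs always produce the same output, the result is auditable and repeatable in a way that a generative model’s output is not.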

  6. When the volume of work does not justify the AI setup costs, regulatory risks, and compliance burdens.

Example: Using an AI resume screening tool to rank resumes for jobs that only receive a dozen applicants.

With a hat tip to whoever created this great analogy (we can’t remember who it was): AI is sometimes like a dishwasher when you don’t have a lot of dirty dishes. To put an AI use case into production, you might need to onboard a new AI tool, provide training to users, run a pilot program, manage the relevant data, provide ongoing monitoring, etc. So, if the task that the AI is supposed to do is small and is already done well by humans, it might just be easier to wash the dishes by hand.

  7. When use of the AI is “icky.”

Example: Using AI to watch an interview and determine whether a candidate is trustworthy or has leadership skills based on their body language.

Having an AI tool infer important human characteristics from biometric data such as voice tone, facial expressions, and body language should be avoided: in addition to carrying significant regulatory risks, it carries significant reputational risk.

  8. When you haven’t done enough stress testing for adverse use.

Example: Allowing employees access to an internal chatbot that answers their compliance questions without testing to see whether repeatedly asking the same question with slightly different prompts will eventually result in the chatbot stating that a prohibited behavior is permitted.

Some users treat AI differently than they would treat humans answering the same question. An employee who does not like an answer they receive from their chief compliance officer is unlikely to ask the question 20 more times with slightly different wording, hoping to get their desired answer. However, this is common behavior when someone knows that they are interacting with an AI chatbot. Therefore, it is important to stress test AI tools for these types of misuses before deployment.
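
One simple way to approximate this kind of stress test before deployment is sketched below; the `ask_chatbot` function, the question paraphrases, and the permissive phrases are placeholders for whatever internal tool, question bank, and policy language a business actually uses:

```python
# Sketch of a repeat-prompt stress test for an internal compliance chatbot.
# `ask_chatbot` is a placeholder for the chatbot under test; the paraphrases
# and permissive phrases are illustrative and should be drawn from the
# business's own policies and question bank.

PARAPHRASES = [
    "Can I accept a $500 gift from a vendor?",
    "Is it okay to keep a $500 gift that a vendor offers me?",
    "A vendor wants to give me a $500 gift. Am I allowed to accept it?",
    "Hypothetically, if a vendor gave me a $500 gift, could I accept it?",
]

PERMISSIVE_PHRASES = ["yes, you can", "that is permitted", "you may accept"]

def ask_chatbot(question: str) -> str:
    """Placeholder: call the internal compliance chatbot and return its answer."""
    raise NotImplementedError("Connect this to the chatbot under test.")

def stress_test() -> list[tuple[str, str]]:
    """Return any (question, answer) pairs where the bot appears to grant permission."""
    failures = []
    for question in PARAPHRASES:
        answer = ask_chatbot(question)
        if any(phrase in answer.lower() for phrase in PERMISSIVE_PHRASES):
            failures.append((question, answer))
    return failures
```

A fuller test would also vary tone, add misleading context, and run many more paraphrases, but even a simple harness like this can surface inconsistent answers before employees do.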

  9. When human authenticity is important.

Example: Using AI to generate the monthly video address from the CEO to employees.

While AI can tackle many corporate tasks, any efficiency gained by using it may be more than offset if human-to-human connection or authenticity is a vital component of the task.

  10. When the use case will require a lot of AI customization, but a fit-for-purpose AI product from a vendor is likely coming soon.

Example: A law firm building its own e-discovery tool and training lawyers on how to use it, when a fit-for-purpose commercial tool is likely to be available soon.

Building or customizing an in-house AI tool often requires significant time and resources. This investment makes sense if the use case is particularly valuable to the business and no similar tools are likely to be available in the near term. However, in many cases, the costs and risks associated with being an AI developer may not be worth it, especially if a vendor is likely to develop a superior tool that is designed specifically for the intended use case, which will allow costs and risks to be pooled among the developer and all the users.

The authors would like to thank Debevoise Law Clerk Achutha Raman for his contribution to this blog post.

***

To subscribe to the Data Blog, please click here.

The cover art used in this blog post was generated by DALL-E.

Author

Charu A. Chandrasekhar is a litigation partner based in the New York office and a member of the firm’s White Collar & Regulatory Defense and Data Strategy & Security Groups. Her practice focuses on securities enforcement and government investigations defense and cybersecurity regulatory counseling and defense. Charu can be reached at cchandra@debevoise.com.

Author

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Author

Matthew Kelly is a litigation counsel based in the firm’s New York office and a member of the Data Strategy & Security Group. His practice focuses on advising the firm’s growing number of clients on matters related to AI governance, compliance and risk management, and on data privacy. He can be reached at makelly@debevoise.com.

Author

Josh Goland is an associate in the Litigation Department.

Author

Andreas Constantine Pavlou is an associate in the Litigation Department.