Online customer service chatbots have been around for years, allowing companies to triage customer queries with pre-programmed responses that address customers’ most common questions. Now, Generative AI (“GenAI”) chatbots have the potential to change the customer service landscape by answering a wider variety of questions, on a broader range of topics, and in a more nuanced and lifelike manner. Proponents of this technology argue that companies can achieve better customer satisfaction while reducing the costs of human-supported customer service. But the risks of irresponsible adoption of GenAI customer service chatbots, including increased litigation and reputational risk, could eclipse their promise.

We have previously discussed risks associated with adopting GenAI tools, as well as measures companies can implement to mitigate those risks. In this Debevoise Data Blog post, we focus on customer service chatbots and provide some practices that can help companies avoid legal and reputational risk when adopting such tools.

The Legal Landscape

There are few laws that directly regulate how companies can use GenAI chatbots to assist with customer service, although the area is of growing interest to regulators both in the United States and internationally.

On March 13, 2024, Utah became one of the first U.S. states to explicitly regulate the use of AI chatbots with the enactment of its Artificial Intelligence Policy Act (the “UT AIPA”). The UT AIPA is a relatively simple statute aimed at protecting consumer rights in the context of customer-facing AI tools. It contains two provisions that companies using GenAI chatbots must comply with:

  • The first is a transparency measure: if asked, all companies must inform consumers that they are interacting with GenAI and not a human, and those in regulated occupations must proactively disclose when a consumer is interacting with AI (or with materials generated by AI).
  • The second is a measure that prevents companies from asserting as a defense to liability under Utah consumer protection law that a GenAI system, and not the company deploying it, was responsible for any violations.

The UT AIPA is just the latest example of regulatory efforts to encourage transparency and to hold companies accountable for the actions of the AI systems that they deploy, including both GenAI chatbots and automated decision-making tools. Other examples include proposed regulations such as the EU AI Act and the UK AI Regulation Bill, as well as guidelines from the California State Bar and the Federal Trade Commission (the “FTC”).

Even in the absence of AI-specific regulations, regulators may be able to act against companies based on harms caused by their use of customer service chatbots and other AI tools. In a December 2023 Joint Statement of Enforcement, four federal agencies (the Consumer Financial Protection Bureau, the Justice Department, the Equal Employment Opportunity Commission, and the Federal Trade Commission) promised to “vigorously” use their existing legal authorities to bring enforcement actions when AI tools “perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.” Examples of such existing authorities include:

  • Federal Unfair or Deceptive Acts and Practices (“UDAP”) laws. Both the FTC and the Consumer Financial Protection Bureau (the “CFPB”) have characterized AI-related activities as a source of potential UDAP liability and have emphasized that they will aggressively police AI-supported activities when they regard them as involving unfair or deceptive treatment of consumers. The CFPB notes that when “chatbots provide inaccurate information regarding a consumer financial product or service, there is potential to cause considerable harm,” which could lead to consumers selecting the wrong product or service or an incorrect assessment of fees or penalties.
  • Anti-Discrimination Statutes. Errors in information provided to customers that implicate a protected class or group of individuals, such as a GenAI customer service chatbot that gives incorrect information pertaining to legally guaranteed rights or benefits, could give rise to liability under anti-discrimination rules and regulations.

These federal authorities operate alongside state laws and regulations, which are administered by state and local authorities and which include consumer welfare statutes and analogs to federal UDAP laws that also could apply to GenAI tools.

Risks of GenAI Chatbots

Companies should be aware of the regulatory and litigation risks associated with consumer-facing chatbots, which could arise from customer complaints to government agencies, whistleblower reports, or regulatory examinations. These risks are especially elevated for companies that operate in heavily regulated industries, such as banking and insurance. Among other risks to contemplate for a potential chatbot roll-out, companies should consider that:

  • Chatbots will hallucinate and make errors. Hallucination, bias, and inaccurate responses are common. While there are ways to reduce these kinds of mistakes, they are unlikely to be eliminated entirely in the near future. And even a small number of errors on a percentage basis can add up, in both financial and reputational damage, at scale. Humans also make mistakes, but often not the same kinds of mistakes that GenAI systems make. So, even if GenAI chatbots make fewer errors overall, those errors could be more serious or distributed differently than human-made errors. For example, if a chatbot’s errors tended to favor the company more often than human errors do, that could be a source of increased regulatory and reputational risk.
  • GenAI comes with a lack of transparency. If a customer claims to have been deceived or subjected to an unfair act or practice by a chatbot, it may be hard to audit the underlying reasoning to rebut any such claims. This risk is especially acute if the only responses provided for important questions, or for decisions that affect a customer’s rights and liberties, come from a chatbot that has not been built to explain its reasoning.
  • A company can be bound by what its chatbot tells consumers. Even if the chatbot “hallucinates” or otherwise provides an incorrect output, companies may be held liable for promises and statements made on their behalf to customers. Whether a particular statement will bind the provider of a chatbot is likely to be a matter of common law doctrine (including jurisdiction-specific contract law), but in the absence of fraud or unclean hands on the part of the consumer, courts may well enforce bargains based on the apparent authority that companies give their customer-facing AI service tools. For example, in a highly publicized case, a tribunal in Canada held an airline liable when a chatbot on the airline’s website incorrectly promised a customer a discount that the airline was not actually offering. Although the chatbot provided a link to the underlying policy (which would have reflected that the discount was not in fact available), the tribunal found that the company was still “responsible for all the information on its website,” including the incorrect chatbot output.
  • Laws still apply when a customer is talking to a chatbot. Companies can be held liable for violations of antidiscrimination, consumer rights, and privacy laws, even if the violation comes from a chatbot and not a human employee.

Practices to Consider When Adopting AI Chatbots

Companies should consider the above risks when deciding how and when to use GenAI in their customer service experience. Before adopting any AI tools, companies should consider developing AI policies and pilot programs to help decide whether the tools, including both external-facing chatbots and internal employee tools, are appropriate. If a company does choose to introduce GenAI customer service chatbots, it should consider one or more of the following measures to help mitigate risks to both the company and its customers:

  • Inform consumers that they are talking with a chatbot. Consider informing consumers when they are directly interacting with a GenAI chatbot and not talking to a human representative. Also consider providing customers with the option to connect directly with a human representative.
  • Ensure that chatbots are safe and accurate. Prior to implementing any GenAI customer service tools, companies should consider conducting extensive testing to confirm that the chatbot provides accurate, unbiased, and proper responses. Testing should also continue on an ongoing basis after the AI tools go live. Regulators are particularly likely to disfavor the use of GenAI customer service chatbots that cause customers to receive inaccurate information when such errors are non-trivial and potentially avoidable. Accordingly, companies should strive to develop and deploy only those chatbots that will improve the overall customer experience.
  • Consider whether GenAI chatbots are appropriate for complicated high-impact inquiries. GenAI chatbots are most useful when assisting a company with responding to a large number of low-impact queries. If a company’s service representatives are often faced with unique, challenging, and high-impact questions, a GenAI chatbot might not be an appropriate solution.

For such high-impact inquiries, companies should consider one or more of the following (an illustrative sketch of how several of these measures could fit together appears after the list):

  • telling consumers that answers provided by the chatbot may not be accurate and that they should independently verify any information the chatbot provides;
  • stating expressly that the chatbot is not empowered to enter into any agreements, or to make any changes to any existing agreements, on behalf of the company;
  • including terms of use stating that any benefits offered by the chatbot may not be honored by the company if they are inconsistent with company policy and are not also provided elsewhere on the company website;
  • using chatbots as complex search engines that point customers to excerpts from pre-approved company documents or policy pages that should answer the customer’s questions, rather than generating a new answer to the question;
  • architecting chatbots with guardrails or other technical means of limiting the kinds of responses the chatbot will provide; or
  • having automatic escalation procedures that route such inquiries to a human employee, who may use the chatbot as an internal tool to help answer the customer’s inquiry.
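
To make these measures more concrete, below is a minimal sketch of how a few of them might fit together in code: a standing disclosure that the customer is talking to an automated assistant, responses limited to pre-approved policy excerpts rather than freshly generated text, and automatic escalation of high-impact or unanswerable inquiries to a human representative. All names in the sketch (such as handle_inquiry, APPROVED_ANSWERS, and HIGH_IMPACT_KEYWORDS) are illustrative assumptions rather than a reference to any particular vendor’s tooling.

```python
# Hypothetical sketch of chatbot guardrails: disclosure, pre-approved
# answers, and human escalation. Names and thresholds are illustrative
# assumptions, not a reference implementation.
from dataclasses import dataclass

DISCLOSURE = (
    "You are chatting with an automated assistant, not a human representative. "
    "Type 'agent' at any time to reach a person."
)

# Pre-approved excerpts the bot is allowed to quote verbatim (guardrail:
# the bot points to vetted text rather than generating a new answer).
APPROVED_ANSWERS = {
    "refund": "Refund policy excerpt: purchases may be returned within 30 days...",
    "shipping": "Shipping policy excerpt: standard delivery takes 5-7 business days...",
}

# Topics treated as high-impact and always escalated to a human.
HIGH_IMPACT_KEYWORDS = {"discrimination", "legal", "fee", "penalty", "cancel my contract"}


@dataclass
class BotReply:
    text: str
    escalated: bool


def handle_inquiry(message: str) -> BotReply:
    """Answer only from approved excerpts; escalate everything else."""
    lowered = message.lower()

    # Guardrail 1: automatic escalation for high-impact topics.
    if any(keyword in lowered for keyword in HIGH_IMPACT_KEYWORDS):
        return BotReply(
            text="This question needs a human representative; connecting you now.",
            escalated=True,
        )

    # Guardrail 2: respond only with pre-approved excerpts (no free generation).
    for topic, excerpt in APPROVED_ANSWERS.items():
        if topic in lowered:
            return BotReply(text=f"{DISCLOSURE}\n\n{excerpt}", escalated=False)

    # Fallback: do not guess; hand off to a human.
    return BotReply(
        text="I can't answer that reliably; routing you to a human agent.",
        escalated=True,
    )


if __name__ == "__main__":
    print(handle_inquiry("What is your refund policy?").text)
    print(handle_inquiry("Am I being charged an incorrect penalty fee?").text)
```

In a real deployment, checks like these would sit in front of whatever GenAI model generates conversational text, so that the model’s output never reaches the customer on topics the company has decided to reserve for humans or for vetted documents, and the routing logic would be paired with the ongoing testing program described above.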

***

To subscribe to the Data Blog, please click here.

The cover art used in this blog post was generated by DALL-E.

Author

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Author

Jim Pastore is a Debevoise litigation partner and a member of the firm’s Data Strategy & Security practice and Intellectual Property Litigation Group. He can be reached at jjpastore@debevoise.com.

Author

Matthew Kelly is a litigation counsel based in the firm’s New York office and a member of the Data Strategy & Security Group. His practice focuses on advising the firm’s growing number of clients on matters related to AI governance, compliance and risk management, and on data privacy. He can be reached at makelly@debevoise.com.

Author

Gabriel Kohan is a litigation associate at Debevoise and can be reached at gakohan@debevoise.com.

Author

Melissa Muse is an associate in the Litigation Department based in the New York office. She is a member of the firm’s Data Strategy & Security Group, and the Intellectual Property practice. She can be reached at mmuse@debevoise.com.

Author

Josh Goland is a law clerk in the Litigation Department.