On February 27, 2023, the FTC released guidance entitled “Keep Your AI Claims in Check” (“AI Claims Blog Post”), reminding companies that false or unsubstantiated claims about a product’s efficacy are core areas of FTC enforcement activity. We have previously written on how the FTC has entered a new era under FTC Chair Lina Khan. It has asserted its authority to regulate AI under Section 5 of the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act, and it has imposed remedies such as algorithmic disgorgement on models that it concluded were built using unfair and/or deceptive data collection practices. The FTC’s April 2021 blog post on truth, fairness, and equity in the use of AI warned companies not to exaggerate what their algorithms can do and not to be misleading about their use of data. Now the FTC is clearly signaling that it intends to pursue false advertising claims involving artificial intelligence products and services.

Statements About AI that Companies Should Avoid

In the AI Claims Blog Post, the FTC characterizes “artificial intelligence” as a hot marketing term that is susceptible to overuse and abuse, especially because it has many possible definitions and may be used to describe different products and services, with varying degrees of faithfulness to the actual underlying technology. The guidance lists a few areas of focus for the FTC in evaluating AI-related advertising claims:

  • Exaggerations as to what an AI product can actually do, noting that making trustworthy predictions of human behavior is extremely difficult, and that performance claims would be considered deceptive if they lack scientific support, or if they apply only to certain types of users or under certain conditions.
  • Promises that an AI product does something better than a non-AI product, noting that companies will need adequate proof for any comparative claims, and if such proof is impossible to get, then the claim should not be made.
  • Identification of foreseeable risks, noting that, if something goes wrong because the AI fails or yields biased results, then companies may not be able to shift blame to third-party developers of the technology, nor will they be able to deny responsibility because the technology is a “black box.”
  • Whether the products actually utilize AI at all, noting that FTC technologists can investigate the underlying model to assess the accuracy of the company’s claims, and that using an AI tool in the development process is not the same as a product having AI in it.

The FTC’s Authority to Regulate Deceptive Claims

The FTC has long been focused on deceptive advertising under its Section 5 authority. It has issued guidance for advertising and marketing on the internet, reiterating its prior policy statements on Deception and Substantiation. Under the FTC’s interpretation of Section 5 of the Act, a representation, omission, or practice is deceptive if: (1) it is likely to mislead consumers acting reasonably under the circumstances; and (2) it is material.

The FTC’s Policy Statement Regarding Advertising Substantiation requires a “reasonable basis” in support of claims, typically based upon “the amount of substantiation experts in the field believe is reasonable.” For AI claims, companies should evaluate on a case-by-case basis whether AI experts should be consulted to assess whether their substantiation is sufficient.

For AI in particular, the FTC has brought cases in the areas of privacy and data security, including as they relate to companies’ representations about data and safeguards associated with their AI models. In light of the latest AI Claims Blog Post, we can expect the FTC to ramp up its scrutiny and enforcement actions on AI from the advertising angle as well.

Other Legal Risks Associated with Inaccurate Statements about AI

There are other legal frameworks that prohibit misleading statements with respect to AI products and services, including securities laws and the False Claims Act. For example, the home-listing company Zillow had to wind down its “Zillow Offers” business because its AI-based home-price-prediction tool did not accurately forecast house prices, resulting in millions of dollars of losses. Those losses led to a securities class action suit alleging that the company made materially false and misleading statements in its SEC filings about its financial outlook. The case is still ongoing.

The SEC has also warned about misleading claims for algorithms in its Guidance on Robo-Advisors, which focused on the need for clear and adequate disclosures about the capabilities, limitations, and risks inherent in these tools. The SEC examined robo-advisors’ marketing and performance advertising for compliance with the Advertising Rule. It found that advisers often provided inadequate or insufficient disclosures, made vague or unsubstantiated claims, and used “materially misleading performance advertisements.”

In addition, companies that are federal government contractors should consider risks under the False Claims Act (the “FCA”). Recently, a California-based aerospace and defense contractor agreed to pay $9 million to resolve allegations that it had misrepresented compliance with cybersecurity requirements in certain federal government contracts. While this particular action was brought as part of the DOJ’s newly-minted Civil Cyber-Fraud Initiative, it is certainly possible that similar actions could be brought in the future for AI-related misrepresentations in government contracting. State Attorneys General may similarly bring false advertising claims predicated on violations of state deceptive trade practices statutes or other consumer protection laws.

Takeaways

  1. AI Definition: Consider creating an internal definition of what can be appropriately characterized as AI, to avoid allegations that the company is falsely claiming that a product or service utilizes artificial intelligence when it merely uses an algorithm or simple non-AI model. This is a particularly complicated exercise because there is no generally accepted definition of AI, and U.S. regulators themselves often use very broad definitions of AI, such as “the capability of a machine to imitate intelligent human behavior” or a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” Even the FTC in its guidance pointed out that “AI” is an ambiguous term. Having a reasonable internal definition of AI that is used consistently in advertising of AI products and services may reduce the risk of deceptive marketing allegations.
  2. Inventory: Consider creating an inventory of public statements about the company’s AI products and services.
  3. Education: Educate your marketing compliance teams on the FTC guidance and on the issues with the definition of AI. Because this is a nascent issue, marketing compliance teams may not understand the regulatory focus on these claims.
  4. Review: Consider having a process for reviewing all current and proposed public statements about the company’s AI products and services to ensure that they are accurate, can be substantiated, and do not exaggerate or overpromise. The review process may include both a legal and a technical review. Data and AI counsel may wish to take a more active role in marketing review as this field develops.
  5. Vendor Claims: For AI systems that are provided to the company by a vendor, be careful not to merely repeat vendor claims about the AI system without ensuring their accuracy. Include such vendor claims in the review process above.
  6. Risk Assessments: For high-risk AI applications, companies should consider conducting impact assessments to determine foreseeable risks and how best to mitigate those risks, and then consider disclosing those risks in external statements about the AI applications.


The Debevoise Artificial Intelligence Regulatory Tracker (DART) is now available for clients to help them quickly assess and comply with their current and anticipated AI-related legal obligations, including municipal, state, federal, and international requirements.

The cover art used in this blog post was generated by DALL-E.

Author

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Author

Erez Liebermann is a litigation partner and a member of the Debevoise Data Strategy & Security Group. His practice focuses on advising major businesses on a wide range of complex, high-impact cyber-incident response matters and on data-related regulatory requirements. He can be reached at eliebermann@debevoise.com.

Author

Jim Pastore is a Debevoise litigation partner and a member of the firm’s Data Strategy & Security practice and Intellectual Property Litigation Group. He can be reached at jjpastore@debevoise.com.

Author

Paul D. Rubin is a corporate partner based in the Washington, D.C. office and is the Co-Chair of the firm’s Healthcare & Life Sciences Group and the Chair of the FDA Regulatory practice. His practice focuses on FDA/FTC regulatory matters. He can be reached at pdrubin@debevoise.com.

Author

Christopher S. Ford is a counsel in the Litigation Department who is a member of the firm’s Intellectual Property Litigation Group and Data Strategy & Security practice. He can be reached at csford@debevoise.com.

Author

Anna R. Gressel is an associate and a member of the firm’s Data Strategy & Security Group and its FinTech and Technology practices. Her practice focuses on representing clients in regulatory investigations, supervisory examinations, and civil litigation related to artificial intelligence and other emerging technologies. Ms. Gressel has a deep knowledge of regulations, supervisory expectations, and industry best practices with respect to AI governance and compliance. She regularly advises boards and senior legal executives on governance, risk, and liability issues relating to AI, privacy, and data governance. She can be reached at argressel@debevoise.com.

Author

Mengyi Xu is an associate in Debevoise’s Litigation Department and a Certified Information Privacy Professional (CIPP/US). As a member of the firm’s interdisciplinary Data Strategy & Security practice, she helps clients navigate complex data-driven challenges, including issues related to cybersecurity, data privacy, and data and AI governance. Mengyi’s cybersecurity and data privacy practice focuses on incident preparation and response, regulatory compliance, and risk management. She can be reached at mxu@debevoise.com.

Author

Melissa Muse is an associate in the Litigation Department based in the New York office. She is a member of the firm’s Data Strategy & Security Group and the Intellectual Property practice. She can be reached at mmuse@debevoise.com.