We have previously written about the legal risks that companies face if they oversell the capabilities of their AI systems, a practice known as “AI washing.” In particular, the FTC and the SEC have each recently made clear that AI washing is a priority for investigations and enforcement. First, the FTC warned businesses that it may use its authority under Section 5 of the FTC Act to bring enforcement actions against companies making deceptive AI-related claims, including companies that:

  • Exaggerate what their AI systems can actually do;
  • Make claims about their AI systems that lack scientific support or that apply only under certain limited conditions;
  • Make unfounded promises that their AI systems do something better than non-AI systems or a human;
  • Fail to identify known likely risks associated with their AI systems; or
  • Claim that one of their products or services utilizes AI when it does not.

The FTC is also holding a virtual summit on January 25, 2024, to discuss key developments in AI and how those developments affect competition and consumers. The SEC recently issued a warning to companies similar to the FTC’s. On December 5, 2023, speaking at the “AI: Balancing Innovation and Regulation” summit in Washington, D.C., SEC Chair Gary Gensler analogized “AI washing” to “greenwashing,” declaring that “[o]ne shouldn’t greenwash and one shouldn’t AI wash.” Chair Gensler cautioned businesses to avoid making misleading claims about their AI capabilities or the extent to which they are using AI.

As more companies adopt generative AI as part of their core business functions, certain well-known risks associated with generative AI persist (e.g., quality control, privacy, IP, data-use limitations, cybersecurity, bias, and transparency). At the same time, companies should be aware of additional regulatory, reputational, and litigation risks arising from any claims about AI programs that do not match the current capabilities of these technologies.

Regulatory Risks

In August 2023, the FTC filed a complaint against Automators AI (“Automators”) alleging that certain of Automators’ claims about its AI tools were unfounded and caused consumer harm in violation of Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices. Automators had claimed that its AI tools could help clients generate thousands of dollars in monthly profits; in fact, the majority of clients did not even recoup their initial investments. The FTC asserted that this disparity demonstrated that Automators’ claims about its AI product were misleading. State attorneys general have similar consumer protection enforcement powers and can likewise pursue instances of alleged “AI washing.”

The SEC has the power to investigate and bring enforcement actions involving material misstatements or omissions of fact in the offer or sale of any security or in connection with the purchase or sale of any security. Such misstatements or omissions may arise when advertising and marketing-related communications by an SEC registrant (a public company, investment adviser, or broker-dealer), or statements made in a private securities offering, do not align with the entity’s actual practices. Given Chair Gensler’s analogy between greenwashing and AI washing, the SEC’s ESG rulemaking and enforcement provide instructive context for its potential approach to AI washing. In September 2023, the SEC adopted amendments to the “Names Rule” under the Investment Company Act of 1940 to strengthen the regulatory framework governing registered funds that specialize in certain sectors or investment themes, “such as the incorporation of one or more Environmental, Social, or Governance factors.” The SEC has also charged firms for misstatements and omissions regarding ESG considerations in investment decisions and for misleading ESG statements in public disclosures and filings.

In light of its concerns over AI marketing, among other things, the SEC has identified AI as one of its examination priorities for 2024, and the Division of Examinations recently conducted an examination sweep regarding investment advisers’ use of AI technologies, focused on registrants’ advertising, disclosures, marketing, and promotion of AI.

Federal government contractors are also likely to be subject to scrutiny for their AI-related marketing to government agencies under the False Claims Act (the “FCA”), which we have discussed previously in the context of cybersecurity.

Litigation Risks

In addition to regulatory scrutiny, companies also face the prospect of civil litigation over misrepresentations relating to their use of AI. For example, Zillow is facing a securities fraud class action brought by shareholders who allege that they were misled by the company’s overly optimistic claims about its house-pricing Zillow Offers tool. That system leveraged AI to estimate home prices and make cash offers for certain properties, but it allegedly proved unreliable at forecasting prices, in part because of pandemic-driven changes in market dynamics, resulting in significant losses for the company and a decline in its stock price. The case is currently set for a 10-day jury trial in June 2025.

In addition to shareholders, consumers can bring class actions under tort, contract, and other bodies of law for alleged losses resulting from overselling AI. For example, consumers have brought a class action against a self-driving car company, alleging negligence, negligent misrepresentation, fraud, breach of warranty, unjust enrichment, and violations of California false advertising and consumer protection laws, based on alleged misrepresentations about the capabilities of the company’s autonomous driving technology. Similarly, in 2022, insurer Lemonade settled a consumer class action asserting breach of contract and consumer protection claims for allegedly using AI to collect, store, and analyze customer biometric information in violation of its terms of service, which provided that the company would not do so. Companies should expect that commercial litigation risk will continue as the use of AI matures.

Takeaways

With the increased regulatory and litigation focus on the marketing of AI, companies should consider the following measures to limit the regulatory, litigation, and reputational risks associated with AI washing:

  • Defining AI for Marketing Purposes. Not all models or algorithms qualify as AI. Consider developing a definition of AI for marketing purposes that limits the risk that simple automated systems will be mischaracterized as AI in advertising or other public-facing materials.
  • Training. Provide training for internal marketing and business development personnel on the legal and reputational risks of overselling AI capabilities.
  • Vetting. Consider implementing a protocol requiring any public statement or disclosure made about the company’s AI systems to be reviewed by someone from the company’s legal and technology functions to ensure that the statements are accurate.
  • Monitoring. For existing public statements or disclosures about the company’s AI systems, consider periodically reviewing the associated AI product or service to ensure that the public statement or disclosure remains accurate, and making any appropriate adjustments.
  • Vendor Considerations. For AI systems that are provided to the company by a vendor, be careful not to merely repeat vendor claims about the AI system without taking appropriate steps to verify each claim’s accuracy.
  • Risk Assessments. For high-risk AI systems, consider conducting impact assessments to identify foreseeable risks and how best to mitigate them, and then consider disclosing those risks in external statements about the AI systems.

The Debevoise AI Regulatory Tracker (DART) is now available for clients to help them quickly assess and comply with their current and anticipated AI-related legal obligations, including municipal, state, federal, and international requirements.

The cover art used in this blog post was generated by DALL-E.

Authors

Charu A. Chandrasekhar is a litigation partner based in the New York office and a member of the firm’s White Collar & Regulatory Defense and Data Strategy & Security Groups. Her practice focuses on securities enforcement and government investigations defense, as well as cybersecurity regulatory counseling and defense.

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Paul D. Rubin is a corporate partner based in the Washington, D.C. office and is the Co-Chair of the firm’s Healthcare & Life Sciences Group and the Chair of the FDA Regulatory practice. His practice focuses on FDA/FTC regulatory matters. He can be reached at pdrubin@debevoise.com.

Kristin Snyder is a litigation partner and member of the firm’s White Collar & Regulatory Defense Group. Her practice focuses on securities-related regulatory and enforcement matters, particularly for private investment firms and other asset managers.

Melissa Runsten is a corporate associate and a member of the Healthcare & Life Sciences Group. Her practice focuses on FDA/FTC regulatory matters and includes the representation of drug, device, food, cosmetic and other consumer product companies. She can be reached at mrunsten@debevoise.com.

Gabriel Kohan is a litigation associate at Debevoise and can be reached at gakohan@debevoise.com.

Jarrett Lewis is an associate and a member of the Data Strategy & Security Group. He can be reached at jxlewis@debevoise.com.