Public companies increasingly face the risk of securities class action litigation based on allegedly false or misleading statements about their use of AI.  We have previously written about the legal risks that companies face if they oversell the capabilities of their AI systems, a practice known as “AI washing.” In particular, the SEC has stated that AI is one of its examination priorities for 2024, and it recently brought its first AI-related fraud cases.

Now, AI-related securities class actions are beginning to emerge.  For example, on February 21, 2024, shareholders brought a securities class action against Innodata Inc., its CEO, and other corporate officers for allegedly violating Sections 10(b) and 20(a) of the Securities Exchange Act of 1934 and Rule 10b-5 thereunder.  The complaint alleges that Innodata falsely represented to investors and advertised that it used AI-powered operations for data preparation, when it actually relied on offshore manual labor, not proprietary AI technology, to digitize medical records and insurance data, and that it underfunded its AI research and development.  The complaint is based on assertions in a short seller’s research report whose publication coincided with a drop of more than 30% in the company’s stock price, which undoubtedly drew attention from the plaintiffs’ bar.

Moreover, as we previously wrote, Zillow is facing a securities class action lawsuit for allegedly misleading shareholders with overly optimistic claims about its Zillow Offers home-pricing tool.  That tool used AI to estimate home prices and make cash offers for certain properties.  However, it allegedly proved unreliable in forecasting home prices, in part because of changes in market dynamics during the pandemic, which allegedly resulted in significant losses for the company, the wind-down of the Zillow Offers business, and a decline in the company’s stock price.  The lead plaintiff’s motion for class certification is pending, and the case is currently set for a 10-day jury trial in June 2025.

AI-related securities class actions are likely to become more frequent as public companies increasingly disclose how they use AI in their public filings.  Shareholder plaintiffs can scrutinize these disclosures in hindsight and contend that the company did not accurately characterize its AI technologies or their use, for example by failing to disclose an AI use case that actually existed or by omitting an associated risk of generative AI such as quality control, privacy, IP, data-use limitations, cybersecurity, bias, or transparency.  Given the likely enhanced scrutiny of AI disclosures by future shareholder plaintiffs, companies should carefully consider whether to make such AI-related disclosures and, if so, how to frame them to avoid claims that those disclosures are misleading.

The current excitement over AI has many similarities to the rise of dot-com stocks in the late 1990s.  When that bubble burst in the early 2000s, it resulted in a wave of class action securities cases against tech companies, as well as other market participants who had publicly promoted them.  Like many of the dot-com companies, some publicly traded AI companies today have significant valuations without substantial revenues.  Should the AI bubble also burst, companies, officers, and analysts may face a similar spate of securities fraud class action lawsuits from shareholders.

What Might Be Considered Misleading?

To state a claim for securities fraud, a private plaintiff must allege (among other elements) an intentional or reckless misstatement or omission of material fact.  In considering what kinds of statements about AI use could be viewed as misleading within the meaning of federal securities laws, companies should focus on recent statements by Gary Gensler, Chair of the Securities and Exchange Commission, at Yale Law School:

As AI disclosures by SEC registrants increase, the basics of good securities lawyering still apply. Claims about prospects should have a reasonable basis, and investors should be told that basis. When disclosing material risks about AI—and a company may face multiple risks, including operational, legal, and competitive—investors benefit from disclosures particularized to the company, not from boilerplate language.

Chair Gensler further stated that AI washing may violate securities laws, signaling a focus on statements that may oversell a company’s AI capabilities or practices.

The FTC’s recent guidance related to AI disclosures is also instructive.  The FTC stated that it may use Section 5 of the FTC Act to bring enforcement actions against companies making deceptive AI-related claims, including companies that:

  • exaggerate what their AI systems can actually do;
  • make claims about their AI systems that do not have scientific support or apply only under limited conditions;
  • make unfounded promises that their AI systems do something better than non-AI systems or a human;
  • fail to identify known likely risks associated with their AI systems; or
  • claim that one of their products or services utilizes AI when it does not.

Takeaways for Mitigating Securities Fraud Class Action Risk

Public companies may want to consider embedding the following AI governance practices into their existing disclosure processes to limit the risk of possible securities fraud class actions:

  • Define AI Consistently and Truthfully. To avoid claims of misrepresenting AI or AI usage, consider creating a definition of AI that is used for both internal and external purposes and aligns with the company’s actual AI capabilities and use cases.  Doing so will mitigate the risk that the company will characterize something as AI externally that is not considered AI internally, a misalignment that could be interpreted as misleading.
  • Ensure Appropriate Technical and Legal Review of All Current and Proposed Public Statements About AI. This review should involve individuals with AI expertise and be focused on ensuring that disclosures are accurate, can be substantiated, and do not exaggerate or overpromise.
  • Maintain Robust Risk Disclosures. Precautionary risk disclosures regarding AI or its use may reduce securities litigation risk, for example by disclosing the risk that an AI system may periodically hallucinate or otherwise fail to work properly.  In securities class actions arising from cyber incidents and data loss, for instance, companies have successfully argued that past statements regarding their cybersecurity programs were not misleading because their SEC risk disclosures cautioned that their systems were vulnerable to theft, loss, or fraudulent use of company and customer data and were susceptible to breaches, including because the companies had experienced security incidents in the past (see, e.g., In re Marriott Int’l, Inc., 31 F.4th 898, 903 (4th Cir. 2022)).
  • Conduct AI Risk Assessments. For high-risk AI systems, consider conducting impact assessments to determine foreseeable risks and how best to mitigate those risks, and then consider disclosing those risks in external statements about the AI systems.

***


The cover art used in this blog post was generated by DALL-E.

Author

Maeve O’Connor is Co-Chair of the firm’s Securities Litigation Practice and Chair of the firm’s Insurance Litigation Practice, and she spent six years as a member of the firm’s Management Committee. She has significant experience defending securities litigation and representing financial services companies in a range of litigation and regulatory matters.

Author

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Author

Jim Pastore is a Debevoise litigation partner and a member of the firm’s Data Strategy & Security practice and Intellectual Property Litigation Group. He can be reached at jjpastore@debevoise.com.

Author

Kristin Snyder is a litigation partner and member of the firm’s White Collar & Regulatory Defense Group. Her practice focuses on securities-related regulatory and enforcement matters, particularly for private investment firms and other asset managers.

Author

Charu A. Chandrasekhar is a litigation partner based in the New York office and a member of the firm’s White Collar & Regulatory Defense and Data Strategy & Security Groups. Her practice focuses on securities enforcement and government investigations defense and cybersecurity regulatory counseling and defense.

Author

Matthew Kelly is a litigation counsel based in the firm’s New York office and a member of the Data Strategy & Security Group. His practice focuses on advising the firm’s growing number of clients on matters related to AI governance, compliance and risk management, and on data privacy. He can be reached at makelly@debevoise.com.

Author

Gabriel Kohan is a litigation associate at Debevoise and can be reached at gakohan@debevoise.com.

Author

Alexandra Mogul is a corporate associate and a member of the firm’s Financial Institutions Group. Ms. Mogul’s practice focuses on consumer finance and banking regulatory, transactional and compliance matters. She regularly advises banks, FinTechs, industry trade associations and other firms on a variety of regulatory, transactional and compliance matters relating to federal and state banking regulations, Consumer Financial Protection Bureau regulations and guidance, as well as related state consumer financial protection and licensing requirements. Ms. Mogul’s practice also includes advising financial institutions on anti-money laundering, broker-dealer, cybersecurity and data privacy issues. She can be reached at anmogul@debevoise.com.

Author

Josh Goland is an associate in the Litigation Department.