Registered investment advisers (“RIAs”) have swiftly embraced AI for investment strategy, market research, portfolio management, trading, risk management, and operations.  In response to the exploding use of AI across the securities markets, Chair Gensler of the Securities and Exchange Commission (“SEC”) has declared that he plans to make securities fraud in connection with AI disclosures an enforcement priority and has warned market participants against “AI washing.”  Chair Gensler’s statements reflect the SEC’s sharpening scrutiny of AI usage by registrants.  The SEC’s Division of Examinations included AI among its 2024 examination priorities and launched a widespread AI sweep of RIAs focused on AI in connection with advertising, disclosures, investment decisions, and marketing.  The SEC has also previously charged an RIA in connection with misleading Form ADV Part 2A disclosures regarding the risks associated with its use of an AI-based trading tool.

Early AI disclosures by registrants typically included only generalized references to the use of aggregated or “big” data, algorithmic analysis, and machine learning.  But with the rapid and widespread adoption of AI, disclosures have begun to include more specific references to AI tools and models and to the use of AI to make predictions, anticipate trends, develop investment themes, and inform trading decisions.  As AI tools continue to multiply and AI adoption continues to expand, we expect to see more Part 2A disclosures specifically addressing AI, as well as increased SEC interest in testing the accuracy of the statements in those disclosures.

Accordingly, RIAs preparing to file their annual Form ADV amendments should anticipate enhanced examination and enforcement scrutiny of their Part 2A disclosures about AI.  RIAs drafting Part 2A AI disclosures should keep the following best practices in mind:

  • Be clear on what you do (and don’t) use AI for: AI usage varies widely among RIAs as a function of each adviser’s business model, investment strategy, and asset allocation.  For instance, a private equity manager’s use of AI may differ significantly from the AI used by a hedge fund manager for more liquid investments.  For this reason, there is no “one-size-fits-all” AI disclosure, and RIAs must be able to articulate their AI use cases accurately, avoiding both understatement and overstatement.  For example, if an adviser’s use of AI is limited to operational efficiency enhancements and AI is not used for investment-related decisions, the adviser should not aspirationally overstate its use of AI to cover unrelated or nonexistent uses such as trading or investment research.  Similarly, if an RIA starts deploying AI in any way to support trading or investment decisions, it should consider updating its existing disclosures relating to its investment process.
  • Avoid using hypothetical language for actual AI practices: Describing an actual AI use case as though it were merely a possibility can give rise to both examination and enforcement scrutiny.  RIAs should avoid using hypothetical or qualifying language like “may” to describe AI use cases that actually exist.  For instance, firms that use AI to help make investment decisions should avoid purely hypothetical descriptions of that usage in their disclosures.  The SEC has brought numerous cases against RIAs for using hypothetical language to describe actual practices, and these enforcement actions will serve as a template for future SEC inquiries involving AI.  Given the SEC’s likely enhanced scrutiny of AI disclosures, RIAs should carefully consider whether to include hedged AI disclosures and, if so, how to frame them to avoid claims that they are misleading.  In addition, an RIA that uses such hedging language cannot “set it and forget it”; it should consider updating that language in future filings if its AI use moves from theoretical to actual.
  • Understand and accurately disclose the risks associated with AI use: As more firms adopt AI (including generative AI) in their core business functions, well-known risks associated with generative AI remain, including quality control, privacy, IP, data-use limitations, cybersecurity, bias, and transparency.  Accordingly, disclosures should be clear, comprehensive, and precise about these risks.  Just as the SEC has charged firms for describing as merely hypothetical risks that had already materialized, RIAs should exercise caution and accuracy in disclosing AI risks that have already emerged.

The cover art used in this blog post was generated by DALL-E.

Authors

Charu A. Chandrasekhar is a litigation partner based in the New York office and a member of the firm’s White Collar & Regulatory Defense and Data Strategy & Security Groups. Her practice focuses on securities enforcement and government investigations defense, as well as cybersecurity regulatory counseling and defense.

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Kristin Snyder is a litigation partner and member of the firm’s White Collar & Regulatory Defense Group. Her practice focuses on securities-related regulatory and enforcement matters, particularly for private investment firms and other asset managers.

Julie M. Riewe is a litigation partner and a member of Debevoise's White Collar & Regulatory Defense Group. Her practice focuses on securities-related enforcement and compliance issues and internal investigations, and she has significant experience with matters involving private equity funds, hedge funds, mutual funds, business development companies, separately managed accounts and other asset managers. She can be reached at jriewe@debevoise.com.

Marc Ponchione, a partner in Debevoise's Investment Management Group, focuses on advising financial services firms on various regulatory, compliance and transactional issues arising in the asset management industry. He can be reached at mponchione@debevoise.com.

Matthew Kelly is a litigation counsel based in the firm’s New York office and a member of the Data Strategy & Security Group. His practice focuses on advising the firm’s growing number of clients on matters related to AI governance, compliance and risk management, and on data privacy. He can be reached at makelly@debevoise.com.

Sheena Paul is a counsel in the Investment Management Group’s U.S. regulatory practice, based in the firm’s Washington, D.C. office. Ms. Paul focuses her practice on providing regulatory advice to investment managers, with a particular focus on private equity clients. She works closely with the firm’s other practices on regulatory advice related to domestic and cross-border corporate and capital markets transactions, and enforcement matters. She can be reached at spaul@debevoise.com.

Mengyi Xu is an associate in Debevoise's Litigation Department and a Certified Information Privacy Professional (CIPP/US). As a member of the firm’s interdisciplinary Data Strategy & Security practice, she helps clients navigate complex data-driven challenges, including issues related to cybersecurity, data privacy, and data and AI governance. Mengyi’s cybersecurity and data privacy practice focuses on incident preparation and response, regulatory compliance, and risk management. She can be reached at mxu@debevoise.com.

Ned Terrace is an associate in the Litigation Department. He can be reached at jkterrac@debevoise.com.