The proliferation of AI tools and the rapid pace of AI adoption have led to calls for new regulation at all levels. President Biden recently said “[w]e need to manage the risks [of AI] to our society, to our economy, and our national security.” The Senate Judiciary Subcommittee on Privacy, Technology, and the Law held a hearing on “Rules for Artificial Intelligence” to discuss the need for AI regulation, while Senate Majority Leader Schumer released a strategy to regulate AI.

The full benefits of AI can be realized only if AI is developed and used responsibly, fairly, securely, and transparently, so as to establish and maintain public trust. Effective AI regulation, however, requires the right mix of high-level principles, concrete obligations, and governance commitments.

As the leading industry trade association representing broker-dealers, investment banks, and asset managers operating in the U.S. and global capital markets, SIFMA has proposed a practical, risk-based approach to regulating AI that contains strong accountability measures for high-risk AI uses while providing flexibility to allow industry to innovate. At its core, the SIFMA approach would require companies, under the supervision of their sectoral regulators, to (1) identify how AI is being used, (2) determine which AI uses pose the highest risks, (3) have qualified persons or committees at the company review high-risk AI applications and determine whether the risks are too high, and, if so, (4) provide meaningful mitigation steps to reduce those risks to an acceptable level or require that the AI application be abandoned.

To achieve these objectives, any AI regulation should include the following components:

  1. Scoping. Companies should determine which AI applications fall within the scope of the framework when building their governance programs.
  2. Inventory. Companies should prepare and maintain an inventory of their AI applications with sufficient detail to allow them to be risk rated.
  3. Risk Rating. Companies should have a process for identifying their highest-risk AI applications. The risks considered would include legal, regulatory, operational, reputational, contractual, discrimination, cybersecurity, privacy, and confidentiality risks, as well as the risks of consumer harm and lack of transparency.
  4. Responsible Persons or Committees. Companies should designate one or more individuals or committees responsible for identifying and assessing their highest-risk AI applications and for either accepting those risks, mitigating them, or abandoning the particular AI application because the risks are too high.
  5. Training. Companies should develop training programs to ensure that stakeholders are able to identify the risks associated with their AI use and the various options for reducing risk.
  6. Documentation. Companies should maintain documentation sufficient for an audit of the risk assessment program.
  7. Audit. Companies should conduct periodic audits that focus on the effectiveness of the risk assessment program, rather than on individual AI applications. Companies should be permitted to determine how and when audits should be conducted, and who can conduct those audits.
  8. Third-Party Risk Management. Companies should use the same risk-based principles that are applied to in-house AI applications to evaluate third-party AI applications, and mitigate those risks through diligence, audits, and contractual terms.

This proposed framework could be incorporated into existing governance and compliance programs in related areas such as model risk, data governance, privacy, cybersecurity, vendor management, and product development, with further guidance from applicable sectoral regulators as needed. Further, having qualified persons identify, assess, and mitigate the risks associated with the highest-risk AI uses improves accountability, supports appropriate resource allocation, and builds employee buy-in through clearly defined and fair processes.

Given the rapid rate of AI adoption and its potential societal impact, policymakers are facing increased pressure to enact AI regulation. SIFMA’s risk-based approach would provide a valuable, flexible framework through which companies and their sectoral regulators can build tailored AI governance and compliance programs that ensure accountability and trust without stifling innovation or wasting time or resources on low-risk AI applications.

Debevoise & Plimpton assisted SIFMA in preparing its response to the National Telecommunications and Information Administration Request for Comment on AI Accountability Policy, which is the basis for this blog post.

The authors would like to thank Debevoise Summer Law Clerk Esther Tetruashvily for her contribution to this blog post.

SIFMA’s version of this blog post is available here.


The cover art used in this blog post was generated by DALL-E.

Authors

Melissa MacGregor is Deputy General Counsel and Corporate Secretary for SIFMA.


Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy, and artificial intelligence matters. He can be reached at agesser@debevoise.com.


Matthew Kelly is a litigation counsel based in the firm’s New York office and a member of the Data Strategy & Security Group. His practice focuses on advising the firm’s growing number of clients on matters related to AI governance, compliance and risk management, and on data privacy. He can be reached at makelly@debevoise.com.


Stephanie D. Thomas is an associate in the Litigation Department and a member of the firm’s Data Strategy & Security Group and the White Collar & Regulatory Defense Group. She can be reached at sdthomas@debevoise.com.


Ned Terrace is a law clerk in the Litigation Department.