As we approach the end of the year, here are the Top 10 Artificial Intelligence (“AI”) posts on the Debevoise Data Blog in 2023 by page views. If you are not already a Blog subscriber, click here to sign up.
In April 2023, the European Parliament reached a political deal to advance the European Union’s groundbreaking AI Act (the “EU AI Act”), bringing Europe one step closer to enacting the world’s first comprehensive AI regulatory framework. While the EU is poised to become the first jurisdiction to take this step, other countries are not far behind. In late 2022 and early 2023, the U.S., Canada, Brazil, and China all introduced measures that illustrate their respective goals and approaches to regulating AI. In this post, we provide an overview of these legislative developments, highlighting key similarities, differences, and trends across the countries’ approaches, and offer a few considerations for companies deploying significant AI systems.
2. The Value of Having AI Governance – Lessons from ChatGPT (April 5, 2023)
We had previously written about how many companies were implementing a pilot program for ChatGPT, as a follow-up to our article about companies adopting a policy for the work-related uses of generative AI tools like ChatGPT, Bard and Claude (which we collectively refer to as “Generative AI”). We discussed how a pilot program often involves designating a small group of employees who test potential Generative AI use cases and then make recommendations to a cross-functional AI governance committee, which determines (1) which use cases are prohibited and which are permitted, and (2) for the permitted use cases, what restrictions, if any, should apply. In this article, we discuss how running a Generative AI pilot program or adopting a broader Generative AI policy has taught companies several lessons about AI adoption in general.
3. The Final Colorado AI Insurance Regulations: What’s New and How to Prepare (October 3, 2023)
On September 21, 2023, the Colorado Division of Insurance (the “DOI”) released its Final Governance and Risk Management Framework Requirements for Life Insurers’ Use of External Consumer Data and Information Sources, Algorithms, and Predictive Models (the “Final Regulation”). In this post, we discuss the Final Regulation, how it differs from the Draft Regulation, and what companies should be doing now to prepare for compliance. We also include a redline comparing the Final and Draft Regulations.
4. Eight GDPR Questions when Adopting Generative AI (October 10, 2023)
As businesses adopt Generative AI tools, they need to ensure that their governance frameworks address not only AI-specific regulations such as the forthcoming EU AI Act, but also existing regulations, including the EU and UK GDPR. In this post, we outline eight questions businesses may want to ask when developing or adopting new Generative AI tools or when considering new use cases involving GDPR-covered data. At their core, these questions highlight the importance of integrating privacy-by-design and privacy-by-default principles into Generative AI development and use cases (see here).
5. The Top Eight AI Adoption Failures and How to Avoid Them (June 14, 2023)
Over the past three years, we have observed many companies in a wide range of sectors adopt Artificial Intelligence (“AI”) applications for a host of promising use cases. In some instances, however, those efforts have ended up being less valuable than anticipated—and in a few cases, were abandoned altogether—because certain risks associated with adopting AI were not properly considered or addressed before or during implementation. In this post, we examine how these risks can lead to AI adoption “failure” and identify ways companies can mitigate them to achieve their goals when implementing AI applications.
6. Does Your Company Need a ChatGPT Pilot Program? Probably. (March 20, 2023)
We previously wrote about how many companies probably need a policy for Generative AI tools like ChatGPT, Bard and Claude (which we collectively refer to as “ChatGPT”). We discussed how employees were using ChatGPT for work and the various risks of allowing all employees at a company to use ChatGPT without any restrictions. We then provided some suggestions for ways that companies could reduce these risks. Since then, it has become clearer that many companies are not ready to implement a formal ChatGPT policy, for reasons we discuss here. In this post, we describe how some companies are launching multi-week Generative AI pilot programs that effectively serve as an interim ChatGPT policy.
7. NYC’s AI Hiring Law Is Now Final and Effective July 5, 2023 (April 12, 2023)
The New York City Department of Consumer and Worker Protection (the “DCWP”) has adopted final rules (the “Final Rules”) regulating the use of artificial intelligence for hiring practices. The DCWP’s Automated Employment Decision Tool Law (the “AEDT Law” or the “Law”) requires covered employers to conduct annual independent bias audits and to post public summaries of those results. The DCWP released an initial set of proposed rules on September 23, 2022, and held a public hearing on November 4, 2022. Due to the high volume of comments expressing concern over the Law’s lack of clarity, the DCWP issued a revised set of proposed rules on December 23, 2022, and held a second public hearing on January 23, 2023. The Final Rules largely adopt the December proposal with a few notable changes addressing concerns raised during the second public hearing. In this post, we discuss the current state of the AEDT Law and highlight how the final changes impact employers’ compliance obligations.
8. Achieving Sensible AI Regulation (July 17, 2023)
The proliferation of AI tools and rapid pace of AI adoption have led to calls for new regulation at all levels. President Biden recently said “[w]e need to manage the risks [of AI] to our society, to our economy, and our national security.” The Senate Judiciary Subcommittee on Privacy, Technology and the Law recently held a hearing on “Rules for Artificial Intelligence” to discuss the need for AI regulation, while Senate Majority Leader Schumer released a strategy to regulate AI. The full benefits of AI can only be realized by ensuring that AI is developed and used responsibly, fairly, securely, and transparently to establish and maintain public trust. But it is critical to find the right mix of high-level principles, concrete obligations, and governance commitments for effective AI regulation. In this post, we discuss components of a practical, risk-based approach to AI regulation that imposes strong accountability measures for high-risk AI uses while preserving the flexibility industry needs to innovate.
9. SEC Proposes Rule to Eliminate or Neutralize Conflicts of Interest in the Use of “Predictive Data Analytics” Technologies (August 14, 2023)
On July 26, 2023, the U.S. Securities and Exchange Commission issued proposed rules (the “Proposed Rules”) that would require broker-dealers and investment advisers (collectively, “firms”) to evaluate their use of predictive data analytics (“PDA”) and other covered technologies in connection with investor interactions and to eliminate or neutralize certain conflicts of interest associated with such use. The Proposed Rules also contain amendments to rules under the Securities Exchange Act of 1934 and the Investment Advisers Act of 1940 that would require firms to have policies and procedures to achieve compliance with the rules and to make and maintain related records. In this post, we discuss the scope of the Proposed Rules and provide a summary of key provisions, as well as some key implications regarding the scope and application of the rules if adopted as proposed. The SEC’s Fall 2023 Regulatory Agenda, posted on December 6, 2023, indicates that the SEC plans to issue final rules in April 2024.
10. UK Financial Regulators Publish Response to AI Consultation – Seven Takeaways (October 31, 2023)
On 26 October 2023, the Bank of England, the Prudential Regulation Authority (“PRA”) and the Financial Conduct Authority (the “FCA” and, collectively, the “UK Financial Authorities”) published FS2/23 on Artificial Intelligence and Machine Learning (the “Response Paper”). It summarizes participants’ responses to the October 2022 AI discussion paper (DP5/22, the “Discussion Paper”), which outlined the UK Financial Authorities’ proposed approach to AI regulation. In this post, we discuss seven key takeaways from the Response Paper.