As we approach the end of the year, here are the Top 11 Artificial Intelligence (“AI”) posts on the Debevoise Data Blog in 2024 by page views. If you are not already a Blog subscriber, click here to sign up.
- Good AI Vendor Risk Management Is Hard, But Doable (September 26, 2024)
As companies slowly ramp up the depth and breadth of their AI adoption, one of the most difficult challenges they face is managing third-party risk. Companies often struggle when deciding how to mitigate – through diligence, contractual conditions, or other means – the risks presented by AI vendors and other third parties. This post surveys key challenges associated with AI vendor risk management and provides tips for designing an effective AI vendor risk management program.
Companies developing internal AI policies often face challenges in deciding how to define AI and, relatedly, when their AI governance and compliance programs should apply to models that fall outside their chosen definition. In this post, we discuss the risks of borrowing ambiguous definitions directly from regulations and outline four alternative approaches that companies may find useful.
- DOJ Updates Guidance on Corporate Compliance Programs to Include AI Risk Management (September 25, 2024)
On September 23, 2024, the U.S. Department of Justice updated its “Evaluation of Corporate Compliance Programs” guidance to federal prosecutors (the “ECCP”) to address AI risk management, among other subjects. In this post, we discuss how the DOJ’s guidance differs from prior corporate compliance guidelines and what companies may want to consider in light of those differences.
- Risk of AI Abuse by Corporate Insiders Presents Challenges for Compliance Departments (February 21, 2024)
As companies adopt AI tools in their everyday business practices, the risk of misuse and abuse by employees rises. In this post, we discuss scenarios in which employees may abuse AI technology – such as insider use of deepfakes, information barrier evasion, and model manipulation – and make recommendations for mitigating the risks presented by those scenarios.
- In 2024, the Biggest Legal Risk for Generative AI May Be Hype (January 9, 2024)
Since at least 2023, the FTC and SEC have been actively investigating and enforcing against deceptive AI marketing claims, focusing on claims that exaggerate AI system capabilities, lack substantiation, or draw misleading comparisons to non-AI alternatives. Companies therefore face legal risks when overselling AI capabilities, a practice termed “AI washing.” This post discusses the regulatory and litigation risks of inaccurately representing AI and provides practical steps companies can take to avoid them.
In July 2024, the NYDFS released guidance on the use of AI and external data in insurance underwriting and pricing, which modified prior proposed language on the topic. In this post, we discuss the NYDFS guidance, consider how it differs from earlier proposed language, and compare its scope to recently enacted Colorado insurance regulations.
- Mitigating AI Risks for Customer Service Chatbots (April 16, 2024)
While chatbots are a longstanding feature of the customer service landscape, companies’ use of generative AI to support their chatbots is a potential new source of risk. Generative AI chatbots can present risks under existing UDAP and anti-discrimination laws, as well as novel risks under new legislation, such as Utah’s Artificial Intelligence Policy Act, enacted in March 2024. This post explores some of these potential sources of legal liability and recommends risk mitigation practices for deploying AI to support customer service chatbots.
- Recently Enacted AI Law in Colorado: Yet Another Reason to Implement an AI Governance Program (June 11, 2024)
As AI usage grows, so does regulatory focus on AI governance. Colorado’s passage of Senate Bill 24-205 (the “Colorado AI Law”) is a prime example of the new legal obligations companies should consider when implementing and improving their own AI governance programs. This post explains the scope of the Colorado AI Law, including its focus on so-called “high-risk” AI systems, and considers the obligations it imposes alongside comparable requirements under the EU AI Act. The discussion is relevant even for firms outside Colorado: while Colorado was the first state to enact AI regulation of this kind, it is already clear that it will not be the last.
- Preparing for AI Whistleblowers (April 24, 2024)
This post describes an emerging risk for companies: AI whistleblowers. Increased regulatory scrutiny of AI use, coupled with record-breaking whistleblower activity, has set the stage for increased AI enforcement by agencies such as the SEC and the DOJ. Amid this changing landscape, companies should consider how to update relevant AI policies to account for and address potential risks related to AI whistleblowers and regulatory enforcement.
- Guidelines on the Use of Generative AI Tools by Professionals from the American Bar Association (August 5, 2024)
At the end of July, the American Bar Association (the “ABA”) released Formal Opinion 512 (the “Opinion”), which provides guidance on the ethical use of generative AI by lawyers. The Opinion interprets existing ethical obligations and applies them to the context of AI use, covering areas such as improving lawyer training on AI, updating engagement letters, disclosing AI involvement to clients and courts, and developing billing models for AI-assisted legal services. After summarizing the ABA guidelines, this post recommends specific actions that may help support compliance.
After initial predictions of sweeping regulation, a narrowed EU AI Act officially entered into force on August 1, 2024, with the majority of its substantive requirements taking effect two years later, in August 2026. This post outlines the scope of the Act and its high-level requirements for different kinds of AI systems, while cautioning that ongoing uncertainty around the details should inform companies’ current preparations for compliance.
****
To subscribe to the Data Blog, please click here.
The cover art used in this blog post was generated by ChatGPT.