  1. Why Agentic AI Often Fails and the Enduring Value of Human Judgment (11/4/2025)

We’ve been doing a lot of work recently on agentic AI workflows. In this post, we share some of our thinking on how to assess their risks and benefits. At least for now, while AI can be very helpful in automating certain discrete aspects of professional services, it cannot replace entire jobs because, for the successful completion of most complex workflows, the AI agents still need us more than we need them.

  2. Top 10 Reasons NOT to Use AI (2/19/2025)

While the media is constantly urging us to use more AI, we offer our Top 10 observations for when businesses should decide not to use AI, including when the acceptable error rate is essentially zero, and when learning the subject is as important as the content being created. Throughout this post, we supplement our observations with the lessons we’ve learned through years of counseling clients on AI adoption.

  3. In 2025, One of the Biggest AI Risks Is Not Letting Employees Use AI – Lessons from Off-Channel Communications (7/7/2025)

While allowing employees to use generative AI (“GenAI”) comes with significant risks, one of the biggest AI risks in 2025 is not letting employees use GenAI tools at all. In an era when powerful GenAI tools are available on personal devices, outright bans may drive usage underground—mirroring how firms once tried to ban mobile messaging, only to push it into unmonitored channels. In this Debevoise Client Update, we discuss lessons for AI adoption drawn from our experience with off-channel text communications, using AI tools that record, transcribe, and summarize meetings (“AI meeting tools”) as an illustrative example of the risks of being too conservative with GenAI adoption.

  4. Features Available in ChatGPT Enterprise for Lawyers and How to Use Them (7/15/2025)

In this post, we provide a broad overview of the tools and features available in ChatGPT Enterprise and how we have found them useful in our work. We highlight capabilities such as collaborative drafting in Canvas, the Deep Research mode that delivers multi-step memos with citations, projects and chat-sharing for team workflows, and the use of memory and custom GPTs to incorporate specific preferences.

  5. Eight Ways to Reduce the Biggest AI Risk in 2025: Workslop (10/5/2025)

Until recently, the primary concern over the use of AI for work was confidentiality. But with the widespread availability of enterprise-grade AI tools that have strict cybersecurity controls and do not train on inputs or reference data, many of those concerns have been addressed. Now, the biggest challenge associated with AI adoption is probably quality control, and in particular a type of AI output known as workslop: content generated by AI for a work-related task that appears authoritative and well-researched but actually contains errors, lacks substance, merely repackages already-stated concepts in different words, or is not fit for purpose. In this post, we provide eight concrete steps to reduce the risks of workslop, including assuming ownership of AI-assisted content and requiring expert sign-off and disclosure of AI’s role.

  6. An Employee Just Uploaded Sensitive Data to a Consumer AI Tool – Now What? (4/16/2025)

Most companies have implemented protocols for when an employee emails confidential information to the wrong person. A new version of that problem arises when an employee uploads sensitive information to a consumer (i.e., not enterprise) AI tool, which gives rise to important questions, such as whether the data can be clawed back or deleted, whether humans at the AI provider can view it, and whether any contractual or regulatory notification obligations have been triggered. In this post, we address these questions and offer best practices for how to prepare for such events—and how to respond when they happen.

  7. CIPA Litigation: Trends Regarding Tracking Technology and AI (6/4/2025)

Many businesses use customer-tracking technology and other tools—such as pixels, session replay, software development kits (“SDKs”), and chatbots—to improve website user experiences, understand customer behavior, train their technology, and gauge the effectiveness of advertisements. Increasingly, however, these technologies present litigation risks under the California Invasion of Privacy Act (“CIPA”). In this blog post, we provide an overview of the technologies that plaintiffs most commonly target in CIPA lawsuits and measures that companies can take to mitigate their CIPA litigation risk.
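
To make the technology concrete, here is a minimal, purely illustrative sketch of how a third-party tracking pixel typically works: a script on the page requests a tiny image from an analytics vendor, and the request itself carries details of the visit to that third party. This snippet is our own, not from the post; the vendor endpoint and parameter names are invented.

```typescript
// Hypothetical illustration of a third-party tracking "pixel" (browser code).
// The vendor URL and parameter names are invented for this sketch.
function firePixel(vendorUrl: string): void {
  // Requesting a 1x1 image from the vendor's server is the "pixel";
  // the query string is what actually transmits the visit data.
  const beacon = new Image(1, 1);
  const params = new URLSearchParams({
    page: window.location.href, // the page the visitor is viewing
    ref: document.referrer,     // the page they arrived from
    ts: Date.now().toString(),  // when the visit occurred
  });
  beacon.src = `${vendorUrl}?${params.toString()}`;
}

// Every page view now results in a disclosure to the third-party vendor,
// which is the conduct CIPA plaintiffs characterize as interception.
firePixel("https://analytics.example.com/pixel.gif");
```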

  8. AI Explainability Explained: When the Black Box Matters and When It Doesn’t (7/21/2025)

No one really knows how the large language models (“LLMs”) that power GenAI tools like ChatGPT actually arrive at their answers to our queries. This is referred to as the “black box” or “explainability” problem, and it is often given as a reason why GenAI should not be used to make certain kinds of decisions, like who should get a job interview, a mortgage, a loan, insurance, or admission to a college. In this blog post, we identify several categories of GenAI decision explainability and attempt to provide a specific name for each (e.g., model explainability vs. process explainability vs. data explainability). We also explore which kinds of explainability are knowable for certain common GenAI decisions and which are not, and compare that to human decision-making. We then argue that, for most GenAI decisions, a level of explainability on par with what is expected of human decision-making is currently achievable.

  9. Agentic AI in Retail Investing: Navigating Regulatory and Operational Risk (10/29/2025)

GenAI innovations are rapidly transforming the formulation, analysis, and delivery of investment advice. Many broker-dealers and investment advisers are embracing GenAI to support one or more parts of the investment lifecycle. One new focus is agentic AI: the use of AI to complete more than one task, either in series or in parallel, without any human involvement. In this post, we explore the use of agentic AI in the investment selection process and how it could be one of the most transformative yet challenging applications of GenAI in financial services. The post describes the applicable regulatory framework and analyzes the risks of overreliance on AI in retail investment recommendations and advice without meaningful human review or transparency. We also offer strategies for mitigating those risks, including updating policies and procedures to address the use of GenAI in providing investment recommendations or advice, instituting human review checkpoints, and ensuring accurate and complete disclosures about the role of AI in the investment process.
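
As a purely illustrative sketch of the human-review-checkpoint idea (our own; every name below is a hypothetical stub, not an implementation from the post), the core pattern is a gate that holds AI-generated drafts until a licensed reviewer signs off, with the AI’s role disclosed at delivery:

```typescript
// A minimal sketch of a human-review checkpoint in an agentic advice
// pipeline. All names are hypothetical; the agent and reviewer steps
// are stubbed for illustration.
interface Recommendation {
  client: string;
  product: string;
  rationale: string;
  aiGenerated: boolean; // carried through to support accurate disclosure
}

// Stub for the agentic step: in practice, one or more AI tasks run
// in series or in parallel to produce the draft.
function draftRecommendation(client: string): Recommendation {
  return {
    client,
    product: "Diversified index fund",
    rationale: "Matches the client's stated risk tolerance and horizon.",
    aiGenerated: true,
  };
}

// Stub for the checkpoint: a real system would route the draft to a
// registered representative and block delivery until they approve it.
function humanReviewerApproves(rec: Recommendation): boolean {
  return rec.rationale.length > 0 && rec.product.length > 0;
}

function deliver(rec: Recommendation): void {
  // Disclosure of AI's role accompanies delivery, per the mitigations above.
  console.log(
    `To ${rec.client}: ${rec.product} (AI-assisted: ${rec.aiGenerated}). ${rec.rationale}`
  );
}

const draft = draftRecommendation("Retail Client A");
if (humanReviewerApproves(draft)) {
  deliver(draft);
} else {
  console.log("Draft held at the human-review checkpoint; not delivered.");
}
```

The design point is that the checkpoint sits between generation and delivery, so no AI-drafted recommendation can reach a retail client without meaningful human review.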

  10. Why Businesses Are Accelerating AI Adoption and Eight Hallmarks of Success (9/8/2025)

This year, we have observed a significant increase in AI adoption among our clients. In this post, we provide five reasons why some businesses are accelerating their use of AI and eight factors that affect whether those efforts succeed. While a strategy of caution and delay in adopting AI was prudent for many companies in 2023–24, being too careful in 2025–26 creates a significant risk of being left behind, which is why many companies that have been on the sidelines have recently decided it’s time to get into the AI game.

  11. AI Discrimination Risk in Lending: Lessons from the Massachusetts AG’s Recent $2.5 Million Settlement (7/20/2025)

Using AI to make important decisions about individuals carries a risk of bias, especially for underwriting, credit, employment, and educational admission decisions. In this post, we discuss how a July 2025 settlement by the Massachusetts Attorney General’s Office highlights the risks that can arise in AI-powered lending decisions and ways to reduce those risks.

  12. What Exactly is an “AI System” or an “AI Solution?” Debevoise’s Practical Definitions for Common AI Terms (11/23/2025)

In this post, we share the practical definitions of common AI terms that we use when assisting clients with AI adoption, so that everyone involved understands the terminology in the same way. The definitions are meant to be a starting point for a dialogue among AI practitioners on workable terms, and we are continually refining them. Successful AI adoption is extremely challenging, but having a simple, shared vocabulary for what we are all trying to do should not be.

  13. AI’s Biggest Enterprise Challenge in 2026: Contractual Use Limitations on Data (11/17/2025)

As we look to 2026, we predict that NDAs and other contractual use limitations will become a significant problem for enterprise AI adoption. In this blog post, we outline key considerations for assessing and addressing contractual restrictions on high-quality internal non-public data – an essential step towards unlocking the full value of AI.

  14. The Second Wave of EU AI Act Requirements Are in Force: Five Things Businesses Should Know (8/5/2025)

On August 2, 2025, the second wave of requirements under the EU AI Act (the “Act”) entered into force. This latest set of requirements primarily covers General Purpose AI (“GPAI”) model providers and also contains operational requirements for EU and Member State oversight and enforcement bodies. These new obligations represent a significant milestone in shaping the EU’s evolving interpretation of and approach to the Act. In this post, we highlight five key things businesses should know, including the requirements for high-risk AI systems, which come into force on August 2, 2026.

***

Summarized by Lily Schoen, Ella Han, and Caroline Moore. We used ChatGPT to help generate first drafts of the summaries. The cover art for this blog was generated by Gemini 3 Nano Banana Pro.

To subscribe to the Debevoise Data Blog, please click here.

The Debevoise STAAR (Suite of Tools for Assessing AI Risk) is a monthly subscription service that provides Debevoise clients with an online suite of tools to help them with their AI adoption. Please contact us at STAARinfo@debevoise.com for more information.

Author

Charu A. Chandrasekhar is a litigation partner based in the New York office and a member of the firm’s White Collar & Regulatory Defense and Data Strategy & Security Groups. Her practice focuses on securities enforcement and government investigations defense, as well as artificial intelligence and cybersecurity regulatory counseling and defense. Charu can be reached at cchandra@debevoise.com.

Author

Andrew J. Ceresney is a partner in the New York office and Co-Chair of the Litigation Department. Mr. Ceresney represents public companies, financial institutions, asset management firms, accounting firms, boards of directors, and individuals in federal and state government investigations and contested litigation in federal and state courts. Mr. Ceresney has many years of experience prosecuting and defending a wide range of white collar criminal and civil cases, having served in senior law enforcement roles at both the United States Securities and Exchange Commission and the U.S. Attorney’s Office for the Southern District of New York. Mr. Ceresney also has tried and supervised many jury and non-jury trials and argued numerous appeals before federal and state courts of appeal.

Author

Courtney M. Dankworth is a litigation partner who focuses her practice on internal investigations and regulatory defense, including banking enforcement actions and disputes related to financial services and consumer finance.

Author

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Author

Robert Kaplan is a litigation partner based in the firm’s Washington, D.C. office. He has significant experience with a broad range of securities-related enforcement and compliance issues, including those involving requirements affecting SEC-registered investment advisers affiliated with hedge funds, private equity funds, investment companies, mutual funds and separately managed accounts. Mr. Kaplan routinely represents these clients in matters before the SEC, state attorneys general and FINRA. He is recommended by Chambers USA (2020), and, in 2017, Securities Docket recognized him as one of the “best and brightest in securities enforcement defense.”

Author

Gordon Moodie is a partner in the firm’s New York office and a member of the Mergers & Acquisitions Group, the Private Equity Group, and the Technology, Media and Telecommunications Group, as well as the Public Company Advisory Group and the Corporate Governance practice.

Author

Jim Pastore is a Debevoise litigation partner and a member of the firm’s Data Strategy & Security practice and Intellectual Property Litigation Group. He can be reached at jjpastore@debevoise.com.

Author

Julie M. Riewe is a litigation partner and a member of Debevoise's White Collar & Regulatory Defense Group. Her practice focuses on securities-related enforcement and compliance issues and internal investigations, and she has significant experience with matters involving private equity funds, hedge funds, mutual funds, business development companies, separately managed accounts and other asset managers. She can be reached at jriewe@debevoise.com.

Author

Jeffrey L. Robins is a corporate partner and a member of the Debevoise Banking Group. His practice focuses on representing broker-dealers, swap dealers, banks, securities exchanges, industry associations and buy-side institutions in regulatory and transactional matters. He can be reached at jlrobins@debevoise.com.

Author

Kristin Snyder is a litigation partner and member of the firm’s White Collar & Regulatory Defense Group. Her practice focuses on securities-related regulatory and enforcement matters, particularly for private investment firms and other asset managers.

Author

Matthew Kelly is a litigation counsel based in the firm’s New York office and a member of the Data Strategy & Security Group. His practice focuses on advising the firm’s growing number of clients on matters related to AI governance, compliance and risk management, and on data privacy. He can be reached at makelly@debevoise.com.

Author

Johanna Skrzypczyk (pronounced “Scrip-zik”) is a counsel in the Data Strategy and Security practice of Debevoise & Plimpton LLP. Her practice focuses on advising on AI matters and privacy-oriented work, particularly related to the California Consumer Privacy Act. She can be reached at jnskrzypczyk@debevoise.com.

Author

Karen Levy is the Chief Information Officer at Debevoise and serves on the firm's AI Governance Committee.

Author

Diane C. Bernabei is an associate in the Litigation Department. She can be reached at dcbernabei@debevoise.com.

Author

Michael Bloom is an associate in the Litigation Department. He can be reached at mjbloom@debevoise.com.

Author

HJ Brehmer is a Debevoise litigation associate and a member of the Data Strategy & Security Group. Her practice focuses on cybersecurity incident preparation and response, internal investigations, civil litigation, and regulatory defense. She can be reached at hjbrehmer@debevoise.com.

Author

Suchita Mandavilli Brundage is a former associate in the Debevoise Data Strategy & Security Group.

Author

Melyssa Eigen is an associate in the Litigation Department. She can be reached at meigen@debevoise.com.

Author

Josh Goland is an associate in the Litigation Department.

Author

Martha Hirst is an associate in Debevoise's Litigation Department based in the London office. She is a member of the firm’s White Collar & Regulatory Defense Group, and the Data Strategy & Security practice. She can be reached at mhirst@debevoise.com.

Author

Gabriel Kohan is a litigation associate at Debevoise and can be reached at gakohan@debevoise.com.

Author

Carl Lasker is an associate in the Litigation Department. He can be reached at calasker@debevoise.com.

Author

Jeremy Liss is an associate in the Litigation Department. He can be reached at jiliss@debevoise.com.

Author

Andreas Constantine Pavlou is a former associate in the Litigation Department.

Author

Joshua Plastrik is an associate in the Litigation Department. He can be reached at jhplastrik@debevoise.com.

Author

Achutha Raman is a law clerk in the Litigation Department. He can be reached at anraman@debevoise.com.

Author

Adam Shankman is an associate in the Litigation Department. He can be reached at adshankm@debevoise.com.

Author

Stephanie D. Thomas is an associate in the Litigation Department and a member of the firm’s Data Strategy & Security Group and the White Collar & Regulatory Defense Group. She can be reached at sdthomas@debevoise.com.

Author

Nathaniel Waldman is an associate in the Litigation Department. He can be reached at ndwaldma@debevoise.com.

Author

Annabella Waszkiewicz is a former law clerk in the Litigation Department.

Author

Stan Gershengoren is a Director, Practice & Business Systems at Debevoise. He can be reached at sgershen@debevoise.com.

Author

William Sadd is the Head of Practice and AI Systems at Debevoise. He can be reached at wjsadd@debevoise.com.

Author

Nicholas T. Ziebell is a manager of Practice & AI Systems at Debevoise. He can be reached at nziebell@debevoise.com.

Author

Patty is a virtual AI specialist in the Debevoise Data Strategy and Security Group. She was created on May 3, 2025, using OpenAI's o3 model.

Author

Sergio is a virtual specialist in the Data Strategy Group at Debevoise. He was created by Avi Gesser on May 3, 2025, using OpenAI's o3 model.