We’ve been doing a lot of work recently on agentic AI workflows. In this post, we share some of our thinking on how to assess their risks and benefits. At least for now, while AI can be very helpful in automating certain discrete aspects of professional services, it cannot replace entire jobs because, for the successful completion of most complex workflows, the AI agents still need us more than we need them.
- Top 10 Reasons NOT to Use AI (2/19/2025)
While the media is constantly urging us to use more AI, we offer our Top 10 observations for when businesses should decide not to use AI, including when the acceptable error rate is essentially zero and when learning the subject matter is as important as the content being created. Throughout this post, we supplement our observations with the lessons we’ve learned through years of counseling clients on AI adoption.
While allowing employees to use generative AI (“GenAI”) comes with significant risks, one of the biggest AI risks in 2025 is not letting employees use GenAI tools at all. In an era when powerful GenAI tools are available on personal devices, outright bans may drive usage underground, mirroring how firms once tried to ban mobile messaging only to push it into unmonitored channels. In this Debevoise Client Update, we discuss lessons for AI adoption drawn from our experience with off-channel text communications, using AI tools for recording, transcribing, and summarizing meetings (“AI meeting tools”) as an illustrative example of the risks of being too conservative with GenAI adoption.
In this post, we provide a broad overview of the tools and features available in ChatGPT Enterprise and how we have found them useful for our work. We highlight capabilities such as the collaborative drafting tool (Canvas), the Deep Research mode that delivers multi-step memos with citations, projects and chat sharing for team workflows, and memory and custom GPTs for incorporating specific preferences.
Until recently, the primary concern over the use of AI for work was confidentiality. But with the widespread availability of enterprise-grade AI tools that have strict cybersecurity controls and do not train on inputs or reference data, many of those concerns have been addressed. Now, the biggest challenge associated with AI adoption is probably quality control, and in particular a type of AI output known as workslop: AI-generated content for a work-related task that appears authoritative and well researched but actually contains errors, lacks substance, merely repackages already stated concepts in different words, or is not fit for purpose. In this post, we provide eight concrete steps to reduce the risk of workslop, including assuming ownership of AI-assisted content and requiring expert sign-off and disclosure of AI’s role.
Most companies have implemented protocols for when an employee emails confidential information to the wrong person. A new version of that problem occurs when an employee uploads sensitive information to a consumer (i.e., non-enterprise) AI tool, which gives rise to important questions, such as whether the data can be clawed back or deleted, whether humans at the AI provider can view it, and whether any contractual or regulatory notification obligations have been triggered. In this post, we address these questions and offer best practices for how to prepare for such events and how to respond when they happen.
Many businesses use customer-tracking technology and other tools—such as pixels, session replay, software development kits (“SDKs”), and chatbots—to improve website user experiences, understand customer behavior, train their technology, and gauge effectiveness of advertisements. Increasingly, however, these technologies present litigation risks under the California Invasion of Privacy Act (“CIPA”). In this blog post, we provide an overview of the technologies that plaintiffs most commonly target for CIPA lawsuits and measures that companies can take to mitigate their CIPA litigation risk.
No one really knows how the large language models (“LLMs”) that power GenAI tools like ChatGPT actually arrive at their answers to our queries. This is referred to as the “black box” or “explainability” problem, and it is often given as a reason why GenAI should not be used to make certain kinds of decisions, such as who should get a job interview, a mortgage, a loan, insurance, or admission to a college. In this blog post, we identify several categories of GenAI decision explainability and attempt to give each a specific name (e.g., model explainability vs. process explainability vs. data explainability). We also explore which kinds of explainability are knowable for common GenAI decisions and which are not, and compare that to human decision-making. We then argue that, for most GenAI decisions, a level of explainability on par with what is expected of human decision-making is currently achievable.
GenAI innovations are rapidly transforming the formulation, analysis, and delivery of investment advice. Many broker-dealers and investment advisers are embracing GenAI to support one or more parts of the investment lifecycle. One new focus is agentic AI: the use of AI to complete more than one task, either in series or in parallel, without any human involvement. In this post, we explore the use of agentic AI in the investment selection process and how it could be one of the most transformative yet challenging applications of GenAI in financial services. The post describes the applicable regulatory framework and analyzes the risks of overreliance on AI in retail investment recommendations and advice without meaningful human review or transparency. We also offer strategies for mitigating those risks, including updating policies and procedures to address the use of GenAI in providing investment recommendations or advice, instituting human review checkpoints, and ensuring accurate and complete disclosures about the role of AI in the investment process.
This year, we have observed a significant increase in AI adoption among our clients. In this post, we provide five reasons why some businesses are accelerating their use of AI and eight factors that impact whether those efforts succeed. While caution and delay in adopting AI was a prudent strategy for many companies in 2023–24, being too careful in 2025–26 creates a significant risk of being left behind, which is why many companies that have been on the sidelines have recently decided it’s time to get into the AI game.
- AI Discrimination Risk in Lending: Lessons from the Massachusetts AG’s Recent $2.5 Million Settlement (7/20/2025)
Using AI to make important decisions about individuals carries a risk of bias, especially for underwriting, credit, employment, and educational admission decisions. In this post, we discuss how a July 2025 settlement by the Massachusetts Attorney General’s Office highlights the risks that can arise in AI-powered lending decisions and ways to reduce those risks.
- What Exactly Is an “AI System” or an “AI Solution”? Debevoise’s Practical Definitions for Common AI Terms (11/23/2025)
In this post, we share practical definitions of common AI terms that we use when we assist our clients with AI adoption to ensure that employees understand the terms that we use. The definitions are meant to be a starting point for a dialogue among AI practitioners on workable terms, and we are continually refining them. Successful AI adoption is extremely challenging, but having a simple, shared vocabulary for what we are all trying to do should not be.
As we look to 2026, we predict that NDAs and other contractual use limitations will become a significant obstacle to enterprise AI adoption. In this blog post, we outline key considerations for assessing and addressing contractual restrictions on high-quality internal non-public data, an essential step toward unlocking the full value of AI.
On August 2, 2025, the second wave of requirements under the EU AI Act (the “Act”) entered into force. This latest set of requirements primarily covers General Purpose AI (“GPAI”) model providers and also establishes operational requirements for EU and Member State oversight and enforcement bodies. These new obligations mark a significant milestone in the EU’s evolving interpretation of and approach to the Act. In this post, we highlight five key things businesses should know, including the requirements for high-risk AI systems, which come into force on August 2, 2026.
***
Summarized by Lily Schoen, Ella Han, and Caroline Moore. We used ChatGPT to help generate first drafts of the summaries. The cover art for this blog was generated by Gemini 3 Nano Banana Pro.
To subscribe to the Debevoise Data Blog, please click here.
The Debevoise STAAR (Suite of Tools for Assessing AI Risk) is a monthly subscription service that provides Debevoise clients with an online suite of tools to help them with their AI adoption. Please contact us at STAARinfo@debevoise.com for more information.