The rapid pace of AI adoption by law firms is resulting in machines doing the work of lawyers, including conducting research, reviewing and summarizing documents, and drafting contracts. Many assume this means that law firms will need fewer lawyers, which may turn out to be true. AI’s potential to revolutionize the practice of radiology has triggered similar scrutiny and concern about future job prospects in that specialty. But a recent article in the Works in Progress Newsletter, Why AI Isn’t Replacing Radiologists, provides instructive lessons from radiology that can be applied to the legal profession. As with previous technological innovations in law (e.g., Lexis and Westlaw, TAR and e-discovery), it may be that the amount of legal work that AI absorbs is more than offset by the new legal work it creates, which can only be done by human lawyers.
Why AI Isn’t Replacing Radiologists
The article starts with a 2016 prediction that, because AI was already good at reading medical scans, studying to become a radiologist was a bad bet. At the time, radiology looked like the perfect candidate for AI replacement: digital inputs, repeatable tasks, and a robust set of training data and benchmarks. But surprisingly, the prediction turned out to be wrong. In 2025, even though AI is better at reading many scans than most radiologists and the supply of new radiologists is at its historical peak, demand for radiologists is at an all-time high. The article offers several reasons for this:
- Testing environments for machines are not the real world. Models may beat doctors on standardized tests, but their performance drops in practice. Scans can differ from machine to machine and from hospital to hospital, so AI models that perform well in test settings may not be as useful when applied in actual radiology practice.
- Human expertise matters for the hard cases. AI may be very good at reading routine scans (which account for most of its training data), but it is less accurate than trained human radiologists at interpreting edge cases or analyzing difficult and rare imaging findings. Indeed, a human radiologist’s understanding of a patient’s particular risks and demographic factors, coupled with years of diagnostic experience, often yields better interpretations of ambiguous images than AI produces.
- Legal and contractual requirements for human review. Regulations, hospital rules, medical ethics, liability management, and malpractice coverage all require a meaningful human review of the AI’s assessment.
- Reading scans is not the job. Only about one-third of a radiologist’s day is spent interpreting images. The rest is spent conferring with other clinicians, mentoring and teaching, and explaining results and their implications for patient care, functions that machines will not be able to perform effectively anytime soon, if at all.
- AI = more scans = more radiologists. AI adoption makes scans faster, which gives doctors more options. Scans that used to be exceptional because of long turnaround times are now routine, which means there are more scans for expert radiologists to review.
Why AI Might Not Replace Lawyers
There are some similarities between the relationships that radiologists and lawyers have with AI, which suggests that total demand for human legal work may actually increase over the next few years. For example:
- Lawyers don’t just find the law. Like reading a scan for a radiologist, finding the law, the document, or the right contract term is only part of a lawyer’s job. We spend most of our time eliciting the right information from clients, deciding which facts are most relevant, aligning incentives, negotiating, drafting for stress testing and risk tolerance, prioritizing risks, and translating them for decision‑makers in a commercial context. Clients generally don’t seek outside counsel to learn black-letter law in well-settled areas. Rather, they seek advice on what the law is likely to be in areas of uncertainty or transition, what the enforcement or litigation risks of certain actions could be, and the best steps to reduce those risks. AI will certainly help determine what the law is right now, but it probably won’t be very effective at helping an in-house lawyer feel prepared for an important meeting, navigate the concerns of the business, or correctly anticipate a regulator’s shifting priorities.
- There is a lot of unmet demand for high-end legal work. When most legal research and drafting can be done cheaply using AI, more clients and potential clients will have access to high-end legal services, creating more demand for expert human lawyers to (1) make sure the AI is right and not missing anything important, (2) explain the AI’s analysis to clients, and (3) affirm the AI’s advice as an expert with the experience and judgment the client trusts.
- AI will create many new legal issues. We are already seeing that AI adoption is creating dozens of complex new legal issues. For example, we’ve spent hundreds of hours advising clients on the use of AI transcription tools for meetings, including on issues like consents, disclaimers, privacy, confidentiality, privilege waiver, recordkeeping obligations, legal holds, and discovery. All of these are issues that AI is bad at analyzing because they are too new: there is not yet enough highly relevant material for it to learn from. Moreover, AI is not good at anticipating the complexity of real-world implementation across a variety of different client contexts. Applying existing laws to uses of AI that were not contemplated when those laws were made requires the experience, judgment, and wisdom of human lawyers.
- The need for risk shifting. For important decisions, especially in new gray areas of the law, boards, executives, regulators, and judges expect that an experienced lawyer in that particular area of law signed off on the decision and is accepting some of the risk if it turns out to be very wrong. At least for the foreseeable future, AI will not be able to fulfill this important role.
At least for now, AI adoption in radiology shows that when certain tasks get cheaper and faster, usage expands, experience- and judgment-heavy work increases, and the need for human experts grows. Law may be on the same trajectory. In a world flooded with near-free, high-quality legal advice, clients may actually need more lawyers: to ask the right questions, to weigh new risks, to sign off on final decisions, and to determine when the models are wrong and explain why. And much as AI has raised concerns about de-skilling in radiology, law firms face a parallel challenge: in a world where much of the routine legal work is increasingly completed by AI, how do we give our junior associates the training and experience they need to grow into the role of trusted senior advisor?