Until recently, the primary concern over the use of AI for work was confidentiality. But with the widespread availability of enterprise-grade AI tools with strict cybersecurity controls that do not train on inputs or reference data, many of those concerns have been addressed. Now, the biggest challenge associated with AI adoption is probably quality control, and in particular, a type of AI output known as workslop: content generated by AI for a work-related task that appears authoritative and well-researched but actually contains errors, lacks substance, merely repackages previously stated concepts in different words, or is not fit for purpose.

How Workslop Happens

Junior professional employees are sometimes asked to help solve problems that are beyond their current level of expertise. They often learn new skills, gain confidence, and grow as professionals through such “stretch” assignments. Usually, they are not expected to actually solve the problem on their own (although they may be asked how they would solve the problem if they were). Instead, they are required to gather and analyze the information needed to solve the problem and present their findings, including potential solutions, to a more senior employee who will make the final decision. But some employees are using AI to conduct most or all of that analysis, rather than doing the work themselves. That approach, by itself, may not be a problem, and indeed, with the right controls and formal training programs, it may in some circumstances be desirable.

But it can become a problem if the junior employee lacks the experience to check the model's output and simply passes it on to a more senior employee without disclosing that the work product is AI-generated. Sometimes this occurs when the junior employee has a heavy workload and fast-approaching deadlines, which can make them more inclined to pass along AI-generated content that looks accurate to them without independently checking all of it thoroughly. This can create significant resource allocation and risk control issues, including:

  1. Misallocation of Resources: If the senior employee reviewing the work product does identify the workslop as content that needs to be fixed or revised, they in turn must conduct research from scratch to ensure the integrity of the analysis and double-check facts or figures—tasks that could have been done by the junior employee to create better work product in the first place. Recent research from BetterUp Labs found that around 40% of workers say they have received this poor-quality AI-generated content in the last month and each instance takes around two hours to clean up—draining productivity and trust in AI.
  2. Increased Risk with Less Mitigation: If, however, the senior employee does not identify the workslop, there is a significant risk that they will rely upon work product that looks accurate and even sophisticated on its face, but is actually wrong or incomplete. If the senior employee relies upon the workslop to make an important internal decision or shares it with a client, the firm could face reputational damage, as well as possible harm and legal liability. The danger of workslop is that the reviewing employee may be under the mistaken impression that the junior employee has conducted the research necessary to ensure that the work product being reviewed is accurate, fit for purpose, and not missing anything important, and therefore believes that the work deserves deference when that is not the case.

Eight Ways to Reduce the Risks of Workslop

  1. You Own Your Content (this may not be enough): Most AI policies provide that users are responsible for the quality of any AI-generated content as if they had drafted it themselves. But when the junior employee lacks the expertise or experience to review the AI-generated content, that policy alone is insufficient to address the risk of workslop.
  2. Expert Sign-Off: Some AI policies provide that if an employee does not have the experience or expertise to effectively review the quality, accuracy, completeness, and fitness for purpose of final content created with the assistance of AI, they must find someone else at the firm who can effectively conduct such a review, and that reviewer must be told that the work product includes AI-generated content.
  3. Disclosure of AI-Assisted Content: Some AI policies also require that when a document is shared internally, any AI-generated content must be clearly identified, even if that content has been reviewed and signed off on by an employee. This protocol not only discourages silent pass-throughs of AI content but also prompts the reviewer to approach the content with appropriate skepticism. People tend to make different mistakes than AI does: for example, AI is unlikely to make spelling mistakes, and humans are unlikely to fabricate sources or citations. As AI improves, it makes fewer mistakes, and the mistakes it does make are harder to catch. AI errors or omissions are therefore more likely to be caught by someone with significant experience, but it is helpful for the reviewer to know whether they are reviewing AI-generated content or content generated by a junior employee after substantial research and analysis.
  4. Prohibit or Discourage AI Cutting and Pasting: Another option is to allow employees to use AI for background research but to prohibit or strongly discourage them from cutting and pasting AI-generated text into any document used for work. Instead, employees are required to draft their work product from scratch, using the AI only to generate ideas or to lead them to trusted sources.
  5. Require Source Material: Other firms allow employees to cut and paste from AI into documents used for work, but strongly encourage them to create a separate document listing the sources that the AI relied upon in generating its output, along with confirmation that the employee checked the AI's work against those sources to ensure that it is accurate.
  6. Only Allow AI Use at the End: Another option is to prohibit junior employees from using AI on a project until they have completed it; only then may they use AI to check or enhance their work.
  7. Training: Many firms provide general AI training for employees, but some also provide specific training on the risks of workslop, with concrete examples.
  8. Accountability: To reduce the risk of workslop, senior employees who receive what they believe to be workslop should discuss their concerns with the junior employees who prepared the document and explain why that kind of work product is unacceptable, especially when it has not been identified as AI-generated.

 

To subscribe to the Data Blog, please click here.

The Debevoise STAAR (Suite of Tools for Assessing AI Risk) is a monthly subscription service that provides Debevoise clients with an online suite of tools to help them fast-track their AI adoption. Please contact us at STAARinfo@debevoise.com for more information.

The cover art used in this blog post was generated by ChatGPT-5.

 

Author

Charu A. Chandrasekhar is a litigation partner based in the New York office and a member of the firm’s White Collar & Regulatory Defense and Data Strategy & Security Groups. Her practice focuses on securities enforcement and government investigations defense, as well as artificial intelligence and cybersecurity regulatory counseling and defense. Charu can be reached at cchandra@debevoise.com.

Author

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Author

Patty is a virtual AI specialist in the Debevoise Data Strategy & Security Group. She was created on May 3, 2025, using OpenAI's o3 model.