We recently highlighted the need for companies to manage risks associated with the adoption of AI technology, including the malicious use of real-time deepfakes (i.e., AI-generated audio or video that impersonates a real person). In this article, we address three AI-related insider risks that warrant special attention from corporate compliance departments (namely, insider deepfakes, information barrier evasion, and model manipulation) and present possible ways to mitigate them.

I.   Insider Deepfakes

The ability of rogue employees to use AI tools to create highly realistic forged documents, as well as deepfakes, poses new dangers for companies. Many compliance systems are designed to require specific approvals for certain employee actions. For example, actions such as significant payments to new vendors, changes to bank accounts for existing payees, business gifts, and reimbursements for work-related expenses often require certain internal approvals. Just as external threat actors can use modern technology tools to circumvent these controls, the ability to create fake documents, audio, or video will make it much easier for insiders to fabricate evidence of compliance with these measures. And while insiders are more likely to be discovered and held responsible for their actions, they are also more likely to have privileged knowledge of security processes and workflows, providing greater opportunities to undermine company procedures.

Some of the defensive practices that we have recommended for external deepfake threats apply equally to insider threats, such as requiring dual authorization for high-risk transactions. In addition, insider deepfake risks can be addressed in a company’s AI and cybersecurity tabletop exercises, as well as in its Incident Response Plans. Compliance departments may also consider staying abreast of developments in deepfake detection, such as Intel’s recently announced real-time detection software.

But perhaps the most effective way to combat the risks of insider deepfakes is training. Compliance departments can teach employees that AI technology can now create very convincing fake documents, audio, and video, including in real time. Employees can thereby be sensitized to the fact that any authorization of an unusual expense that is provided in a document, audio, or video could be fraudulent, especially if the action being approved has one of the following hallmarks: (a) it involves the transfer of large sums of money or highly sensitive information, (b) it does not follow normal protocols, or (c) it has an element of urgency. Training can specifically note that employees will not face any adverse action for following company verification protocols when presented with such an authorization, even if the authorization seemingly came from a company executive.

II.   Information Barrier Evasion

Corporate compliance often focuses on preventing improper access to sensitive information. Compliance departments thus apply robust controls in their information technology environments, often by erecting strict “walls” that govern who can access which information. These restrictions protect against the impermissible disclosure of sensitive information such as material nonpublic information (“MNPI”) and trade secrets.

With AI, gaps in these walls have become easier to discover. Corporate chatbots, for example, are often given access to internal corporate data, which employees then query by “chatting” with the AI system. The underlying data often includes employee communications, corporate policies and procedures, and swaths of unstructured corporate data. A chatbot given access to sensitive data might divulge it in response to a routine prompt, without the employee even being aware of that possibility.

These capabilities could allow a rogue employee to deliberately extract MNPI or other confidential walled-off information using a chatbot. Such employees might try to cover their tracks by making it appear that they are engaging in a routine interaction with the chatbot, when in fact they are seeking to push the system into revealing information it is meant to keep secret.

One defense is to ensure that existing information walls and permissions are properly applied to the data made available to a company’s AI systems. Implementing zero-trust architecture, including “least-privilege” practices that limit access to employees with a genuine need, can also protect sensitive data from AI exploitation. Effectiveness testing, such as through red-teaming (i.e., engaging a team of experts to probe systems for vulnerabilities), can help assess AI information controls, both before deployment and periodically thereafter. Depending on the sensitivity of data exposed to a particular tool, companies might consider implementing risk-based controls to detect and prevent attempted misuse, including automated prompt monitoring and escalation, as illustrated in the sketch below.
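To make the idea concrete, here is a minimal sketch (in Python) of how existing entitlements might be enforced on documents before they reach a chatbot’s context, with blocked requests escalated for review. The document store, entitlement labels, and escalation hook are hypothetical illustrations of the control concept, not a production implementation.

```python
# Minimal sketch: enforce existing information-barrier entitlements on documents
# before they reach a chatbot's context window. All names below are hypothetical.

from dataclasses import dataclass, field


@dataclass
class Document:
    doc_id: str
    text: str
    entitlement: str  # e.g., "public", "deal-team-alpha", "mnpi-restricted"


@dataclass
class User:
    user_id: str
    entitlements: set = field(default_factory=set)


def escalate(user: User, blocked: list[Document]) -> None:
    # Placeholder: in practice this might write to a SIEM or alert compliance.
    print(f"ALERT: {user.user_id} query matched {len(blocked)} restricted document(s)")


def filter_retrieved_docs(user: User, retrieved: list[Document]) -> list[Document]:
    """Drop any document the requesting employee is not entitled to see,
    so the model never receives walled-off content in its prompt."""
    allowed, blocked = [], []
    for doc in retrieved:
        (allowed if doc.entitlement in user.entitlements else blocked).append(doc)
    if blocked:
        escalate(user, blocked)  # risk-based monitoring: log and alert, not just drop
    return allowed


if __name__ == "__main__":
    analyst = User("jdoe", entitlements={"public"})
    hits = [
        Document("d1", "Quarterly newsletter", "public"),
        Document("d2", "Draft merger terms", "mnpi-restricted"),
    ]
    context = filter_retrieved_docs(analyst, hits)
    print([d.doc_id for d in context])  # -> ['d1']; the MNPI document is withheld and logged
```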

III.   Model Manipulation

Lastly, AI systems are increasingly being relied upon to drive vital business processes, such as sales and investment models. Rogue employees might try to tamper with these systems by, for example, manipulating algorithms to artificially boost their own performance metrics.

Compliance departments may therefore want to consider limiting the ability of any single employee to alter the company’s AI systems. A maker-checker process, for example, requires that a “checker” approve any changes that a “maker” seeks to implement. This dual authorization greatly improves the ability to detect improper changes, as well as inadvertent errors, made to internal systems, as the sketch below illustrates.
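Below is a minimal sketch of what a maker-checker control over AI system changes might look like in code. The change-request structure, roles, and deployment step are hypothetical; the point is simply that the system refuses to deploy a change unless someone other than the maker has approved it.

```python
# Minimal sketch of a maker-checker (dual authorization) control for changes
# to an AI system's configuration. All names below are hypothetical.

from dataclasses import dataclass


@dataclass
class ChangeRequest:
    request_id: str
    maker: str
    description: str
    approved_by: str | None = None


class MakerCheckerError(Exception):
    pass


def approve(request: ChangeRequest, checker: str) -> ChangeRequest:
    """A checker distinct from the maker must sign off before deployment."""
    if checker == request.maker:
        raise MakerCheckerError("Maker cannot approve their own change.")
    request.approved_by = checker
    return request


def deploy(request: ChangeRequest) -> None:
    """Refuse to deploy any change that lacks an independent approval."""
    if request.approved_by is None:
        raise MakerCheckerError("Change has not been independently approved.")
    print(f"Deploying {request.request_id}: {request.description} "
          f"(maker={request.maker}, checker={request.approved_by})")


if __name__ == "__main__":
    change = ChangeRequest("CR-042", maker="trader1",
                           description="Adjust scoring weights in sales model")
    approve(change, checker="risk_officer")  # passes: approver is a different person
    deploy(change)
```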

Additionally, compliance departments might consider auditing the content of their highest-risk AI systems and using version control to detect when changes have been made. With such an audit trail, coupled with detailed access logs, any unauthorized changes can be quickly detected, remediated, and investigated; a simple illustration follows.
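As a simple illustration, the following sketch records a hash of an approved model artifact in a hypothetical audit log and flags any later mismatch as a potentially unauthorized change. File names and the log format are assumptions for illustration only.

```python
# Minimal sketch: detect unauthorized changes to a high-risk model artifact by
# comparing its hash against the last approved version recorded in an audit log.
# The file paths and log format are hypothetical.

import hashlib
import json
from pathlib import Path

AUDIT_LOG = Path("model_audit_log.json")  # hypothetical approved-version registry


def file_hash(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()


def record_approved_version(artifact: Path) -> None:
    """Record the hash of an approved artifact (e.g., after maker-checker sign-off)."""
    log = json.loads(AUDIT_LOG.read_text()) if AUDIT_LOG.exists() else {}
    log[str(artifact)] = file_hash(artifact)
    AUDIT_LOG.write_text(json.dumps(log, indent=2))


def detect_unauthorized_change(artifact: Path) -> bool:
    """Return True if the artifact no longer matches its approved hash."""
    log = json.loads(AUDIT_LOG.read_text()) if AUDIT_LOG.exists() else {}
    approved = log.get(str(artifact))
    if approved is None or approved != file_hash(artifact):
        print(f"ALERT: {artifact} differs from last approved version")
        return True
    return False


if __name__ == "__main__":
    model_file = Path("sales_model_weights.bin")
    model_file.write_bytes(b"approved weights")
    record_approved_version(model_file)
    model_file.write_bytes(b"tampered weights")    # simulate an unauthorized edit
    assert detect_unauthorized_change(model_file)  # flagged for investigation
```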

IV.   Conclusion

The risk of insider threats is too often lost in the discussion of new technologies. AI tools, for all their promise, provide evolving ways for rogue employees to subvert data controls for their own personal benefit. Companies may wish to account for these heightened risks in their AI strategies and incorporate appropriate mitigations as they plan their controls.

V.   Key Takeaways

  • Consider updating training on detecting and responding to suspect situations.
  • Consider staying abreast of developments in deepfake use and detection.
  • Consider using stronger authentication measures, such as biometric authentication and encrypted digital signatures, in order to protect against deepfakes in certain circumstances.
  • Consider applying existing information walls and permissions to new AI systems.
  • Consider implementing zero-trust architecture for certain high-risk systems.
  • Consider implementing risk-based controls to detect and prevent attempted misuse of AI tools (e.g., chatbots with access to MNPI), including automated monitoring and escalation.
  • Consider implementing dual authorization for any changes to AI systems, which can help prevent model manipulation.
  • Consider auditing and version control of information used by high-risk AI systems to better detect and remediate any improper changes to those systems.

***

The cover art used in this blog post was generated by DALL-E.

Author

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Author

Douglas S. Zolkind is a litigation partner based in the New York office and a member of the firm’s White Collar & Regulatory Defense Group. He has extensive trial experience and focuses his practice on white collar criminal defense, government investigations, and internal investigations. He defends corporate and individual clients in criminal and regulatory enforcement matters around the world.

Author

Matthew Kelly is a litigation counsel based in the firm’s New York office and a member of the Data Strategy & Security Group. His practice focuses on advising the firm’s growing number of clients on matters related to AI governance, compliance and risk management, and on data privacy. He can be reached at makelly@debevoise.com.

Author

Sarah Wolf is a litigation counsel in the Debevoise White Collar & Regulatory Defense Group. She can be reached at swolf@debevoise.com.

Author

Scott Woods is an associate in the Litigation Department. He can be reached at sjwoods@debevoise.com.

Author

Karen Joo is an associate in the Litigation Department at Debevoise. She can be reached at hjoo@debevoise.com.