The following scenario is no longer science fiction: An employee receives an email from the CEO asking her to join a video call. The CEO directs the employee to send confidential documents to a third party. The request is unusual, but the employee saw the CEO with her own eyes, so she complies. It turns out, however, that it was a real-time deepfake and not the real CEO who gave the instructions on the video call.

We’ve previously written about business email compromise (“BEC”) and wire transfer fraud scams, and the various measures that companies can implement to reduce the associated risks. But in light of recent developments in deepfake technologies, and their increasing use as part of BECs, companies should consider revisiting their BEC mitigation strategies because some existing BEC policies may no longer be sufficient to address these emerging threats.

Deepfake technology has reached the stage where attackers are now starting to generate convincing fake audio and video in real time. If an employee genuinely (although mistakenly) believes that they are being instructed by the CEO to take a specific action (such as wiring a large sum of money out of the company), they are likely to do it, even if that action is contrary to company policy. Therefore, addressing the risks of deepfake-enhanced BECs is mainly a training issue, not a policy issue, and companies should consider providing specific training on deepfake risks to employees who are responsible for transferring large sums of money or issuing new passwords for confidential accounts.

A BEC scam typically involves a threat actor using a spoofed or compromised email account to impersonate a company executive or a vendor requesting payment, thereby tricking a company employee into sending money to an account that the threat actor controls. The targeted employee often works in the company’s finance or accounting department (or at the IT help desk, where the goal is initial access, such as through a password reset to a new device).

Traditionally, BEC risks are mitigated through policies that require independent verification. For example, employees who receive instructions to make a payment over a certain threshold (e.g., $10,000) to a new bank account may be required by policy to confirm that the instruction is legitimate by calling the person making the request (or a previously identified officer of the requesting company) at a phone number that can be independently verified as authentic (i.e., not the phone number provided in the payment request). Some SEC-registered broker-dealers and investment advisers include such measures as part of their Regulation S-ID policies.
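To make this kind of rule concrete, below is a minimal sketch, in Python, of how a callback-verification check might be encoded in an internal payments workflow. The threshold, the notion of a "known accounts" list, and all names are illustrative assumptions, not features of any particular system.

```python
from dataclasses import dataclass

# Hypothetical policy threshold (the $10,000 example above); real values vary.
CALLBACK_THRESHOLD_USD = 10_000

@dataclass
class PaymentRequest:
    amount_usd: float
    destination_account: str
    requester: str  # person purportedly making the request

def requires_callback_verification(req: PaymentRequest,
                                   known_accounts: set[str]) -> bool:
    """Flag payments that policy says must be confirmed out of band.

    Mirrors the example policy: a payment over the threshold to a *new*
    bank account must be confirmed by calling the requester at a phone
    number verified independently of the request itself.
    """
    is_new_account = req.destination_account not in known_accounts
    return req.amount_usd > CALLBACK_THRESHOLD_USD and is_new_account

# Example: a $3 million wire to an unfamiliar account is flagged for callback.
req = PaymentRequest(3_000_000, "ACCT-UNKNOWN-001", "CFO")
assert requires_callback_verification(req, known_accounts={"ACCT-VENDOR-042"})
```

The point of encoding the rule in the payment system itself, rather than leaving it to memory, is that the callback requirement then triggers regardless of how convincing the requester seems.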

But these policies are sometimes not effective in stopping deepfake-enhanced BECs. Suppose an employee who is responsible for making wire payments receives a text message from the CFO asking the employee to join a video call and providing a link for the call. The person on the call looks and sounds exactly like the CFO, and tells the employee that a fast-moving and highly confidential company transaction is about to be signed. Because of insider-trading risks, the employee is told to keep the transaction strictly confidential. To facilitate the transaction, the employee is instructed to immediately wire $3 million to the account of an investment bank that is allegedly involved in the transaction but is not a usual company vendor. An employee who believes that they are actually talking with the CFO is unlikely, under those circumstances, to follow a company policy that requires calling the CFO at a verified office number to confirm that the request is legitimate before executing it.

To address this kind of deepfake risk, companies should consider implementing the following additional measures:

Training: Employees who transfer large sums of money, send out confidential information, or handle help desk access requests should be made aware that deepfake technology can now create very convincing fake audio and video in real time. Therefore, any request made by audio or video could be fraudulent, especially if it has one of the following hallmarks: (a) it is unusual, (b) it involves the transfer of large sums of money or highly sensitive information, (c) it includes a requirement to keep the request confidential or not to follow normal protocols, (d) it has an element of urgency, or (e) it involves a transfer of funds to a new bank account or of confidential information to an unfamiliar email address. Training should specifically note that employees will not face any adverse action for following company verification protocols when presented with such a request, and that failing to follow verification protocols, even at the request of the CEO, could result in discipline. Training could also cover ways to detect real-time video deepfakes, such as asking the person to turn to the side, because the technology is currently poor at generating realistic profile views.

Enhanced Verification: Any request bearing the hallmarks identified above should be independently verified by one or more of the following means:

  • In-person verification or calling the person making the request at a phone number that can be independently verified as authentic.
  • Requiring the requester to display two pieces of identification, at least one with a photo, during the live video call.
  • Requiring the requester to provide a predetermined code word or response to a challenge question that the company has implemented for authentication of such requests.

Two-Person Approval: A second person should be required to sign off on certain categories of high-risk requests made by voicemail, email, text, audio or video.
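The screening and sign-off steps above can be thought of as a simple checklist. Below is a short sketch, in Python with entirely hypothetical names, of how the hallmark screen and two-person approval might be recorded in a workflow tool; the hallmark determinations themselves remain human judgments that the software merely enforces.

```python
from dataclasses import dataclass, field

@dataclass
class HighRiskRequest:
    channel: str                   # "voicemail", "email", "text", "audio", "video"
    is_unusual: bool
    large_sum_or_sensitive: bool
    demands_secrecy: bool          # confidentiality / bypassing normal protocols
    is_urgent: bool
    new_destination: bool          # new bank account or unfamiliar email address
    independently_verified: bool = False   # callback, ID check, or code word
    approvers: set[str] = field(default_factory=set)

def hallmarks(req: HighRiskRequest) -> list[str]:
    """Return which red-flag hallmarks from the training list are present."""
    flags = {
        "unusual request": req.is_unusual,
        "large sum or highly sensitive information": req.large_sum_or_sensitive,
        "demand for secrecy or bypassing normal protocols": req.demands_secrecy,
        "urgency": req.is_urgent,
        "new bank account or unfamiliar email address": req.new_destination,
    }
    return [name for name, present in flags.items() if present]

def may_execute(req: HighRiskRequest) -> bool:
    """Flagged requests need independent verification AND a second approver."""
    if not hallmarks(req):
        return True  # routine request: handle under normal procedures
    return req.independently_verified and len(req.approvers) >= 2
```

For instance, the $3 million wire from the scenario above would carry at least four hallmarks (urgency, secrecy, a large sum, and a new account), so the workflow would block it until an out-of-band verification succeeded and a second employee signed off.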

If a business believes that it has likely fallen victim to a BEC, it should immediately (1) contact the banks that sent and received the money to alert them to the fraud, and (2) submit the relevant information to https://bec.ic3.gov/ in the U.S. to trigger the FBI’s Financial Fraud Kill Chain (a process that the FBI has successfully used many times to freeze and recover fraudulently transferred funds).

***

The Debevoise Data Portal is an online suite of tools that help our clients quickly assess their federal, state, and international breach notification and substantive cybersecurity obligations. Please contact us at dataportal@debevoise.com for more information.

The cover art used in this blog post was generated by DALL-E.

Author

Charu A. Chandrasekhar is a litigation partner based in the New York office and a member of the firm’s White Collar & Regulatory Defense and Data Strategy & Security Groups. Her practice focuses on securities enforcement and government investigations defense, as well as cybersecurity regulatory counseling and defense.

Author

Luke Dembosky is a Debevoise litigation partner based in the firm’s Washington, D.C. office. He is Co-Chair of the firm’s Data Strategy & Security practice and a member of the White Collar & Regulatory Defense Group. His practice focuses on cybersecurity incident preparation and response, internal investigations, civil litigation and regulatory defense, as well as national security issues. He can be reached at ldembosky@debevoise.com.

Author

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Author

Erez Liebermann is a litigation partner and a member of the Debevoise Data Strategy & Security Group. His practice focuses on advising major businesses on a wide range of complex, high-impact cyber-incident response matters and on data-related regulatory requirements. Erez can be reached at eliebermann@debevoise.com.

Author

Matthew Kelly is a litigation counsel based in the firm’s New York office and a member of the Data Strategy & Security Group. His practice focuses on advising the firm’s growing number of clients on matters related to AI governance, compliance and risk management, and on data privacy. He can be reached at makelly@debevoise.com.

Author

Karen Joo is an associate in the Litigation Department at Debevoise. She can be reached at hjoo@debevoise.com.