Deepfakes have become one of the most widely discussed risks of the generative AI era. A frequent subject of news headlines, they are also attracting increasing regulatory attention. New laws and proposals around the world address deepfake content directly, either through narrow laws aimed at specific types of deepfake content (such as non-consensual sexual images or election fraud) or through overarching AI laws that impose broader labelling or transparency requirements on deepfake content (including in the EU, South Korea, China and India).
Deepfakes are also increasingly being used as a vector for cybersecurity attacks, including through realistic audio or video that impersonates senior executives in real time and pressures employees to approve fraudulent wire transfers or disclose sensitive information. These incidents can present risks that differ from traditional cyberattacks, both in how they unfold and in how organisations should respond. Businesses should therefore understand the hallmarks of deepfake-enabled attacks and ensure that their incident response, verification, and escalation procedures are adapted accordingly.
For both compliance and incident response purposes, however, the threshold question is what qualifies as a “deepfake” in the first place. At first glance, the term may seem straightforward: most people have an instinctive sense of what it is intended to cover. But as a legal and technical concept, it is more difficult to define with precision.
While many AI laws and policy frameworks are converging around broadly similar definitions of an “AI system,” often drawing on the OECD formulation, there is no single universal definition of a deepfake. And while some laws give vague, unhelpful definitions, others regulate the relevant conduct without using the word “deepfake” at all.
So, What Is a Deepfake?
“Deepfake” is a portmanteau of “deep learning” and “fake.” In general terms, it refers to AI-generated or AI-manipulated content that convincingly and deceptively portrays something that did not happen. Across the laws and policy materials that do define or discuss deepfakes, several key hallmarks emerge – deepfakes:
- Are AI-generated or AI-manipulated content. This may include images, video or audio content – or, in some contexts, text.
- Present a false depiction of reality. The content appears to show a real (identifiable) person, event or statement, but does so in a way that is materially false or misleading and is difficult for a viewer to distinguish from authentic (truthful) content.
- Have a deceptive quality. The concern is not merely that AI was used, but that the output may mislead people about what is real.
- Have the potential for harm. The relevant harm may include misinformation, impersonation, reputational damage, fraud, sexual exploitation or interference with democratic processes.
This focus on deception and harm also helps explain why some regimes (such as the EU AI Act and South Korea’s AI Basic Act) include exceptions or lighter-touch treatment for certain artistic, creative, satirical or fictional uses.
Why Do Deepfakes Matter for Businesses?
Many corporate uses of generative AI are unlikely to trigger the most obvious deepfake-specific laws, such as those targeting non-consensual sexual imagery or election-related deception. However, of potentially greater practical significance are the broader AI transparency and disclosure regimes that apply to certain AI-generated or AI-manipulated “deepfake” content. For example:
- EU AI Act: Deployers must clearly disclose that image, audio or video content has been artificially generated or manipulated where that content constitutes a deepfake, subject to limited exceptions.
- South Korea AI Basic Act: Operators must clearly disclose to users that content has been generated by an AI system where that content is difficult to distinguish from reality.
- China’s Provisions on the Administration of Deep Synthesis of Internet Information Services: Deep synthesis service providers must clearly label all synthetic content, ensure the traceability of files, implement moderation systems, and prevent malicious uses of such content.
- India’s proposed AI amendments to the IT Rules: Deepfake providers must implement tools that proactively prevent users from generating or sharing deepfakes that violate any law, and must provide enhanced disclosures warning users that creating such deepfakes can lead to legal liability.
For businesses, the ambiguity around the definition of a deepfake is therefore not academic. It can determine whether labelling, disclosure, consent, recordkeeping, takedown or other compliance obligations apply to their AI-generated content.
Deepfakes also matter for businesses because they are increasingly being used as a vector for cybersecurity attacks (including via sophisticated phishing attempts), which carry different risks and require different responses than traditional incidents. There are several hallmarks that can help businesses identify whether an audio or video real-time communication may be a deepfake: (a) the request is unusual or out of character; (b) it involves the transfer of large sums of money or highly sensitive information; (c) it includes a requirement to keep the request confidential or to bypass normal protocols; (d) it conveys a sense of urgency; or (e) it involves a transfer of funds to a new bank account or the disclosure of confidential information to an unfamiliar email address (see this blog post for more information).
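To make these hallmarks more concrete, the sketch below shows one way they might be translated into a simple escalation rule for incoming voice or video requests. It is illustrative only: the `RequestProfile` fields, the `requires_out_of_band_verification` helper and the two-flag threshold are our own assumptions rather than requirements drawn from any law or framework, and actual verification procedures should be designed with security and legal teams.

```python
# Illustrative sketch only: scoring an incoming voice/video request against
# the red-flag hallmarks (a)-(e) above. Field names and the escalation
# threshold are hypothetical assumptions, not drawn from any law or standard.
from dataclasses import dataclass


@dataclass
class RequestProfile:
    unusual_or_out_of_character: bool        # (a) request is out of character
    large_transfer_or_sensitive_data: bool   # (b) large sums or sensitive info
    secrecy_or_protocol_bypass: bool         # (c) confidentiality or bypass demand
    conveys_urgency: bool                    # (d) pressure to act immediately
    new_account_or_unfamiliar_address: bool  # (e) new bank account or email


def red_flag_count(req: RequestProfile) -> int:
    """Count how many of the deepfake hallmarks the request exhibits."""
    return sum([
        req.unusual_or_out_of_character,
        req.large_transfer_or_sensitive_data,
        req.secrecy_or_protocol_bypass,
        req.conveys_urgency,
        req.new_account_or_unfamiliar_address,
    ])


def requires_out_of_band_verification(req: RequestProfile, threshold: int = 2) -> bool:
    """Escalate for independent verification (e.g., a call-back on a known
    number) once two or more red flags are present; the threshold is
    illustrative and should be tuned to the organisation's risk appetite."""
    return red_flag_count(req) >= threshold


# Example: an urgent, confidential wire request to a new account is escalated.
suspect = RequestProfile(True, True, True, True, False)
assert requires_out_of_band_verification(suspect)
```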
Deepfake risks extend beyond cybersecurity. Insurers are increasingly encountering fraudulent claims in which deepfake content is submitted as purported “evidence” of a valid claim. There are several techniques that insurance companies can use to detect this kind of fraud, including: (a) requiring multiple photos of the damage from different angles and distances, with the inclusion of a reference object to help validate scale; (b) deploying software designed to detect AI-generated images; and (c) conducting real-time remote video inspections of the damage (e.g., using FaceTime), through which an adjuster directs the claimant to pan, zoom and capture specific features (see this blog post for more information).
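As an illustration of how an insurer might operationalise these techniques at claim intake, the sketch below flags submissions for a real-time video inspection when the photo evidence is incomplete or an AI-image detector raises concerns. The `ClaimEvidence` fields, the three-angle minimum and the detector threshold are hypothetical assumptions for illustration, not an actual underwriting standard.

```python
# Illustrative sketch only: a claim-intake check reflecting techniques
# (a)-(c) above. All field names and thresholds are hypothetical.
from dataclasses import dataclass, field


@dataclass
class ClaimEvidence:
    photo_angles: list[str] = field(default_factory=list)  # (a) e.g. ["front", "left", "close-up"]
    includes_reference_object: bool = False                 # (a) known-size object to validate scale
    ai_detector_score: float | None = None                  # (b) AI-image detector output, 0.0-1.0


def needs_live_video_inspection(evidence: ClaimEvidence,
                                min_angles: int = 3,
                                detector_threshold: float = 0.5) -> bool:
    """Route the claim to a real-time remote video inspection (c) when the
    photo evidence is incomplete or the detector flags likely AI generation."""
    too_few_angles = len(set(evidence.photo_angles)) < min_angles
    flagged = (evidence.ai_detector_score is not None
               and evidence.ai_detector_score >= detector_threshold)
    return too_few_angles or not evidence.includes_reference_object or flagged


# Example: two photos and no reference object -> escalate to live video review.
assert needs_live_video_inspection(ClaimEvidence(photo_angles=["front", "rear"]))
```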
Key Considerations
In practice, businesses should build a clear understanding of what is – and is not – a deepfake into their AI governance programmes so they can determine which laws and regulatory obligations may apply.
There are three main questions businesses may want to consider when developing a deepfake compliance strategy:
- Is the content in question deceptive in a way that could cause harm? This is often the key factor that distinguishes a deepfake from benign AI-assisted content. For example, using AI-based editing software to change the colour of a person’s clothing in a corporate photo to match a brand palette is unlikely to create a deepfake. By contrast, using an AI-generated voice that sounds like a celebrity endorsing a product is much more likely to raise deepfake concerns.
- If the content depicts certain individuals, do you have their permission? Under some laws, the depicted individual’s lack of knowledge of, or consent to, the use of their image in AI-generated content is built into the legal test for whether that content is a deepfake. In other cases, consent may be a practical risk mitigant: if the individual understands how their image, likeness or voice will be used and has agreed to that use, they are less likely to have reason to complain to a regulator or issue legal proceedings regarding that content. Consent should be specific, documented and aligned with the intended use of the AI output.
- Are there controls on how the AI-generated content can be used? Businesses should put clear guardrails around the permitted uses of AI-generated or AI-manipulated content. This is particularly important for content involving digital avatars or synthetic voices of employees, executives or other identifiable individuals – for example, in internal training videos or marketing materials. A use case that carries low deepfake risk at the outset – because the individual consented to a specific use of their avatar, or the use was not deceptive – can become higher risk over time if the avatar is reused in new contexts unrelated to the original purpose. Clear guardrails on permitted uses are key to mitigating this mission-creep risk.
* * *
To subscribe to the Data Blog, please click here.
The cover art used in this blog post was generated by Gemini 3 Pro.
The Debevoise STAAR (Suite of Tools for Assessing AI Risk) is a monthly subscription service that provides Debevoise clients with an online suite of tools to help them responsibly fast-track their AI adoption. Please contact us at STAARinfo@debevoise.com for more information.