As businesses adopt Generative AI tools, they need to ensure that their governance frameworks address not only AI-specific regulations such as the forthcoming EU AI Act, but also existing regulations, including the EU and UK GDPR.

In this blog post, we outline eight questions businesses may want to ask when developing or adopting new Generative AI tools, or when considering new use cases involving GDPR-covered data. At their core, these questions highlight the importance of integrating privacy-by-design and privacy-by-default principles into Generative AI development and use cases.

If privacy is dealt with as an afterthought, it may be difficult to retrofit controls that are sufficient to mitigate privacy-related risk and ensure compliance. Accordingly, businesses may want to involve privacy representatives in any AI governance committees. In addition, businesses that are developing their own AI tools may want to consider identifying opportunities to involve privacy experts in the early stages of Generative AI development planning.

1. Does GDPR apply?

GDPR applies to personal data that is processed by an EEA- or UK-established business, or by a foreign business in the context of its offering goods or services to, or monitoring the behaviour of, individuals in the EEA or UK. The determination of whether personal data is subject to GDPR is a multifaceted, fact-specific assessment. A foreign business not directly subject to GDPR may nevertheless be required to comply with GDPR restrictions if it has entered into Standard Contractual Clauses or signed up to the EU-U.S. Data Privacy Framework to facilitate the receipt of personal data.

What can you do?

  • Consider whether any personal data you intend to use for Generative AI training, testing or prompting is subject to GDPR (or equivalent protections).
  • If you intend to use personal data subject to GDPR, consider whether you can sidestep GDPR’s requirements by anonymising the data before use (see the illustrative sketch below, noting that mere pseudonymisation does not take data outside GDPR).
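
For illustration, the following minimal Python sketch shows the kind of pre-processing step a development team might apply before data is used; the schema and field names (customer_id, name, email, notes) are hypothetical. Note that techniques such as redaction and salted hashing typically amount only to pseudonymisation, which remains subject to GDPR; genuine anonymisation is a higher bar and usually requires a broader technical and legal assessment.

```python
import hashlib
import re

# Hypothetical schema; real datasets will differ.
DIRECT_IDENTIFIERS = {"name", "email", "phone"}
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def pseudonymise(record: dict, salt: str) -> dict:
    """Drop direct identifiers and replace the record key with a salted hash.

    Caution: this is pseudonymisation, not anonymisation. If the output
    can still be linked back to an individual, GDPR continues to apply.
    """
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    out["customer_id"] = hashlib.sha256(
        (salt + str(record["customer_id"])).encode()
    ).hexdigest()
    if "notes" in out:  # scrub identifiers that leak into free-text fields
        out["notes"] = EMAIL_RE.sub("[REDACTED]", out["notes"])
    return out

record = {
    "customer_id": 42,
    "name": "A. Smith",
    "email": "a.smith@example.com",
    "notes": "Spoke with a.smith@example.com about renewal.",
}
print(pseudonymise(record, salt="per-project-secret"))
```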

2. Do you have a lawful basis to use personal data in your Generative AI tool?

Businesses subject to GDPR must have a lawful basis to process personal data – including when using personal data to train, test, operate, and prompt Generative AI models. Potentially applicable lawful bases include consent, legitimate interest, and contractual necessity, depending on the nature of the personal data and use case.

Importantly, if you have a lawful basis to process personal data for one purpose (e.g., to provide medical insurance coverage under a purchased policy) it does not necessarily mean that you also have a lawful basis to use the same personal data for a different purpose (e.g., to train your Generative AI model to identify insured individuals who are at high risk of developing cancer).

What can you do?

  • Consider and document your lawful basis before using personal data, including by updating your Record of Processing Activities.
  • When building training sets, consider and document the process for identifying and, where necessary, excluding personal data, so that the process is sufficiently robust and personal data is not inadvertently used by development teams.
  • When using personal data for testing, operating, or prompting, it may be appropriate to implement controls such as prohibiting the use of special category personal data entirely, or requiring users to identify, in advance, a lawful basis for each use case or prompt (see the illustrative gating sketch below). For most Generative AI tools, it may be difficult to identify a single lawful basis applicable to all potential use cases.
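
As one way to operationalise the prompt-level controls above, the sketch below gates each prompt on a user-declared lawful basis and rejects prompts flagged as containing special category data. The PromptRequest shape and the keyword list are assumptions for illustration; a production control would pair policy and training with far more robust detection than keyword matching.

```python
from dataclasses import dataclass
from typing import Optional

LAWFUL_BASES = {"consent", "contract", "legitimate_interests",
                "legal_obligation", "vital_interests", "public_task"}

# Crude screen for GDPR Art. 9 special category data; illustrative only.
SPECIAL_CATEGORY_TERMS = {"health", "ethnicity", "religion", "trade union",
                          "sexual orientation", "biometric", "genetic"}

@dataclass
class PromptRequest:  # hypothetical request shape
    text: str
    contains_personal_data: bool
    declared_lawful_basis: Optional[str] = None

def gate_prompt(req: PromptRequest) -> str:
    """Decide whether a prompt may be submitted to the model."""
    if not req.contains_personal_data:
        return "ALLOW"
    lowered = req.text.lower()
    if any(term in lowered for term in SPECIAL_CATEGORY_TERMS):
        return "BLOCK: special category personal data is prohibited"
    if req.declared_lawful_basis not in LAWFUL_BASES:
        return "BLOCK: declare a lawful basis before submitting personal data"
    return "ALLOW"  # log the declared basis against the Record of Processing Activities

print(gate_prompt(PromptRequest(
    "Summarise the complaint history for J. Doe",
    contains_personal_data=True,
    declared_lawful_basis="legitimate_interests",
)))
```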

3. How will you ensure you deal with personal data transparently?

Transparency is a core principle of GDPR, and has been a primary regulatory concern in the context of Generative AI (see, e.g., this Hungarian enforcement decision). Privacy notices are central to meeting transparency obligations. GDPR requires businesses to provide individuals with a privacy notice when collecting data from them, or when using data obtained from a third party, unless certain exemptions apply. Privacy notices must include certain information, including the purpose of, and the lawful basis for, the processing. Individuals’ access rights can also extend to the source of the data and, in certain cases, to the logic involved in the tool’s operation.

What can you do?

  • Consider reviewing customer and employee privacy notices, as applicable, to determine whether they need updating to reflect when and how personal data will be used in connection with Generative AI tools.
  • Especially where the use of Generative AI tools may constitute employee monitoring (discussed below), consider whether it is necessary to flag the use of such tools more prominently as part of workflow processes.

4. How will you deal with individual rights requests for personal data used in training sets?

GDPR gives individuals certain rights over their personal data, including the right to request data to be deleted and the right to request the correction of inaccurate data. Once a model has been trained or fine-tuned using personal data, there may be limits on the ability to have the model “unlearn” that information.

What can you do?

  • Consider the risks associated with individuals exercising data subject rights at an early stage of Generative AI development, and plan responses to such requests in advance.
  • Whether or not personal data is intentionally included, consider maintaining a detailed inventory of the data used in training or operating the tool to assist in responding to individual and regulatory questions related to the exercise of such rights (for example, a manifest along the lines of the sketch below).
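
By way of illustration, such an inventory can be as simple as a structured manifest. The fields below are assumptions; in practice the inventory would be aligned with the business’s Record of Processing Activities.

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class DatasetRecord:
    """One entry in a hypothetical training-data inventory."""
    name: str
    source: str
    collected_on: str             # ISO date
    contains_personal_data: bool
    lawful_basis: str             # empty where no personal data is involved
    used_in_models: list = field(default_factory=list)

inventory = [
    DatasetRecord("support-tickets-2023", "internal CRM export", "2023-06-01",
                  True, "legitimate_interests", ["summariser-v2"]),
    DatasetRecord("public-docs-corpus", "published documentation", "2023-05-15",
                  False, "", ["summariser-v2", "qa-bot-v1"]),
]

# A searchable manifest helps answer "was this person's data used, and where?"
with open("training_data_inventory.json", "w") as f:
    json.dump([asdict(r) for r in inventory], f, indent=2)
```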

5. Will the Generative AI tool be used to make automated decisions?

Generative AI tools can be integrated with new and existing business processes to assist in making decisions. Under GDPR, individuals have the right not to be subject to a decision based solely on automated processing in certain circumstances, including decisions that produce legal effects concerning the individual or similarly significantly affect the individual (e.g., certain e-recruiting practices conducted without any human intervention). There are similar limits on certain kinds of AI decisions under U.S. law.

What can you do?

  • Consider developing processes as part of use-case evaluations to: (i) identify when automated decision-making issues, including the risk of discrimination against individuals, may arise; and (ii) determine what, if any, mitigating measures are needed to manage associated risks.
  • When Generative AI is used in automated decision-making processes, consider the obligations arising under GDPR, including the potential need for human involvement in the process or for meaningful information about the logic involved, and document any necessary compliance controls (see the illustrative sketch below).
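
As a sketch of one possible control, the code below routes any decision flagged as having legal or similarly significant effects to a human reviewer before it takes effect, and retains a plain-language rationale so that “meaningful information about the logic involved” can be provided on request. The structure, flag, and review mechanism are all assumptions.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelDecision:               # hypothetical structure
    subject_id: str
    outcome: str                   # e.g. "reject_application"
    rationale: str                 # plain-language summary of the logic
    legal_or_similar_effect: bool  # flagged during use-case evaluation

audit_log = []  # retained to support transparency and access requests

def finalise(decision: ModelDecision,
             human_review: Callable[[ModelDecision], ModelDecision]) -> str:
    """Apply a human-in-the-loop gate before Art. 22-relevant decisions take effect."""
    if decision.legal_or_similar_effect:
        # Meaningful human involvement: the reviewer can change the outcome.
        decision = human_review(decision)
    audit_log.append((decision.subject_id, decision.outcome, decision.rationale))
    return decision.outcome

outcome = finalise(
    ModelDecision("cand-17", "reject_application",
                  "Screening score below threshold", legal_or_similar_effect=True),
    human_review=lambda d: d,  # placeholder: a real reviewer could amend the outcome
)
print(outcome, audit_log)
```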

6. Will the Generative AI tool be used to create new personal data?

Generative AI tools can produce new data that itself constitutes personal data – for example, where a Generative AI tool is used to summarise an individual’s CV or to infer their race from other data. Creating personal data may give rise to privacy concerns if the generated data is not dealt with in accordance with GDPR – and this holds even where the generated data is inaccurate, since inaccurate information about an identifiable individual is still personal data.

What can you do?

  • Consider the circumstances in which a Generative AI tool may be used to generate personal data, the lawful basis for generating such data, and whether Generative AI processes or existing policies regarding the handling of personal data need to be revised in response to this possibility.
  • If generated personal data is further processed, also consider the lawful basis for the subsequent processing of that “net new” personal data, and ensure it is accounted for when dealing with individual rights requests.

7. Does using the Generative AI tool constitute employee monitoring?

Certain proprietary Generative AI tools that are designed to integrate with email, conference call solutions, or other software may constitute employee monitoring if they involve the direct or indirect monitoring or tracking of employee behavior. Employee monitoring is a key focus for European privacy and labour regulators, and raises various privacy issues.

What can you do?

  • Implement processes for Generative AI tool and use-case evaluation that assist in identifying when employee privacy considerations under GDPR and national data protection laws may arise.
  • Consider whether, in addition to data protection laws, further employment law considerations may apply (e.g., German laws requiring Works Council approval for contemplated employee monitoring).
  • Where a Generative AI tool will involve monitoring employee behaviour, consider appropriate controls to limit when and how employee data is collected and used, and whether it is necessary to implement transparency disclosures about the use of Generative AI in business processes.

8. Do you need to carry out a DPIA?

GDPR requires businesses to conduct a Data Protection Impact Assessment (DPIA) for processing that is likely to result in a high risk to the rights and freedoms of individuals. Whether this threshold is met will need to be determined on a case-by-case basis. In the ICO’s view: “In the vast majority of cases, the use of AI will involve a type of processing likely to result in a high risk to individuals’ rights and freedoms, and will therefore trigger the legal requirement for you to undertake a DPIA.” Businesses should also be mindful that many EU Supervisory Authorities, including Ireland’s, have published lists of circumstances in which DPIAs must be performed, many of which would capture significant Generative AI applications.

What can you do?

  • Consider whether it is necessary to conduct a DPIA before using personal data to train, test, operate, or prompt Generative AI tools; a rough pre-screen along the lines of the sketch after this list can help surface use cases for counsel review. See the CNIL’s and the ICO’s guidance on DPIAs for AI tools for more information on relevant considerations.
  • Consider how to integrate the process for preparing DPIAs with broader Generative AI governance arrangements to ensure risk assessments and mitigations are consistent and comprehensive.
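
A rough pre-screen can help surface candidate use cases; the criteria below loosely track the WP29/EDPB DPIA guidelines and supervisory authority lists, and the output is only a prompt to escalate to privacy counsel, not a legal determination.

```python
# Illustrative criteria loosely based on the WP29/EDPB DPIA guidelines;
# whether a DPIA is legally required remains a case-by-case judgement.
DPIA_TRIGGERS = [
    "processes special category data at scale",
    "involves systematic monitoring of individuals",
    "supports automated decisions with legal or similar effects",
    "uses innovative technology (most Generative AI will qualify)",
]

def dpia_prescreen(answers: dict) -> list:
    """Return the DPIA triggers a proposed Generative AI use case hits."""
    return [trigger for trigger, hit in answers.items() if hit]

hits = dpia_prescreen({
    DPIA_TRIGGERS[0]: False,
    DPIA_TRIGGERS[1]: True,   # e.g. a meeting-transcription tool
    DPIA_TRIGGERS[2]: False,
    DPIA_TRIGGERS[3]: True,
})
if hits:
    print("Escalate for a DPIA:", "; ".join(hits))
```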

The cover art used in this blog post was generated by DALL-E.

Author

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Author

Robert Maddox is International Counsel and a member of Debevoise & Plimpton LLP’s Data Strategy & Security practice and White Collar & Regulatory Defense Group in London. His work focuses on cybersecurity incident preparation and response, data protection and strategy, internal investigations, compliance reviews, and regulatory defense. In 2021, Robert was named to Global Data Review’s “40 Under 40”. He is described as “a rising star” in cyber law by The Legal 500 US (2022). He can be reached at rmaddox@debevoise.com.

Author

Dr. Friedrich Popp is an international counsel in the Frankfurt office and a member of the firm’s Litigation Department. His practice focuses on arbitration, litigation, internal investigations, corporate law, data protection and anti-money laundering. In addition, he is experienced in Mergers & Acquisitions, private equity, banking and capital markets and has published various articles on banking law.

Author

Martha Hirst is an associate in Debevoise's Litigation Department based in the London office. She is a member of the firm’s White Collar & Regulatory Defense Group, and the Data Strategy & Security practice. She can be reached at mhirst@debevoise.com.