Our top-five European data protection developments from February are:

  • European Commission publishes guidelines on prohibited AI practices: The EU Commission has published non-binding guidance on the EU AI Act’s prohibited use cases.
  • European Parliamentary Research Service report highlights tension between the EU AI Act and GDPR: The EPRS published a report warning of a potential conflict between the EU AI Act and the GDPR on the processing of special categories of data to prevent algorithmic bias.
  • Spanish telecoms provider fined €1.2 million: The Spanish DPA fined Orange Espagne for failing to prevent the issuance of a duplicate SIM to fraudsters, who used the SIM to gain access to customer bank accounts.
  • ICO guidance on management of employment records: The ICO published guidance for employers when managing employee personal data.
  • CJEU clarifies automated decision-making rules and data subject rights: Businesses relying on solely automated decision making must generally be able to provide individuals with a clear and meaningful explanation of how their data was used, so that individuals can understand and challenge any associated decision.

These developments are covered below.

European Commission Publishes Guidelines on Prohibited Artificial Intelligence (“AI”) Practices Under the EU AI Act

What happened: The EU Commission recently published non-binding guidelines on the scope and application of “prohibited AI practices” under the EU AI Act, including social scoring and real-time remote biometric identification, as discussed in our blog post. The ban entered into force on 2 February 2025.

The guidelines provide extensive detail on AI practices that are considered to present an “unacceptable risk” and include a range of practical use cases to assist stakeholders in understanding and complying with the EU AI Act’s requirements. For example, in relation to the prohibition on certain AI-powered social scoring practices, the guidelines clarify inter alia that: (i) the AI system does not need to be the sole cause of the detrimental or unfavourable treatment but must play a sufficiently important role in producing the social score; and (ii) the prohibition applies even if the score is produced by a different organisation from the one that ultimately uses it. Moreover, in relation to the prohibition on AI systems that deploy deceptive techniques, the guidelines confirm that there is no requirement for a human intent to deceive.

What to do: Businesses that use or distribute AI may wish to consider these guidelines when reviewing their compliance with the EU AI Act’s prohibitions. However, although persuasive, the guidelines are not authoritative, and the EU AI Act’s application is likely to be refined through its practical implementation and by the courts in the future.

European Parliamentary Research Service (the “EPRS”) Report Highlights Tension Between EU AI Act and GDPR

What happened: The EPRS published a report noting points of tension between the EU AI Act and the GDPR. The EU AI Act aims to promote human-centric, trustworthy and sustainable AI while respecting individuals’ fundamental rights and freedoms. When certain conditions are met, Article 10(5) of the EU AI Act allows for the “exceptional” processing of special category data (as defined under Art. 9 GDPR) to enable bias detection and corrective action in high-risk AI systems (including, inter alia, AI systems used for hiring, establishing credit scores, and certain law enforcement activities).

In contrast, the GDPR imposes more restrictive requirements on the processing of special category data, requiring explicit legal grounds such as individual consent or acting in the public interest. Therefore, although providers of AI systems may need to process special category data to ensure their systems do not treat those characteristics in a biased manner, the GDPR restricts such processing. The tension between the two regulations may make it more burdensome to protect high-risk AI systems against bias for characteristics that are subject to additional protection under the GDPR (e.g., religion, sexual orientation) than for those that are not (e.g., age, gender).

The report suggests that this legal uncertainty may need to be addressed through GDPR reform or further guidance on the relationship between the GDPR and EU AI Act.

What to do: Businesses may wish to consider the wider regulatory environment (including the GDPR and other data protection legislation) when designing their AI governance policies. In particular, businesses that intend to use AI systems that are likely to process special category data may wish to document their own assessments of the lawfulness of processing.

Spanish Telecoms Provider Fined €1.2 Million for Failure to Prevent SIM-Swapping Fraud Incident

What happened: The Spanish Data Protection Authority (the “AEPD”) fined Orange Espagne (“Orange”) €1.2 million for GDPR violations related to a SIM-swapping fraud incident.

The AEPD was alerted to the incident when a data subject lodged a complaint that an unauthorised duplicate SIM card was issued without their consent, enabling fraudsters to steal €9,000 from their bank account. The SIM-swapping scheme involved a third party impersonating the data subject and requesting a duplicate SIM card from the victim’s mobile phone provider to gain access to the victim’s bank accounts. The AEPD held that Orange had failed to implement appropriate safeguards, resulting in two key GDPR breaches:

  1. Lack of consent for SIM issuance: The issuance of the duplicate SIM involved processing the customer’s personal data without consent and therefore there was no valid lawful basis.
  2. Failure in data protection by design and by default: Orange’s lack of robust identity verification processes compromised customer data security and violated the GDPR’s data protection by design and by default requirements.

The AEPD previously fined Orange for similar GDPR violations arising out of consumer identity theft, highlighting the continued importance of implementing stringent identity verification procedures in the telecommunications industry to prevent unauthorised access to customer data.

What to do: Telecommunication providers and businesses handling sensitive customer data may wish to review their authentication and fraud prevention measures to ensure compliance with GDPR standards and to effectively combat impersonation attempts. In particular, businesses may wish to examine internal security controls and improve staff training on fraud prevention.

The Information Commissioner’s Office (the “ICO”) Publishes Guidance for Employers on the Management of Employment Records under UK Data Protection Laws

What happened: The ICO published guidance for employers on the management of employment records in compliance with the UK GDPR and the Data Protection Act 2018. The guidance sets out the key legal obligations under UK data protection laws and suggests practical steps employers can take to meet them.

The guidance outlines best practice for handling employment records, including: (a) taking all reasonable steps to keep records accurate and up to date; (b) retaining employment records for only as long as necessary and erasing or anonymising records when they are no longer needed; (c) putting appropriate security measures in place to prevent the records being compromised; and (d) informing workers about what data is collected and why through privacy notices and providing workers with access to their records upon request.

The guidance also outlines how to interpret certain requirements against the specific backdrop of an employment relationship, for example: 

  • Given the power dynamics in an employment context, the guidance clarifies that it will often be difficult to show that consent has been freely given, so companies should consider relying on another lawful basis, such as legitimate interests.
  • The guidance outlines which lawful bases are most likely to be relevant in an employment records context, and the conditions likely to be most applicable to employers seeking to process special category data under UK data protection laws. For example, an employer may be able to rely on legal obligation as a lawful basis where workers’ names and addresses are shared with HMRC for tax purposes.

What to do: Businesses may wish to (re)assess their handling of employment records to ensure that their practices and policies align with the ICO’s latest expectations. In particular, businesses may consider reviewing their internal documentation of processing activities to ensure that they are relying on the correct lawful bases, and reflecting any changes in employee privacy policies, as necessary.

The CJEU Rules on Automated Decision-Making Explainability

What happened: The CJEU upheld a decision of the Austrian courts and ruled that where an automated decision has been made using a data subject’s personal information with a view to obtaining a specific result, such as creating a credit score, a data controller must be able to describe: (i) the procedure and principles applied to reach a decision in a way that enables the data subject to understand and/or challenge how their data has been used; and (ii) how, in fact, the information has been used in the automated decision making process.

The ruling concerns a case brought in Austria against a mobile phone operator that denied a contract based on an automated credit assessment. The mobile operator subsequently failed to provide sufficient reasoning or details concerning the decision-making process, citing, inter alia, the need to protect trade secrets. The Austrian court ruled that this violated the GDPR, namely the right to meaningful information regarding the logic involved in automated decision making, prompting a referral to the CJEU for clarification.

The CJEU ruled that the main purpose of data subject access rights under the GDPR is to enable individuals to express opinions or contest a decision based on automated processing. Where an automated decision has been made using a data subject’s personal information in certain circumstances, businesses must be able to provide “meaningful information” in a concise, transparent and intelligible manner, in an easily accessible form which should enable individuals to understand and challenge any associated decision.

Disclosing the algorithm used in making an automated decision is insufficient; explanations should clarify how variations in personal data might alter outcomes. If trade secrets or third-party data are involved in the decision-making process, businesses may be required to submit this information to the relevant competent authority or court, which will ensure a fair balance between transparency and business confidentiality.

What to do: Businesses using automated decision-making systems might wish to review their data subject access procedures, including considering what information they are able to provide to relevant data subjects to explain how their data was used in the decision-making process, in a clear and meaningful way.

***


To subscribe to the Data Blog, please click here.

The cover art used in this blog post was generated by DALL-E.

Author

Robert Maddox is International Counsel and a member of Debevoise & Plimpton LLP’s Data Strategy & Security practice and White Collar & Regulatory Defense Group in London. His work focuses on cybersecurity incident preparation and response, data protection and strategy, internal investigations, compliance reviews, and regulatory defense. In 2021, Robert was named to Global Data Review’s “40 Under 40”. He is described as “a rising star” in cyber law by The Legal 500 US (2022). He can be reached at rmaddox@debevoise.com.

Author

Michiko Wongso is an associate in the firm’s Data Strategy & Security Group. She can be reached at mwongso@debevoise.com.

Author

Samuel Thomson is a trainee associate in the Debevoise London office.

Author

Jeevika Bali is a trainee associate in the Debevoise London office.

Author

Christina Newall is a trainee associate in the Debevoise London office. She can be reached at cnewall@debevoise.com.

Author

Olivia Halderthay is a trainee associate in the Debevoise London office.

Author

Mia Hermann is a trainee associate in the Debevoise London office. She can be reached at mhermann@debevoise.com.

Author

Dominic O'Leary is a trainee associate in the Debevoise London office. He can be reached at doleary@debevoise.com.