On December 19, 2023, the Federal Trade Commission (the “FTC”) announced a complaint and a proposed stipulated order against a large drugstore chain (the “Company”) in connection with the Company’s alleged unfair use of facial recognition technology in retail stores to identify persons who had previously engaged in shoplifting or other wrongful activities (the “Action”). The Action provides a roadmap of the FTC’s expectations for companies using facial recognition and other Artificial Intelligence (“AI”) technologies. The FTC also alleged that the Company violated a prior consent order by failing to implement reasonable safeguards and to adequately oversee its service providers. As a result, the FTC required the Company to implement a robust information security program that must be overseen by the Company’s board and senior executives.

Background

The FTC has been signaling for some time that AI and biometrics are priority areas for enforcement.

In 2021, the FTC published a blog post on “Truth, Fairness, and Equity in AI,” which provided succinct guidance to companies wishing to use AI, including to (1) use complete and representative data sets; (2) test algorithms for discriminatory outcomes; and (3) ensure their AI tools do not do more harm than good.

In May 2023, the FTC issued a policy statement warning that the use of biometric information raised consumer privacy concerns as well as concerns about the potential for bias and discrimination. The FTC outlined a list of practices it said it would scrutinize to determine whether companies using biometric information technologies were violating Section 5 of the FTC Act, including: failing to assess foreseeable harms to consumers before collecting biometric information; failing to address known risks; engaging in unexpected uses of biometric information; failing to evaluate the capabilities of third parties; and failing to provide adequate training for employees who interact with biometric information.

FTC’s Action Against the Company

In the complaint, the FTC alleged that from 2012 until 2020, the Company used facial recognition technology to identify consumers who may have been engaged in shoplifting or other criminal behavior. Based on these identifications, Company employees allegedly took actions against patrons, including banning them from entering stores, detaining them, and notifying law enforcement. The FTC alleged that, in numerous cases, the facial recognition technology generated false positives, subjecting individuals to unjustified harassment, and that these errors disproportionately impacted Black, Asian, Latino, and female consumers. The FTC further alleged that the Company failed to take reasonable measures to prevent the harm associated with the false positives because it failed to (i) test or assess the biometric tool’s accuracy before deployment, (ii) enforce image quality standards to ensure that the biometric tool functioned accurately, and (iii) train and oversee employees charged with operating the technology in the Company’s stores.

In connection with the Company’s alleged violations, the FTC and the Company entered into a stipulated order (the “Order”) that prohibits the Company from using any facial recognition system in any retail store, pharmacy, or online platform for five years. The Order also requires the Company to:

  • delete, and ensure that all of its third-party vendors delete, any images or photos collected from the Company’s facial recognition system, as well as any algorithms or products developed using such images and photos;
  • notify consumers whenever their biometric information is enrolled in a database used in connection with biometric surveillance or security;
  • investigate and respond to consumer complaints about actions taken against such consumers by biometric security and surveillance systems;
  • provide clear and conspicuous notice about any facial recognition or other biometric technology it uses;
  • delete consumer biometric information within five years;
  • implement a robust data security program to protect personal information it shares with vendors;
  • obtain a third-party assessment of the efficacy of the aforementioned data security program; and
  • provide the FTC with an annual certification documenting the Company’s adherence to these requirements.

A New Baseline for Algorithmic Fairness

FTC Commissioner Bedoya issued a written statement accompanying the Action, explaining that industry should understand “that this Order is a baseline for what a comprehensive algorithmic fairness program should look like.” Commissioner Bedoya also warned that companies that violate the law when using these systems in the future may be required by the FTC to accept the appointment of an independent assessor to ensure compliance. As Commissioner Bedoya noted, the Order indicates what the FTC views as the key components of such a program, including:

  • notification to consumers of the use of algorithmic tools, and the ability for such consumers to opt out, with limited exceptions;
  • notification when an algorithmic tool takes an action against a consumer, and of how the consumer may contest such actions, with limited exceptions;
  • prompt responses to consumer complaints about the algorithmic tool;
  • testing, including testing for statistically significant bias on the basis of race, ethnicity, gender, sex, age, or disability, whether acting alone or in combination (a sketch of what such testing might involve appears after this list);
  • technical assessments to ensure the algorithmic tool works appropriately and to determine any risks, including how inaccuracies may arise from training data, hardware issues, software issues, and differences between training and deployment environments;
  • ongoing annual testing of the tool “under conditions that materially replicate” those in which the tool is deployed, to ensure that it continues to act as intended, together with training of employees with respect to the use of algorithmic tools; and
  • providing a procedure and mechanism for shutting down the tool or system if the risks identified through assessment and testing cannot be addressed.
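
Neither the Order nor Commissioner Bedoya’s statement prescribes a statistical methodology for the bias-testing item above. As a rough, purely illustrative sketch, the Python below compares false-positive match rates between two demographic groups using a pooled two-proportion z-test; the audit counts, group labels, and 0.05 significance threshold are all hypothetical assumptions for illustration, not requirements drawn from the Action.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(fp_a: int, n_a: int, fp_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in false-positive rates
    between group A and group B (pooled two-proportion z-test)."""
    p_a, p_b = fp_a / n_a, fp_b / n_b
    pooled = (fp_a + fp_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical audit counts: (false positives, match alerts reviewed per group).
audit = {
    "group_a": (45, 1_000),  # 4.5% false-positive rate
    "group_b": (18, 1_000),  # 1.8% false-positive rate
}

(fp_a, n_a), (fp_b, n_b) = audit["group_a"], audit["group_b"]
p_value = two_proportion_z_test(fp_a, n_a, fp_b, n_b)
if p_value < 0.05:  # illustrative threshold, not an FTC-mandated one
    print(f"Statistically significant disparity (p = {p_value:.4f}); escalate for review.")
else:
    print(f"No statistically significant disparity detected (p = {p_value:.4f}).")
```

A real program would extend this comparison to protected characteristics “acting alone or in combination,” correct for multiple comparisons across groups, and document the results for the kind of independent assessment the Order contemplates.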

The Action demonstrates that regulators do not need new powers to bring AI enforcement actions. The FTC pursued the Company under the unfairness prong of Section 5 of the FTC Act. This provides a precedent for other regulators with fairness authority (such as insurance commissioners, state attorneys general, and the Consumer Financial Protection Bureau) to do the same.

Takeaways

  • Identify and Inventory: To benchmark against the FTC’s framework for AI compliance, legal departments (or AI committees) should be made aware when the business is considering adopting AI tools that may cause harm to consumers, such as facial-recognition tools like those used by the Company.
  • Risk Assessments: Companies considering facial-recognition technology or other AI tools that may cause harm to consumers should consider adopting or revising their risk assessment protocols to include the topics identified by the FTC in the Action, such as the consequences for consumers of inaccurate outputs and potential bias.
  • Transparency: These companies should also consider adopting or updating notices for individuals who may be impacted by any decision-making supported by the tool.
  • Data Minimization: As we have written previously, AI poses challenges for data minimization. Companies implementing facial-recognition technology or other AI tools should consider developing a written retention schedule for each type of data used to identify categories of high-risk consumers associated with the program.
  • Testing: Companies should also consider developing testing protocols to assess the accuracy of determinations made by the tool under real-life conditions, and repeating such testing periodically; a minimal sketch of such a harness appears after this list.
  • Training: Companies should also consider developing clear policies for users of these technologies and conducting training for such individuals on a regular basis.
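
On the Testing takeaway, a compliance team might pair its written protocol with a simple evaluation harness that is rerun on a recurring schedule. The sketch below is a minimal illustration, assuming a hypothetical match(image_path) interface that returns the identity the tool matched (or None) and a labeled sample set captured under real in-store conditions; these names and structures are assumptions, not details from the Action.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class LabeledSample:
    """One evaluation frame captured under real deployment conditions."""
    image_path: str
    true_identity: Optional[str]  # None if the person is not enrolled

def evaluate_accuracy(
    match: Callable[[str], Optional[str]],  # hypothetical tool interface
    samples: list[LabeledSample],
) -> dict[str, float]:
    """Return false-positive and false-negative rates on a labeled sample set."""
    tp = tn = fp = fn = 0
    for sample in samples:
        predicted = match(sample.image_path)
        if sample.true_identity is None:
            if predicted is None:
                tn += 1  # correctly ignored a non-enrolled person
            else:
                fp += 1  # flagged someone who is not enrolled
        elif predicted == sample.true_identity:
            tp += 1      # correctly matched an enrolled person
        else:
            fn += 1      # missed or misidentified an enrolled person
    return {
        "false_positive_rate": fp / max(fp + tn, 1),
        "false_negative_rate": fn / max(fn + tp, 1),
    }
```

Comparing these rates against a pre-deployment baseline on whatever cadence the protocol sets would support both the periodic testing described above and the kind of shutdown trigger the Order contemplates when identified risks cannot be addressed.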

***

To subscribe to the Data Blog, please click here.

The Debevoise Artificial Intelligence Regulatory Tracker (DART) is now available for clients to help them quickly assess and comply with their current and anticipated AI-related legal obligations, including municipal, state, federal, and international requirements.

Authors

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Johanna Skrzypczyk (pronounced “Scrip-zik”) is a counsel in the Data Strategy and Security practice of Debevoise & Plimpton LLP. Her practice focuses on advising on AI matters and privacy-oriented work, particularly related to the California Consumer Privacy Act. She can be reached at jnskrzypczyk@debevoise.com.

Michael R. Roberts is a senior associate in Debevoise & Plimpton’s global Data Strategy and Security Group and a member of the firm’s Litigation Department. His practice focuses on privacy, cybersecurity, data protection and emerging technology matters. He can be reached at mrroberts@debevoise.com.

Jarrett Lewis is an associate and a member of the Data Strategy and Security Group. He can be reached at jxlewis@debevoise.com.

Melissa Muse is an associate in the Litigation Department based in the New York office. She is a member of the firm’s Data Strategy & Security Group and the Intellectual Property practice. She can be reached at mmuse@debevoise.com.