As we wrote in previous posts, on August 11, 2022, the Federal Trade Commission (the “FTC”) announced its Advance Notice of Proposed Rulemaking (the “ANPR”) seeking public comment on 95 questions focused on harms stemming from “commercial surveillance and lax data security practices” and whether new trade regulation rules under section 18 of the FTC Act are needed to protect people’s privacy and information.

In Part 1 of this Data Blog series, we provided an overview of the ANPR and the context for the FTC’s rulemaking process. In Part 2 and Part 3, we discussed how the privacy-focused and data security-focused components of the ANPR may offer actionable takeaways for businesses.

In this Part 4, we explore how the FTC ANPR addresses AI, algorithms, and discrimination, as well as steps that businesses should consider to enhance their AI governance and compliance programs.

Key Topics Addressed by the ANPR Related to AI, Algorithms, and Discrimination

As indicated by the ANPR’s questions, the FTC is seeking comment on how new trade regulation rules could address potential harms and discriminatory outcomes related to the use of AI, automated decision-making (“ADM”) systems, and algorithms. Specifically, the FTC is focused on evaluating whether, and the extent to which, new rules should:

  1. Impose data minimization or purpose limitation requirements, considering the potential effects such requirements may have on ADM or other algorithmic learning-based processes or techniques, which often require a large set of personal data for training or operations;
  2. Address potential harms associated with algorithmic error in ADM, including through evaluation and/or certification requirements concerning the accuracy, validity, reliability, or error of businesses’ ADM practices, and whether businesses’ commercial surveillance practices are in accordance with their own published business policies;
  3. Consider how to ensure that firms’ ADM practices (including, for example, their use of natural language processing technologies) better protect non-English speaking communities from fraud and abusive data practices;
  4. Forbid or limit the development, design, and use of ADM systems that generate or otherwise facilitate outcomes that violate Section 5 of the FTC Act;
  5. Limit the use of ADM systems related to how businesses personalize or deliver targeted advertisements;
  6. Bar or otherwise limit the deployment of AI systems that produce discriminatory outcomes, irrespective of the data or processes on which those outcomes are based;
  7. Measure, evaluate, and analyze disparate outcomes, discrimination based on proxies for protected categories, and discrimination when more than one protected category is implicated (e.g., pregnant veterans or Black women);
  8. Focus on harms based on protected classes and/or consider harms to other underserved groups that current law does not recognize as protected from discrimination (e.g., unhoused people or residents of rural communities);
  9. Regulate areas in which Congress has already explicitly legislated (e.g., housing, employment, labor, and consumer finance), or address all sectors;
  10. Rely on the FTC’s unfairness authority under Section 5 of the FTC Act or relate to the antidiscrimination doctrine applied in other sectors (e.g., state regulations related to insurance or employment, as well as alternative data sets and “big data”) or by federal statutes; and
  11. Enumerate specific forms of relief or damages that are not explicit in the FTC Act but that the FTC asserts are within its authority (e.g., algorithmic disgorgement or “model destruction,” a remedy the FTC has required in multiple enforcement actions that involved children’s data, health data, and other sensitive data).

The FTC’s Focus on AI, Algorithms, and Discrimination

Over the past decade, key U.S. regulators, including the FTC, have continued to develop and refine their understanding of, and views regarding, AI and algorithms, as well as the discriminatory practices that might result from misuse of these technologies. The FTC, in particular, has also applied controversial new remedies like algorithmic disgorgement to address consumer harms associated with these technologies. The FTC enforces an array of laws applicable to developers and users of AI, including Section 5 of the FTC Act, the Fair Credit Reporting Act (the “FCRA”), and the Equal Credit Opportunity Act (the “ECOA”).

Over the past few years, the FTC has published a series of blog posts providing guidance on various AI topics, including Using Artificial Intelligence and Algorithms (April 8, 2020) and Aiming for truth, fairness, and equity in your company’s use of AI (April 19, 2021). These posts, together with the FTC’s enforcement actions and studies, underscore that the use of AI tools should foster accountability and be transparent, explainable, fair, and empirically sound.

Mitigation Strategies to Enhance AI Compliance and Reduce Risk

As we have noted throughout this Data Blog series, while the promulgation of new trade regulation rules related to the ANPR is likely several years away (if the rulemaking process proceeds at all), businesses can consider the ANPR as a potential roadmap for risk-mitigation strategies. Although every company is different, below are strategies, many of which the FTC has underscored in its guidance, that a company can consider to improve its AI compliance and governance program (the “Program”) and reduce regulatory and reputational risk.

  1. Scoping: Determine what kinds of models, algorithms and datasets are covered by the Program, and what is not covered (and why).
  2. Inventory: For each model or algorithm covered by the Program, ensure that sufficient information is collected for internal risk rating (a minimal inventory record is sketched after this list).
  3. Guiding Principles: Create a high-level set of guiding principles for the Program, such as accountability, fairness, privacy, reliability, and transparency.
  4. Code of Conduct: Draft an employee-facing code of conduct intended to operationalize the company’s AI Guiding Principles.
  5. Governance Committee: Establish a cross-functional committee that oversees the Program or other means for establishing overall accountability for the Program, as well as individual responsibility for specific high-risk models.
  6. Risk Factors and Assessments: Create a list of risk factors to use to classify covered models, and consider worst-case scenarios for establishing high-risk models.
  7. Risk Mitigation Measures: Establish a list of steps that the Governance Committee can recommend to reduce the risks associated with certain high-risk models, including bias testing (an illustrative disparate impact check is sketched after this list) and additional human oversight, as appropriate.
  8. Privacy Compliance: Make sure that the company is meeting its privacy obligations with respect to data collection, use, and processing, and not misleading consumers as to what it is doing and not doing with their personal data.
  9. Prohibited and Suspect Inputs: Have a list of inputs for models that the company prohibits, or that are considered suspect and require justification or vetting.
  10. Policy Updates: Update critical policies with AI implications, including with respect to privacy, data governance, model risk management, and cybersecurity.
  11. Training: Provide training for individuals involved in the Program on AI legal and reputational risks.
  12. Incident Response: Create a plan for responding to an allegation of bias or other deficiency in the Program, and conduct an AI incident tabletop exercise to test the plan.
  13. Public Statements: Review the company’s public statements relating to AI to ensure their accuracy.
  14. Whistleblower Policies: Revise the company’s whistleblower policies to account for AI-related complaints and investigations.
  15. Transparency: Periodically review disclosures to consumers about decisions being made by company models that may significantly impact them, to ensure accuracy and regulatory compliance, including providing information about data considered in the decision as well as potential appeal and opt-out rights.
  16. Explainability: Periodically review models that impact consumers to ensure that their decisions meet legal requirements for explaining which factors were most influential.
  17. Auditability and Documentation: Ensure that documentation maintained regarding model training and operations meets regulatory expectations.
  18. Vendor Risk Management: Review vendor policies to ensure that AI provided by third parties has been subjected to appropriate diligence and contractual provisions.
  19. Board Oversight: Report to the Board on the Program as appropriate.
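To make the Inventory and Risk Factors steps above more concrete, the following is a minimal Python sketch of an inventory record that supports a simple internal risk rating. It is illustrative only: the field names, risk factors, weights, and tier thresholds are assumptions for demonstration, not FTC requirements or a recommended methodology.

```python
# Illustrative sketch only: field names, risk factors, and tier
# thresholds are assumptions for demonstration, not FTC requirements.
from dataclasses import dataclass


@dataclass
class ModelRecord:
    """One entry in an AI model inventory, capturing enough
    information to support an internal risk rating."""
    name: str
    owner: str               # individual accountable for the model
    purpose: str             # business use case
    training_data: str       # description or lineage of training data
    consumer_facing: bool    # do outputs directly affect consumers?
    uses_personal_data: bool
    fully_automated: bool    # decisions made without human review?

    def risk_score(self) -> int:
        # Simple additive score; a real Program would weight factors
        # based on worst-case-scenario analysis.
        return (2 * self.consumer_facing
                + 2 * self.fully_automated
                + 1 * self.uses_personal_data)

    def risk_tier(self) -> str:
        score = self.risk_score()
        return "high" if score >= 4 else "medium" if score >= 2 else "low"


# Hypothetical example record.
record = ModelRecord(
    name="credit-offer-targeting",
    owner="model-risk@example.com",   # hypothetical contact
    purpose="selects consumers for credit card offers",
    training_data="24 months of application data",
    consumer_facing=True,
    uses_personal_data=True,
    fully_automated=True,
)
print(record.risk_tier())  # "high" -> escalate to the Governance Committee
```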
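The bias testing contemplated under Risk Mitigation Measures can likewise start with simple disparate impact screening. The sketch below applies the four-fifths (80%) rule, a heuristic drawn from EEOC employment-selection guidance; applying it outside that context, and the group labels and data used here, are assumptions for illustration.

```python
# Illustrative sketch only: the four-fifths (80%) threshold is a
# heuristic drawn from EEOC employment-selection guidance; applying
# it outside that context is an assumption made for demonstration.
from collections import defaultdict


def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute the favorable-outcome rate for each group from
    (group_label, received_favorable_outcome) pairs."""
    totals: defaultdict[str, int] = defaultdict(int)
    favorable: defaultdict[str, int] = defaultdict(int)
    for group, selected in outcomes:
        totals[group] += 1
        favorable[group] += selected
    return {group: favorable[group] / totals[group] for group in totals}


def disparate_impact_flags(outcomes, threshold=0.8):
    """Flag groups whose favorable-outcome rate falls below `threshold`
    times the highest group's rate (the four-fifths rule)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: round(r / best, 3) for g, r in rates.items() if r / best < threshold}


# Hypothetical data: group "A" approved 80/100, group "B" approved 50/100.
outcomes = ([("A", True)] * 80 + [("A", False)] * 20
            + [("B", True)] * 50 + [("B", False)] * 50)
print(disparate_impact_flags(outcomes))  # {'B': 0.625} -> warrants review
```

A flagged ratio is a screening signal that warrants further review, for example by the Governance Committee, not a legal conclusion about discrimination.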


To subscribe to the Data Blog, please click here.

The authors would like to thank former Law Clerk Lily Coad for her work on this Debevoise Data Blog.

Author

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Author

Erez Liebermann is a litigation partner and a member of the Debevoise Data Strategy & Security Group. His practice focuses on advising major businesses on a wide range of complex, high-impact cyber-incident response matters and on data-related regulatory requirements. Erez can be reached at eliebermann@debevoise.com.

Author

Paul D. Rubin is a corporate partner based in the Washington, D.C. office and is the Co-Chair of the firm’s Healthcare & Life Sciences Group and the Chair of the FDA Regulatory practice. His practice focuses on FDA/FTC regulatory matters. He can be reached at pdrubin@debevoise.com.

Author

Johanna Skrzypczyk (pronounced “Scrip-zik”) is a counsel in the Data Strategy and Security practice of Debevoise & Plimpton LLP. Her practice focuses on advising clients on AI matters and privacy-oriented work, particularly related to the California Consumer Privacy Act. She can be reached at jnskrzypczyk@debevoise.com.

Author

Anna R. Gressel is an associate and a member of the firm’s Data Strategy & Security Group and its FinTech and Technology practices. Her practice focuses on representing clients in regulatory investigations, supervisory examinations, and civil litigation related to artificial intelligence and other emerging technologies. Ms. Gressel has a deep knowledge of regulations, supervisory expectations, and industry best practices with respect to AI governance and compliance. She regularly advises boards and senior legal executives on governance, risk, and liability issues relating to AI, privacy, and data governance. She can be reached at argressel@debevoise.com.

Author

Michael R. Roberts is a senior associate in Debevoise & Plimpton’s global Data Strategy and Security Group and a member of the firm’s Litigation Department. His practice focuses on privacy, cybersecurity, data protection and emerging technology matters. He can be reached at mrroberts@debevoise.com.

Author

Melissa Muse is an associate in the Litigation Department based in the New York office. She is a member of the firm’s Data Strategy & Security Group, and the Intellectual Property practice. She can be reached at mmuse@debevoise.com.

Author

Melissa Runsten is a corporate associate and a member of the Healthcare & Life Sciences Group. Her practice focuses on FDA/FTC regulatory matters and includes the representation of drug, device, food, cosmetic and other consumer product companies. She can be reached at mrunsten@debevoise.com.