Over the last week, the Consumer Financial Protection Bureau (“CFPB”) and the Office of the Comptroller of the Currency (“OCC”) approved the Quality Control Standards for Automated Valuation Models (the “Rule”). The Rule will require mortgage originators and secondary market issuers to ensure that algorithms used for real estate valuation, including artificial intelligence (“AI”) systems (collectively, “automated valuation models” or “AVMs”), are subject to five quality control standards designed to ensure accuracy, protect against the manipulation of data, avoid conflicts of interest, require random sample testing and reviews, and comply with applicable nondiscrimination laws. The Rule will go into effect one year after all agencies have provided final approval.[1]

The Rule follows broader federal agency efforts to regulate AVMs in the decade since the passage of the Dodd–Frank Act. Four of the five quality control factors included in the Rule are consistent with Section 1125 of the Financial Institutions Reform, Recovery, and Enforcement Act (“FIRREA”). The final factor, compliance with applicable nondiscrimination laws, creates a new, independent obligation for covered entities to establish policies, practices, procedures, and control systems specifically designed to ensure compliance with such laws.

The Rule’s new factor reflects the Biden–Harris Administration’s focus on addressing discrimination and bias risks in AI policy. President Biden’s Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence directed the Director of the Federal Housing Finance Agency and the Director of the Consumer Financial Protection Bureau to “consider using their authorities . . . to require respective regulated entities” to use appropriate methodologies to evaluate automated appraisal processes “in ways that minimize bias.” In its announcement, the CFPB emphasized that the new Rule is an “example of the CFPB’s work to use existing laws on the books to police potential pitfalls when it comes to artificial intelligence.”

In this Data Blog post, we assess the Rule and offer guidance on how covered entities and financial institutions more generally can take steps toward reducing bias risks related to their use of AI tools.

A. Background

The Rule was promulgated under Section 1125 of FIRREA (as added by Section 1473(q) of the Dodd–Frank Act), which directed agencies to develop quality control standards for AVMs that: “(1) ensure a high level of confidence in the estimate produced by automated valuation models; (2) protect against the manipulation of data; (3) seek to avoid conflicts of interest; (4) require random sample testing and reviews; and (5) account for any other such factor that the agencies . . . determine to be appropriate.”[2]

Prior to the introduction of this Rule, the OCC, the Fed, the FDIC, the NCUA, the CFPB, and the FHFA each provided separate guidance on the use of AVMs.[3]

B. Application

The Rule governs the use of AVMs by mortgage originators and secondary market issuers who “engage in credit decisions or covered securitization determinations”[4] directly or through a third party. The Rule defines an AVM as “any computerized model used by mortgage originators and secondary market issuers to determine the value of a consumer’s principal dwelling collateralizing a mortgage.” Any covered entity utilizing an AVM must adopt and maintain policies, practices, procedures, and control systems to ensure that the AVM adheres to quality control standards designed to:

  • Ensure a high level of confidence in the estimates produced;
  • Protect against the manipulation of data;
  • Seek to avoid conflicts of interest;
  • Require random sample testing and reviews (an illustrative sketch follows this list); and
  • Comply with applicable nondiscrimination laws.[5]
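
By way of illustration only, random sample testing is commonly implemented by back-testing a random sample of AVM outputs against benchmark values, such as subsequent appraisals or arm’s-length sale prices, and computing accuracy metrics. The sketch below shows one minimal version of such a test; the function, field names, and metrics are our hypothetical assumptions, not requirements prescribed by the Rule.

```python
# Illustrative only: back-test a random sample of AVM estimates against
# benchmark values (e.g., later appraisals or sale prices). Hypothetical
# names throughout; the Rule does not prescribe any particular test.
import random
import statistics
from dataclasses import dataclass


@dataclass
class Valuation:
    avm_estimate: float  # value produced by the AVM
    benchmark: float     # appraised value or subsequent sale price


def random_sample_test(valuations: list[Valuation], n: int = 100, seed: int = 0) -> dict:
    """Draw a random sample and compute common AVM accuracy metrics."""
    rng = random.Random(seed)  # fixed seed so the sample can be reproduced for documentation
    sample = rng.sample(valuations, min(n, len(valuations)))
    # Signed percentage error of each estimate relative to its benchmark.
    errors = [(v.avm_estimate - v.benchmark) / v.benchmark for v in sample]
    return {
        "median_error": statistics.median(errors),  # systematic over-/under-valuation
        "median_abs_error": statistics.median(abs(e) for e in errors),
        "share_within_10pct": sum(abs(e) <= 0.10 for e in errors) / len(errors),
    }
```

Documenting the sample drawn, the metrics computed, and any follow-up taken would also support the documentation practices discussed in the compliance takeaways below.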

The Rule also applies to covered entities’ use of third-party AVMs. However, the Rule does not apply to the use of AVMs in the following scenarios:

  • Monitoring the quality or performance of mortgages or mortgage-backed securities;
  • Reviewing the quality of already-completed determinations of the value of collateral; and
  • Developing an appraisal by a certified or licensed appraiser.

A key feature of the Rule’s design is “flexibility,” achieved through broad, non-prescriptive requirements. To accommodate differing compliance needs across institutions of varying sizes, business models, and risk profiles, the promulgating agencies declined to issue additional guidance beyond the general requirements laid out above. Instead, in the Rule Release, they encouraged institutions to “review and consider existing guidance” and to “refine their implementation of the rule” so that their AVM quality control policies can “evolve along with AVM technology.”

C. Compliance Takeaways

In the one-year lead-up to the Rule going into effect, covered entities should ensure that their model and AI governance frameworks are appropriately designed for compliance. To ensure compliance with the new nondiscrimination quality control factor in particular, covered entities using AVMs should consider taking the following steps:

  • Ensure Model and AI Governance Programs are Appropriately Designed to Address Risk. The Rule Release acknowledged that covered entities should implement the Rule using measures that appropriately reflect the risks and complexities of the entity’s business and use of AVM technology. Model and AI governance programs, policies, procedures, and practices should be right-sized in accordance with the associated risk. Covered entities should also periodically assess their governance programs to ensure they adequately address any new risks that may arise from rapidly evolving AVM technology.
  • Prepare for Scrutiny of Quality Control Standards and Document Compliance. Based on this risk assessment, covered entities should ensure their quality control standards are designed for compliance. Existing guidance from the OCC, the Fed, the FDIC, the NCUA, the CFPB, and the FHFA provides suggestions for developing policies, practices, procedures, and control systems designed to ensure the accuracy, reliability, and independence of AVMs and the data involved. Covered entities should review this guidance, consider whether any updates to their quality control standards or testing processes are required, and document any updates and sample testing.
  • Implement Third-Party Risk Management. The Rule requires the regulated entity to ensure that AVMs used in its valuations are subject to appropriate quality control standards, even if the AVM is developed or operated by a vendor. In addition to existing guidance from federal financial regulators on third-party risk management, covered entities might consider whether additional risk management measures are appropriate, including the following:

○ Requiring the vendor to have a written policy on model or AI risk management that applies to the AVM and addresses bias risks; and

○ Requiring the vendor to provide training to relevant employees on detecting and preventing bias in the design and operation of the AVM.

  • Consider Whether to Leverage AVM Vendors for Bias Assessments. Although the Rule places the burden of compliance on covered entities using AVMs, the vendor providing such AVMs may be best positioned to conduct a bias assessment of the tool for potential noncompliance with nondiscrimination laws. Covered entities should determine whether, and to what extent, to rely on the AVM vendor’s assessment, or whether an in-house or independent third-party assessment is appropriate. The Rule does not prescribe a specific method for bias assessment. Rather, the Rule Release acknowledged that “an array of tests and reviews” could support the nondiscrimination requirement (for a simplified illustration of one such test, see the sketch following this list). At a minimum, these assessments should consider:

○ Sample bias, due to the use of non-representative data to train the AVM, which may be introduced during data collection, data selection, or pre-processing;

○ Design bias, due to integration of human biases through assumptions made in the development, implementation, operation, and maintenance of the AVM; and

○ Proxy bias, due to the AVM relying on proxies for protected classes to make valuation decisions or create other outputs.

  • Determine What Other Bias Mitigation Steps Should Be Adopted. In addition to assessing AVMs, covered entities might consider whether additional risk mitigation measures are required. These may include the following:

○ Model retraining and testing;

○ Human oversight (e.g., human review of inputs or outputs);

○ Bias training for employees and vendors who develop, approve, use, monitor, assess, or exercise human oversight over an AVM; and

○ Ongoing monitoring and bias assessments.
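
To make the bias assessment and monitoring points above concrete, below is a simplified sketch of one disparity test among the “array of tests and reviews” the Rule Release contemplates: comparing the AVM’s valuation errors across groups (for example, demographic categories used in fair-lending analysis). The function, fields, and threshold are hypothetical assumptions for illustration, not a prescribed methodology.

```python
# Illustrative only: check whether AVM valuation errors differ across groups.
# "group" is a placeholder (e.g., a census-tract demographic category);
# all names and the threshold below are hypothetical.
import statistics
from collections import defaultdict


def error_disparity(records: list[dict]) -> dict[str, float]:
    """Median percentage error of AVM estimates, broken out by group.

    Each record needs 'avm_estimate', 'benchmark', and 'group' keys.
    """
    by_group: dict[str, list[float]] = defaultdict(list)
    for r in records:
        err = (r["avm_estimate"] - r["benchmark"]) / r["benchmark"]
        by_group[r["group"]].append(err)
    return {group: statistics.median(errs) for group, errs in by_group.items()}


# Hypothetical usage: escalate for human review if the gap between the
# best- and worst-served groups exceeds a chosen tolerance.
# medians = error_disparity(sampled_records)
# if max(medians.values()) - min(medians.values()) > 0.05:  # illustrative threshold
#     flag_for_fair_lending_review(medians)                 # hypothetical helper
```

A persistent gap between groups would not itself establish a violation, but it is the kind of signal that ongoing monitoring, retraining, and human review are designed to surface.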

As the pace of AI adoption increases and federal financial regulators continue to articulate their policy positions on AI, financial institutions should closely monitor these developments and corresponding industry best practices in anticipation of increased regulatory oversight, particularly with respect to discrimination and bias.

The authors would like to thank Debevoise Summer Law Clerk Henry Maguire for his work on this Debevoise Data Blog.

To subscribe to the Data Blog, please click here.

The Debevoise Data Portal is an online suite of tools that help our clients quickly assess their federal, state, and international breach notification and substantive cybersecurity obligations. Please contact us at dataportal@debevoise.com for more information.

The cover art used in this blog post was generated by DALL-E.

[1]      The Rule is awaiting final approval from the Federal Deposit Insurance Corporation (“FDIC”), the Federal Reserve Board (“the Fed”), the National Credit Union Administration (“NCUA”), and the Federal Housing Finance Agency (“FHFA”).

[2]      12 U.S.C. § 3354(a).

[3]      Joint guidance by the OCC, Fed, FDIC, and NCUA: Interagency Appraisal and Evaluation Guidelines, 75 FR 77450, 77468 (Dec. 10, 2010). FHFA’s guidance: Supplement to Advisory Bulletin 2013-07 – Model Risk Management Guidance, FHFA Advisory Bulletin 2022-03 (Dec. 21, 2022); Model Risk Management Guidance, FHFA Advisory Bulletin 2013-07 (Nov. 20, 2013); and Oversight of Third-Party Provider Relationships, FHFA Advisory Bulletin 2018-08 (Sept. 28, 2018). OCC’s guidance: Supervisory Guidance on Model Risk Management, OCC Bulletin 2011-12 (Apr. 4, 2011); Comptroller’s Handbook, Model Risk Management (Aug. 2021); and Third-Party Relationships: Interagency Guidance on Risk Management, OCC Bulletin 2023-17 (June 6, 2023). The Fed’s guidance: Guidance on Model Risk Management, Federal Reserve Board SR Letter 11-7 (Apr. 4, 2011); Interagency Guidance on Third-Party Relationships: Risk Management, Federal Reserve Board SR Letter 23-4 (June 7, 2023); Guidance on Managing Outsourcing Risk, Federal Reserve Board SR Letter 13-19 (Dec. 5, 2013); and Third-Party Risk Management: A Guide for Community Banks, Federal Reserve Board (May 2024). NCUA’s guidance: Evaluating Third Party Relationships, NCUA Supervisory Letter 07-01 (Oct. 2007); and Due Diligence Over Third Party Service Providers, NCUA Letter 01-CU-20 (Nov. 2001). FDIC’s guidance: Adoption of Supervisory Guidance on Model Risk Management, FDIC FIL-22-2017 (June 7, 2017); Interagency Guidance on Third-Party Relationships: Risk Management, FDIC (June 6, 2023); and Third-Party Risk Management: A Guide for Community Banks, FDIC FIL-19-2024 (May 3, 2024). CFPB’s guidance: CFPB, Compliance Bulletin and Policy Guidance 2016-02, Service Providers (Oct. 31, 2016); and CFPB, Examination Procedures – Compliance Management Review (Aug. 2017).

[4]      A covered securitization determination is defined as “a determination regarding: (1) whether to waive an appraisal requirement for a mortgage origination in connection with its potential sale or transfer to a secondary market issuer; or (2) structuring, preparing disclosures for, or marketing initial offerings of mortgage-backed securitizations.”

[5]      Notably, this quality control factor was not included in Section 1125—in the Rule Release, the promulgating agencies clarified that they added this factor pursuant to their authority to “account for any other such factor” determined to be “appropriate,” in light of “increasing concerns” about the potential for AVMs and data used with AVMs “to produce property estimates that reflect discriminatory bias, such as by replicating systemic inaccuracies and historical patterns of discrimination” and to “heighten awareness among lenders of the applicability of nondiscrimination laws to AVMs.”

Authors

Courtney M. Dankworth is a litigation partner who focuses her practice on internal investigations and regulatory defense, including banking enforcement actions and disputes related to financial services and consumer finance.

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Matthew Kelly is a litigation counsel based in the firm’s New York office and a member of the Data Strategy & Security Group. His practice focuses on advising the firm’s growing number of clients on matters related to AI governance, compliance and risk management, and on data privacy. He can be reached at makelly@debevoise.com.

Jehan Patterson is a litigation counsel based in the firm’s Washington, D.C. office and a member of the firm’s White Collar & Regulatory Defense Group. Her practice focuses on advising the firm’s financial institutional clients on matters related to consumer finance law and enforcement.

Michelle Huang is an associate in the Litigation Department.

Karen Joo is an associate in the Litigation Department at Debevoise. She can be reached at hjoo@debevoise.com.