On February 23, the Consumer Financial Protection Bureau (CFPB) issued an outline of proposals and alternatives (Outline) under consideration for an automated valuation model (AVM) rulemaking. Although a final rule is not imminent, the Outline serves as further proof that fair lending and its application to algorithmic systems is a top priority for the CFPB, as well as for other regulators at both the federal and state levels.

The Dodd-Frank Wall Street Reform and Consumer Protection Act added Section 1125 to the Financial Institutions Reform, Recovery, and Enforcement Act (FIRREA). Section 1125 requires that AVMs meet quality control standards designed to:

  • ensure a high level of confidence in the estimates produced by automated valuation models;
  • protect against the manipulation of data;
  • seek to avoid conflicts of interest;
  • require random sample testing and reviews; and
  • account for any other such factor that the agencies determine to be appropriate.

Section 1125 also provides that the CFPB, the Federal Reserve, the Office of the Comptroller of the Currency (OCC), the Federal Deposit Insurance Corporation (FDIC), the National Credit Union Administration (NCUA), and the Federal Housing Finance Agency (FHFA) “shall promulgate regulations to implement the quality control standards required” under Section 1125. The CFPB prepared the Outline for use in the Small Business Regulatory Enforcement Fairness Act (SBREFA) Small Business Review Panel process, during which the panel will solicit feedback and assess the impact of the potential rule on small entities.

Among other things, the Outline covers the scope of potential eventual rule requirements. With regard to the first four standards listed above, the CFPB appears set on requiring regulated institutions to maintain policies and procedures to ensure that AVMs used for covered transactions adhere to the specified quality control standards. However, the CFPB is weighing whether to (1) provide regulated institutions flexibility in developing these policies and procedures, or (2) impose prescriptive requirements that would be more detailed and specific.

For the fifth standard, the CFPB is considering specifying “nondiscrimination quality control criteria” as an additional standard. Noting that the use of algorithmic systems, such as AVMs, is subject to the Equal Credit Opportunity Act (ECOA) and the Fair Housing Act (FHA), the CFPB states that it is considering the potential positive and negative consumer and fair lending implications of the use of AVMs. In its discussion of fair lending concerns, the CFPB reiterates several points that have become hallmarks of CFPB Director Rohit Chopra’s views on algorithmic systems. The Outline provides the following:

The “black box” nature of many algorithms, including those used in AVMs, introduces additional fair lending concern. The complex interactions that machine learning algorithms engage in to form a decision can be so opaque that they are not easily audited or understood. This makes it challenging to prevent, identify, and correct discrimination.

The Outline goes on to note that algorithmic systems can “replicate historical patterns of discrimination or introduce new forms of discrimination because of the way a model is designed, implemented, and used.”

A final rule is not on the immediate horizon. The CFPB is requesting feedback on the Outline from small entities by April 8 and from other stakeholders by May 13. From there, the CFPB will still need to issue a notice of proposed rulemaking, which will go through its own comment process, before issuing a final rule. In the Outline, the CFPB notes that it is contemplating a 12-month implementation period once the final rule is issued.

The Outline’s comments about the fair lending concerns arising from the use of algorithmic decision-making are part of a larger regulatory and consumer advocacy effort to address perceived algorithmic bias. In November 2021, House Financial Services Committee Chairwoman Maxine Waters sent a letter to the leaders of multiple federal regulators, asking them to monitor technological development in the financial services industry to ensure that algorithmic bias does not occur (see our blog post here). Then, in December 2021, the D.C. attorney general transmitted the “Stop Discrimination by Algorithms Act of 2021” for consideration and enactment by the Council of the District of Columbia (see our blog post here).

We know that algorithms can be transparent in their decision-making, can be fairer than models built using traditional techniques, and can make credit decisions both more accurate and more inclusive. We will continue to monitor developments related to the regulation of algorithmic models at both the federal and state levels.