On December 8, 2021, District of Columbia Attorney General Karl A. Racine transmitted the “Stop Discrimination by Algorithms Act of 2021” (the Act) to the Council of the District of Columbia for consideration and enactment. While a variety of federal and D.C. laws already prohibit many forms of discrimination, this bill would apply to a broader range of industries; impose affirmative requirements, including an annual self-audit and reporting requirement; and provide enforcement authority to the Office of the Attorney General for the District of Columbia (AG). The bill also includes a private cause of action, penalties of up to $10,000 per violation, and punitive damages. In his press release, AG Racine had some pointed comments about algorithms and artificial intelligence:

“Not surprisingly, algorithmic decision-making computer programs have been convincingly proven to replicate and, worse, exacerbate racial and other illegal bias in critical services that all residents of the United States require to function in our treasured capitalistic society. That includes obtaining a mortgage, automobile financing, student loans, any application for credit, health care, assessments for admission to educational institutions from elementary school to the highest level of professional education, and other core points of access to opportunities to a better life. This so-called artificial intelligence is the engine of algorithms that are, in fact, far less smart than they are portrayed, and more discriminatory and unfair than big data wants you to know. Our legislation would end the myth of the intrinsic egalitarian nature of AI.”

The Act, if passed, would prohibit covered entities from making an algorithmic eligibility determination or an algorithmic information availability determination on the basis of an individual’s or class of individuals’ actual or perceived race, color, religion, national origin, sex, gender identity or expression, sexual orientation, familial status, source of income, or disability in a manner that segregates, discriminates against, or otherwise makes important life opportunities unavailable to an individual or class of individuals. In addition, any practice that has the effect or consequence of violating the above prohibition would be deemed to be an unlawful discriminatory practice.

The Act also requires that each covered entity:

  • Audit its algorithmic eligibility determination and algorithmic information availability determination practices to determine, among other things, whether such practices are discriminatory (an illustrative audit sketch follows this list);
  • Annually send a report of the above-mentioned audit to the AG’s office;
  • Send an adverse action notice to affected individuals if the adverse action is based in whole or in part on the results of an algorithmic eligibility determination;
  • Develop a notice that details how it uses personal information in algorithmic eligibility determinations and algorithmic information availability determinations;
  • Send the above-mentioned notice to affected individuals before its first algorithmic information availability determination and make the notice continuously and conspicuously available; and
  • Require service providers by written agreement to implement and maintain measures to comply with the Act if the covered entity relies in whole or in part on the service provider to conduct an algorithmic eligibility determination or an algorithmic information availability determination.
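
The Act does not prescribe an audit methodology. Purely by way of illustration, one common starting point for testing whether eligibility determinations are discriminatory is a disparate-impact comparison of outcome rates across groups, such as the “four-fifths rule” heuristic used in U.S. employment-discrimination analysis. The sketch below is a minimal, hypothetical example of such a check in Python; the data format, function names, and 0.8 threshold are our assumptions, not requirements of the Act.

```python
from collections import defaultdict

def approval_rates(decisions):
    """Compute per-group approval rates from (group, approved) pairs."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold` times the
    highest group's rate (the "four-fifths rule" heuristic)."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Illustrative usage with made-up decision data:
sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 55 + [("B", False)] * 45)
print(disparate_impact_flags(sample))  # -> {'B': 0.6875}, group B flagged
```

A real audit would, of course, also need to address intersectional classes, statistical significance, and proxies for protected characteristics; this sketch shows only the basic rate comparison.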

The AG’s office would have enforcement authority under the Act, including the ability to impose civil penalties of up to $10,000 for each violation. For individual claimants, the Act includes a private right of action, under which aggrieved persons may recover up to $10,000 per violation. In addition, either type of action could result in the violating party paying punitive damages and/or attorney’s fees.

The Act’s definitions are significant:

Covered entity captures nearly any individual or entity that either makes algorithmic eligibility determinations or algorithmic information availability determinations, or relies on such determinations supplied by a service provider, and that meets one of the following criteria (a sketch of this two-part test follows the list):

  • Possesses or controls personal information on more than 25,000 D.C. residents;
  • Has greater than $15 million in average annualized gross receipts for the three years preceding the most recent fiscal year;
  • Is a data broker, or other entity, that derives 50% or more of its annual revenue by collecting, assembling, selling, distributing, providing access to, or maintaining personal information, and some proportion of the personal information concerns a D.C. resident who is not a customer or an employee of that entity; or
  • Is a service provider.
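
To make the two-part structure of the definition concrete, the following Python sketch renders it as a boolean test: an entity is covered only if it both makes or relies on the relevant determinations and satisfies at least one of the four criteria above. The field names are hypothetical shorthand for the statutory criteria, not terms from the bill.

```python
from dataclasses import dataclass

@dataclass
class Entity:
    # Hypothetical field names mirroring the statutory criteria;
    # they are illustrative, not drawn from the Act's text.
    makes_or_relies_on_determinations: bool
    dc_residents_with_data: int          # personal info possessed or controlled
    avg_annual_gross_receipts: float     # trailing three-year average, in USD
    personal_data_revenue_share: float   # fraction of revenue from personal data
    holds_noncustomer_dc_data: bool      # data on D.C. residents who are not
                                         # customers or employees
    is_service_provider: bool

def is_covered_entity(e: Entity) -> bool:
    """Covered only if the entity makes or relies on the relevant
    determinations AND meets at least one of the four criteria."""
    meets_a_criterion = (
        e.dc_residents_with_data > 25_000
        or e.avg_annual_gross_receipts > 15_000_000
        or (e.personal_data_revenue_share >= 0.5 and e.holds_noncustomer_dc_data)
        or e.is_service_provider
    )
    return e.makes_or_relies_on_determinations and meets_a_criterion
```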

Important life opportunities means access to, approval for, or offer of:

  • credit,
  • education,
  • employment,
  • housing,
  • a place of public accommodation, or
  • insurance.

Algorithmic eligibility determination means a determination based in whole or in significant part on an algorithmic process that utilizes machine learning, artificial intelligence, or similar techniques to determine an individual’s eligibility for, or opportunity to access, important life opportunities.

Algorithmic information availability determination means a determination based in whole or in significant part on an algorithmic process that utilizes machine learning, artificial intelligence, or similar techniques to determine an individual’s receipt of advertising, marketing, solicitations, or offers for an important life opportunity. For example, a model that scores applicants for a loan would make an algorithmic eligibility determination, while a model that decides which consumers are shown an advertisement for that loan would make an algorithmic information availability determination.

As currently written, the Act provides no compliance grace period upon enactment. Instead, the Act would be effective upon publication in the D.C. Register.

The AG’s proposed Act is another example of regulators seeking to address the potential for discrimination in algorithms. In November 2021, House Financial Services Committee Chairwoman Maxine Waters sent a letter to the leaders of multiple federal regulators, asking them to monitor technological development in the financial services industry to ensure that algorithmic bias does not occur (see our blog post here). Further, CFPB Director Rohit Chopra has remarked in the past that “black-box algorithms relying on personal data can reinforce societal biases, rather than eliminate them.”

We will continue to monitor developments related to the regulation of algorithmic models at both the state and federal levels.