Companies increasingly leverage artificial intelligence (“AI”) to facilitate credit, lending, and hiring decisions.  AI tools have the potential to make these processes more efficient, but they have also recently faced scrutiny for AI-related environmental, social, and governance (“ESG”) risks.  Such risks include ethical issues related to the use of facial recognition technology and embedded biases in AI software that may perpetuate racial inequality or have a discriminatory impact on minority communities.  ESG and diversity, equity, and inclusion (“DEI”) advocates, along with federal and state regulators, have begun to examine the potential benefits and harms of AI tools for such communities.

As federal and state authorities take stock of the use of AI, the benefits of “responsibly audited AI” have become a focal point and should be on companies’ radars.  This post defines “responsibly audited AI” as automated decision-making platforms or algorithms that companies have vetted for ESG-related risks, including but not limited to discriminatory impacts or embedded biases that might adversely affect marginalized and underrepresented communities.  By investing in responsibly audited AI, companies will be better positioned to comply with current and future laws or regulations geared toward avoiding discriminatory or biased outputs from AI decision-making tools.  Companies will also be better poised to achieve their DEI goals.

Federal regulatory and legislative policy and AI decision-making tools

There are several regulatory, policy, and legislative developments focused on the deployment of responsibly audited AI and other automated systems.  For example, in its recently announced Blueprint for an AI Bill of Rights, the Biden-Harris Administration highlighted key principles companies should consider in the design, development, and deployment of AI and automated systems to address AI-related biases that can impinge on the rights of the general public.

At the federal regulatory agency level, the Consumer Financial Protection Bureau (“CFPB”) published guidance in May 2022 for financial institutions that use AI tools.  The CFPB guidance addresses the applicability of the Equal Credit Opportunity Act (“ECOA”) to algorithmic credit decisions and clarifies that creditors’ adverse action notice obligations under the ECOA extend equally to adverse credit decisions made using “complex algorithms.”
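To make the CFPB’s point concrete: the complexity of a model does not excuse a creditor from identifying the specific, accurate principal reasons behind an adverse decision.  The sketch below is a deliberately simplified illustration of that idea, assuming a plain linear scoring model with hypothetical feature names, weights, and threshold; real credit models and adverse action reason methodologies are far more involved.

```python
# Hypothetical linear credit-scoring model: score = sum(weight * value).
# Feature names, weights, and the threshold are illustrative only; real
# adverse action reason methodologies are model- and lender-specific.
WEIGHTS = {
    "payment_history": 0.40,      # fraction of on-time payments (higher is better)
    "credit_utilization": -0.35,  # higher utilization lowers the score
    "account_age_years": 0.02,
    "recent_inquiries": -0.05,
}
APPROVAL_THRESHOLD = 0.30

def score_and_reasons(applicant, top_n=2):
    """Score an applicant and, if the score falls below the threshold,
    list the features that contributed most negatively -- candidate
    "principal reasons" for an ECOA adverse action notice."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    if score >= APPROVAL_THRESHOLD:
        return score, []
    # Sort features from most negative contribution to least negative.
    worst = sorted(contributions, key=contributions.get)[:top_n]
    return score, worst

applicant = {
    "payment_history": 0.70,
    "credit_utilization": 0.90,
    "account_age_years": 2,
    "recent_inquiries": 4,
}
score, reasons = score_and_reasons(applicant)
print(f"score={score:.2f}; principal reasons if declined: {reasons}")
```

The design point is that the reason-identification step must still produce specific, accurate reasons no matter how complex the scoring step becomes; for non-linear models, creditors would need more sophisticated attribution techniques to generate the same kind of output.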

In March 2021, the Federal Deposit Insurance Corporation, along with the Board of Governors of the Federal Reserve System, the Office of the Comptroller of the Currency, the CFPB, and the National Credit Union Administration, issued a Request for Information (“RFI”) soliciting comments on financial institutions’:

  • use of AI in their provision of services to customers and for other business or operational purposes;
  • governance, risk management, and controls over AI; and
  • any challenges in developing, adopting, and managing AI.

The Equal Employment Opportunity Commission (“EEOC”) launched its AI and Algorithmic Fairness Initiative (the “Initiative”) in 2021 to study the fairness of AI tools and to guide “applicants, employees, employers, and technology vendors in ensuring that [AI] technologies are used fairly, consistent with federal equal employment opportunity laws.”  In May 2022, the Initiative issued a technical assistance document highlighting the ways in which AI tools used in the job application process could violate Title I of the Americans with Disabilities Act.

Additionally, in February 2022, U.S. Senators Ron Wyden and Cory Booker and Representative Yvette Clarke introduced the Algorithmic Accountability Act of 2022.  The Act covers “automated decision systems” that companies use to make “critical decisions,” including in employment and financial services.  Under the proposed legislation, companies relying on automated decision systems to make “critical decisions” would be required to conduct bias-impact assessments of their technology––using guidelines issued by the Federal Trade Commission (“FTC”)––and to report their findings to the FTC.

State-level regulations of AI that should be on companies’ radars

At the state level, authorities are also pursuing legislation addressing AI use in the employment and consumer finance sectors.  New York City’s Local Law 144, for example, set to take effect on January 1, 2023, will impose bias-audit and notification requirements on employers using AI tools in the hiring or promotion process.  A recent Covington blog post details the need-to-know implications of Local Law 144.  Similarly, in the District of Columbia, a pending bill titled the Stop Discrimination by Algorithms Act of 2021 seeks to prohibit the use of algorithmic decision-making “in a discriminatory manner” in employment, housing, healthcare, and financial lending.  The bill proposes a private right of action for individual plaintiffs, with remedies including injunctive relief, punitive damages, and attorneys’ fees.  Finally, in California, the Fair Employment & Housing Council released draft regulations in March 2022 that would extend the state’s employment discrimination regulations to automated-decision systems.  Under the proposed regulations, actions based on decisions made or facilitated by automated-decision systems may constitute unlawful discrimination if they result in a disparate impact, even if that effect was unintended.
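To make the bias-audit concept concrete, the sketch below illustrates the kind of arithmetic such an audit typically involves: computing selection rates by demographic category and comparing each category’s rate to that of the most-selected category, an “impact ratio” of the sort contemplated by Local Law 144’s implementing rules.  The data, category labels, and review threshold here are hypothetical; an actual bias audit must follow the applicable rule text.

```python
from collections import defaultdict

# Hypothetical audit records: (demographic_category, selected_by_tool).
# A real bias audit would use historical data as specified by the rules.
records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Selection rate per category: share of candidates the tool selected.
totals, selected = defaultdict(int), defaultdict(int)
for category, was_selected in records:
    totals[category] += 1
    selected[category] += was_selected

rates = {c: selected[c] / totals[c] for c in totals}

# Impact ratio: each category's selection rate divided by the highest rate.
best_rate = max(rates.values())
for category, rate in rates.items():
    ratio = rate / best_rate
    # The 0.8 flag echoes the EEOC's "four-fifths" rule of thumb; it is a
    # screening heuristic prompting further review, not a legal standard.
    flag = "  <-- review" if ratio < 0.8 else ""
    print(f"{category}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{flag}")
```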

Responsibly audited AI, accountability, and achievement of DEI goals

In addition to mitigating legal risk under current and prospective laws and regulations, responsibly audited AI can help companies remain accountable to their DEI goals by reducing the effects of implicit bias on decision-making processes.  For example, a 2022 EEOC and U.S. Department of Labor roundtable discussion titled “Decoded:  Can Technology Advance Equitable Recruiting and Hiring?” identified tangible steps that companies across all sectors can take to leverage the benefits of AI:

  • Be Informed:  Companies should be informed consumers and learn about AI products before integrating them into their existing systems and processes.  For instance, companies should discern the types of data pools the AI tools rely on to make decisions and ask whether those data pools are inclusive and account for historic and systemic biases.
  • Audit AI Decisions:  Before implementing AI tools, technology audits, conducted either by the company supplying the AI product or by an independent third party, can help companies learn in advance whether particular tools are capturing certain groups of people while excluding others (the impact-ratio sketch above illustrates one common audit metric).
  • Utilize Diversity Analytics:  After implementing AI tools, diversity analytics can help companies evaluate whether they are achieving their stated DEI goals and reaching underrepresented communities (a simplified illustration follows this list).
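As a simplified illustration of the last two steps (with entirely hypothetical stage names and counts), the sketch below tracks each group’s share of candidates at successive stages of a hiring funnel.  A group whose share drops sharply at a particular stage is a prompt to examine the tools and criteria used at that stage, not proof of discrimination by itself.

```python
# Hypothetical candidate counts per demographic group at each hiring stage.
funnel = {
    "applied":     {"group_a": 600, "group_b": 400},
    "screened_in": {"group_a": 320, "group_b": 120},
    "hired":       {"group_a": 40,  "group_b": 10},
}

for stage, counts in funnel.items():
    total = sum(counts.values())
    shares = ", ".join(f"{group}: {n / total:.0%}" for group, n in counts.items())
    print(f"{stage:<12} {shares}")

# Output shows group_b falling from 40% of applicants to 20% of hires --
# the kind of pattern diversity analytics are meant to surface for review.
```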

Companies should begin, or continue, to invest in responsibly audited AI in order to mitigate ESG/AI-related risk and to potentially benefit from ESG/AI-related opportunities.  Responsibly audited AI can help companies comply with current and prospective federal and state laws or regulations geared toward addressing discriminatory or biased AI outputs.  Moreover, companies that utilize responsibly audited AI will be able to provide greater transparency and accountability to their stakeholders regarding progress toward their ESG and DEI goals.

If you have any questions about the material covered above, please contact members of our ESG or Data Privacy teams.

Molly Prindle

Molly Prindle is an associate in the firm’s Washington, DC office, where she is a member of the Litigation and Investigations Practice Group.

Prior to joining the firm, Molly clerked for Judge Ronald L. Gilman of the U.S. Court of Appeals for the Sixth Circuit and Chief Judge Mark R. Hornak of the U.S. District Court for the Western District of Pennsylvania. Molly earned her J.D. from American University Washington College of Law, where she served as Editor-in-Chief of the American University Law Review.