On October 30, 2023, President Biden issued Executive Order 14110, aiming to ensure the safe, secure, and trustworthy development and use of artificial intelligence (AI). In compliance, the U.S. Department of Labor's Office of Federal Contract Compliance Programs (OFCCP) has released guidance for federal contractors on preventing discrimination in AI-driven hiring practices.
Although the OFCCP focuses on government contractors' duty to use AI in compliance with the law, this guidance is useful not only for federal contractors but also for any employer that uses AI for employment-related decisions. In April 2023, the Consumer Financial Protection Bureau, the Department of Justice Civil Rights Division, the EEOC, and the FTC issued a "Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems." The Joint Statement explained, literally in bold letters: "Automated Systems May Contribute to Unlawful Discrimination and Otherwise Violate Federal Law." Since April 2023, several more agencies have joined the pledge. The agencies "pledge[d] to vigorously use [their] collective authorities to protect individuals' rights regardless of whether legal violations occur through traditional means or advanced technologies." So, whether you have a federal contract or not, the OFCCP's advice will help you avoid unwanted government attention.
Starting with the basics, AI is a machine-based system that can perform actions that typically require human intelligence. Simply put, AI can take in an enormous amount of data, detect patterns in that data, and offer suggestions that follow those patterns. Every time your phone suggests how to complete a sentence, or you ask your GPS for directions, or Netflix suggests something for you to watch, you are using AI.
For example, your phone might suggest how you want to respond to a text. Your phone is not "thinking" as we understand the word. Rather, it has recognized how most people respond to texts like the one you received, and it suggests a response based on that pattern. So, if you receive a text that says, "Thank you," your phone will probably suggest "You're welcome."
Or, to take another step, if you start typing, "I look forward," your phone may suggest "to seeing you." That is the pattern the phone's AI has detected, and it will perpetuate that pattern unless it is told not to.
Since the advent of ChatGPT in November 2022, employers have seen the possibilities of AI in employment decisions. Employers widely use AI to streamline workflows and assist in decision-making processes. Just as AI can respond to a "thank you" text or complete a sentence, it can detect a pattern in employment decisions and perpetuate it. Using this pattern detection and perpetuation ability, AI can automate various HR tasks, from resume screening to performance evaluations. AI can also help HR professionals sort through resumes or determine criteria for employment decisions, such as hiring or promotion.
Here is the rub: AI's eagerness to perpetuate the patterns it recognizes can lure well-meaning but careless employers into actions that violate federal discrimination laws. AI can pick up discriminatory patterns in data even when humans do not notice them. Having recognized a discriminatory pattern, AI will apply that pattern in its output, thus embedding and perpetuating the discrimination it noticed. In other words, just as children are much better at imitating their parents than following their parents' instructions, AI is much better at noticing and applying discriminatory patterns than complying with efforts to eliminate discriminatory outcomes. So, if an AI discriminates, it is generally because it was trained on existing data or modeled on behavior or goals set in the human world, and that data or behavior turns out to be discriminatory.
None of that will serve as an excuse for companies using AI for hiring or promotion purposes. They must ensure their AI systems do not perpetuate unlawful bias or discrimination.
To this end, the OFCCP outlines several compliance obligations:
- Contractors must maintain records of AI system usage and ensure their confidentiality.
- Contractors must provide necessary information about their AI systems during compliance evaluations.
- Contractors must accommodate applicants or employees with disabilities in their AI-driven processes.
The OFCCP investigates the use of AI in employment decisions during compliance evaluations and complaint investigations. Contractors must validate AI systems that have an adverse impact on protected groups, ensuring these systems meet the requirements of the Uniform Guidelines on Employee Selection Procedures (UGESP).
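To make "adverse impact" concrete: under UGESP's well-known "four-fifths rule," a selection rate for any group that is less than 80% of the rate for the highest-selected group is generally regarded as evidence of adverse impact. The sketch below illustrates that arithmetic only; the group names and counts are hypothetical, and a real analysis would involve counsel and may require additional statistical tests.

```python
# Hypothetical sketch of the UGESP "four-fifths rule" check.
# All group names and counts below are illustrative, not real data.

def selection_rates(applicants, hires):
    """Selection rate (hires / applicants) for each group."""
    return {g: hires[g] / applicants[g] for g in applicants}

def adverse_impact_ratios(applicants, hires):
    """Each group's selection rate divided by the highest group's rate."""
    rates = selection_rates(applicants, hires)
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

applicants = {"group_a": 100, "group_b": 100}
hires = {"group_a": 60, "group_b": 40}

for group, ratio in adverse_impact_ratios(applicants, hires).items():
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Here group_b's selection rate (40%) is only two-thirds of group_a's (60%), below the 80% threshold, so this hypothetical screening tool would warrant validation under UGESP before continued use.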
To protect your client against lawsuits or enforcement actions based on AI bias, you should advise your client on AI governance, that is, the ability to direct, manage, and monitor an organization's AI activities. The OFCCP suggests several AI governance guidelines, including:
- Inform applicants and employees about the use of AI, including how the employer will capture, use, and protect their data.
- Ensure human oversight of hiring and promotion decisions. AI is not a "set it and forget it" device. A human must verify that the AI is acting in compliance with the law, and that human should get involved sooner rather than later. Imagine how much better off Amazon would have been if someone had noticed earlier that its experimental recruiting tool was downgrading resumes that mentioned the word "women's."
- Regularly monitor AI system outputs for disparate impacts and take steps to mitigate any identified biases. This includes assessing whether the AI system's training data may reproduce existing workplace inequalities.
- If using an off-the-shelf, vendor-created AI system, as most companies do, verify that the vendor maintains records consistent with OFCCP requirements and ensure the system is fair, reliable, and transparent. Contractors remain responsible for compliance even when using third-party tools.
By implementing these guidelines, federal contractors and other employers can leverage AI’s benefits while safeguarding against its potential pitfalls. This proactive approach aligns with the broader goal of fostering an inclusive and equitable workplace in the age of AI.