Beginning January 1, 2023, all employers in New York City that rely on artificial intelligence (AI) in their hiring processes are potentially subject to new requirements: they must ensure their AI tool has undergone a “bias audit” within the past year, provide advance notice to job applicants regarding the use of the tool, and publicly post on their websites a summary of the results of the most recent bias audit and distribution data for the tool.  The law imposes hefty civil penalties on employers that fail to comply.

For months, employers have viewed this new requirement (the first of its kind in the country) with both puzzlement and foreboding, awaiting further guidance from the city as to what exactly it means by a bias audit and who should be providing the requisite certification.  The New York City Department of Consumer and Worker Protection has issued proposed rules to address those questions, with a public hearing scheduled for 11 am on Monday, October 24, 2022.

The proposed rules are notable in four respects:

  • they define when an AI tool (which they refer to as an “automated employment decision tool” or “AEDT”) is subject to the new law;
  • they define who needs to conduct the bias audit;
  • they specify what data needs to be analyzed and what information must be posted on employers’ websites; and
  • they set out how the data should be posted and what additional notices must be provided.

When is the use of an AI tool subject to audit

The proposed rules narrow the scope of covered technology through their definition of an AEDT.  The law states that a bias audit is required whenever an AI tool is used “to substantially assist or replace discretionary decision making.”  The Department of Consumer and Worker Protection is proposing that the standard of substantial assistance or replacement applies in three situations:

  • when the employer relies solely on a simplified output (score, tag, classification, ranking, etc.), with no other factors considered;
  • when an employer uses such a simplified output as one of a set of criteria and weights that output more than any other criterion in the set; or
  • when an employer uses a simplified output to overrule or modify conclusions derived from other factors including human decision-making.

In other words, if the employer is relying on the AI tool to do the heavy lifting in screening applicants at any stage of the hiring process, then it will likely need to comply with the bias audit requirement.  If, on the other hand, the AI tool is used more casually, as but one factor for consideration in screening candidates or perhaps to help flag those who may be the best match for the job description, and it carries no more weight than other criteria, then it would apparently fall outside the audit requirement.
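
For employers trying to map their own processes against these criteria, the three-pronged test reduces to a simple decision rule.  The sketch below (in Python) is purely illustrative: the function name and inputs are hypothetical, and whether a tool “substantially” assists or replaces discretionary decision-making is ultimately a legal judgment, not a programming one.

```python
def aedt_triggers_bias_audit(relies_solely_on_output: bool,
                             output_weighted_most: bool,
                             output_overrides_other_factors: bool) -> bool:
    """Hypothetical restatement of the proposed rules' three-pronged test.

    The tool "substantially assists or replaces" discretionary decision
    making, triggering the bias audit requirement, if any one prong holds.
    """
    return (relies_solely_on_output             # sole basis for the decision
            or output_weighted_most             # weighted above every other criterion
            or output_overrides_other_factors)  # used to overrule other conclusions

# A score used as one equally weighted factor among several: apparently exempt.
print(aedt_triggers_bias_audit(False, False, False))  # False
# A score weighted above every other criterion: bias audit required.
print(aedt_triggers_bias_audit(False, True, False))   # True
```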

Notably, employers that are dabbling with AI and still relying mostly on human decision-making may initially be exempt from the bias audit requirement, but they will need to revisit that analysis as the AI tool gets “smarter,” so to speak, and better understands the types of factors it should be looking for when screening applicants.  Once the tool becomes a relied-upon and predominant factor at any stage of the screening process, the proposed rules indicate that the bias audit requirement will apply.

Who should be conducting the bias audit

The proposed rules state that an “independent auditor” must conduct the bias audit, meaning someone who is not involved in using or developing the AI tool.  This means that even if the employer did not develop the tool, as the user it cannot rely on its own in-house staff to audit the tool’s results for possible bias.  And if the tool is provided by a vendor, the proposed rules seem to contemplate that the vendor will retain an independent third party to conduct the audit.

What data needs to be analyzed

The third party conducting a bias audit is tasked with calculating two sets of numbers:

  • the “selection rate” – calculated by dividing (1) the number of individuals in a particular gender, racial or ethnic category who were selected to advance to the next level in the hiring process or were assigned to a particular classification by the AI tool by (2) the total number of individuals in that gender, racial or ethnic category who had applied or were considered for the position; and
  • the “impact ratio” – for which the calculation depends on whether the AI tool is being used to select and eliminate people, or to score and categorize them.  If the tool handles selections, then the impact ratio is calculated as (1) the selection rate for a specific category divided by (2) the selection rate for the most selected category.  If the tool rates or scores candidates, then the impact ratio is calculated as (1) the average score of all individuals in a category divided by (2) the average score of individuals in the highest-scoring category.

This is fairly standard statistical analysis for claims of adverse impact in employment practices, which is meant to flag employment practices that appear neutral but have a discriminatory effect on a protected group.  The EEOC and other government agencies typically apply a four-fifths rule or 80 percent guideline, whereby an impact ratio of less than 80 percent raises a red flag that there is an adverse impact.  Notably, though, the proposed rules require only that employers post the selection rate/average score and the impact ratio.  Nothing in the law or the proposed rules requires employers to educate job applicants on what the scores mean.
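
To make the arithmetic concrete, the sketch below runs both calculations for a hypothetical selection-type tool.  The applicant counts are invented for illustration, and the 0.8 threshold reflects the EEOC’s four-fifths guideline described above, not a cutoff stated in the law or the proposed rules.

```python
# Invented applicant and selection counts by category (illustration only).
applicants = {"Category A": 200, "Category B": 150, "Category C": 50}
selected   = {"Category A": 120, "Category B": 60,  "Category C": 15}

# Selection rate: number selected in a category / total applicants in it.
selection_rates = {cat: selected[cat] / applicants[cat] for cat in applicants}

# Impact ratio (selection-type tool): each category's selection rate
# divided by the selection rate of the most selected category.
top_rate = max(selection_rates.values())
impact_ratios = {cat: rate / top_rate for cat, rate in selection_rates.items()}

for cat, ratio in impact_ratios.items():
    flag = "  <- below the four-fifths guideline" if ratio < 0.8 else ""
    print(f"{cat}: selection rate {selection_rates[cat]:.2f}, "
          f"impact ratio {ratio:.2f}{flag}")
```

For a scoring-type tool, the same structure would apply with each category’s average score in place of its selection rate, per the second bullet above.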

How the data should be posted

Employers are required to make available on their websites:

  • the date of the most recent bias audit;
  • a summary of the results, including selection rates and impact ratios for all categories; and
  • the date the employer began using the AI tool.

The proposed rules require that this posting be clear and conspicuous on the careers or jobs section of the employer’s website, but they allow employers to meet that requirement with an active and clearly identified hyperlink to the data.

Additional notice requirements

Employers are also required under the law to give candidates at least 10 business days’ notice that they will be using an AI tool, to disclose the job qualifications and characteristics that the tool will assess, and to allow candidates to request an alternative selection process or accommodation.  The proposed rules state that this notice can be posted on the employer’s website, included in the job posting, or individually distributed to job candidates.

Finally, employers must also provide employees with notice of the type of data collected by the AI tool, the source of that data, and the employer’s data retention policy.  The proposed rules provide that employers must either post this information on their websites, or post notice to candidates that the information is available upon written request (and then comply with those requests within 30 days).

Next steps for employers

Employers that would like to comment on the proposed rules can use the contact information in the rules to call in or participate via Zoom.  With the January effective date rapidly approaching, employers should review what AI tools they currently use in their hiring processes and how they are used.  If the tools are provided through a vendor, then the employer should ask the vendor whether it has conducted a bias audit and what information it can share as to the results.  If the employer has developed its own AI tools, it should look for an independent third party that can perform the requisite selection and impact analysis.  Employers should also plan to make space on their websites for posting the results of their audit and the various notices required under the law.

Flummoxed employers are encouraged to seek legal advice on complying with the new law.  Employers may also want to revisit their hiring processes and determine whether the efficiencies gained through the tools exceed the administrative burdens of the New York City law.  That analysis will differ across industries and organizations.

By Tracey I. Levy
