Developers of artificial intelligence-based video interviewing systems promote their technology as one that helps human resource professionals on-board new talent faster, less expensively, and with greater insight compared to traditional human-only interviewing techniques. They also contend that their systems can avoid some of the potential implicit biases that may appear before, during, and after interviews, thus reducing risks to companies while leveling the playing field for qualified job applicants.
But because those AI systems have the potential to collect, store, and use data reflecting a job candidate’s face and voice, lawmakers in Illinois passed the Artificial Intelligence Video Interview Act on May 29, 2019, by a vote of 114-0. The Act places restrictions on companies using AI video interviews without consent and adds to workplace restrictions that have been in place in Illinois for over a decade following enactment of the state’s Biometric Information Privacy Act (BIPA) in 2008. It is reported that Governor Pritzker, a Democrat, will sign the new legislation. In doing so, he will solidify the state’s preference for so-called “hard law,” command-and-control style regulatory schemes over the soft-law approaches and outright bans that others have chosen when it comes to governing certain controversial AI technology use cases.
The law’s video destruction requirement looks good on paper, but privacy hawks are sure to point out that the law may not address destruction of sensitive behavioral data derived from AI-based video interview systems. Businesses using AI-based video interviewing systems to assess applicants for jobs based in Illinois may want to consider how they handle such “derived from” data as part of their compliance with the new law (assuming it is signed into law), other applicable laws (such as BIPA and other states’ privacy laws), and their internal data assurance and data privacy policies.
How AI Video Interviews Work
Job applicants today may be asked by a potential employer to upload a recording of themselves answering a pre-determined set of questions. Applicants may also be asked to participate in live video-chat interviews and in-person interviews, both of which may be recorded. In each case, AI-based video interviewing technologies can help companies identify micro facial expressions and vocal intonations in those recordings, ones that might escape a human interviewer’s perception. Combined with other features about a candidate (for example, demographic information submitted on a job application), an AI system can help HR professionals make a binary hire/no-hire decision using a high-dimensional feature set unique to each candidate.
As shown in Figure 1, such a system may take audio-video data collected during a job candidate’s interview and process the audio (speech) into text using automated speech recognition (ASR) algorithms to create a transcript. The ASR algorithm can do this by extracting features from an acoustic image (a two-dimensional image, time-sliced into, say, 10 millisecond frames) corresponding to the input sound. For example, a spectral image of the sound corresponding to the utterance “hello” may be processed by an ASR algorithm to generate the phonetic letters “h-e-l-o” and then the word “h-e-l-l-o.” Once the sound has been converted into text, a machine learning (ML) model (such as a cloud-based convolutional or recurrent neural network) can be used to classify (label) the text according to its semantic and/or sentiment characteristics for purposes of estimating the speaker’s affective state. The interviewer can then seek insight into the applicant’s verbal responses, degree of emotional engagement, and level of positivity exhibited, which could be compared to other applicants. The ML algorithms may be trained, for example, using available speech datasets (labeled data). As shown in Figure 1, data derived from the original audio-video recording may be stored in one or more databases.
FIG. 1. Basic audio-video data workflow beginning with extracting features from time-series data (Xt), which may be stored and then used as input into an ML model to produce discrete labels (classifications) over time (Yt) (e.g., confusion/joy/surprise, or simply a discrete binary hire/no-hire).
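The front end of the ASR step described above can be sketched in a few lines of Python. This is a simplified illustration, not any vendor’s implementation: it slices a waveform into 10 millisecond frames and computes a magnitude spectrum for each, producing the kind of two-dimensional “acoustic image” from which phonetic features would be extracted. The sample rate, frame width, and synthetic tone are assumptions chosen for the example.

```python
import numpy as np

def spectrogram_frames(waveform, sample_rate=16_000, frame_ms=10):
    """Slice a 1-D waveform into fixed-width frames and take the
    magnitude FFT of each, yielding a 2-D "acoustic image"
    (time x frequency) of the kind an ASR front end consumes."""
    frame_len = sample_rate * frame_ms // 1000  # samples per 10 ms slice
    n_frames = len(waveform) // frame_len
    frames = waveform[: n_frames * frame_len].reshape(n_frames, frame_len)
    # rfft keeps only the non-negative frequencies of a real-valued signal
    return np.abs(np.fft.rfft(frames, axis=1))

# One second of a synthetic 440 Hz tone standing in for recorded speech
t = np.arange(16_000) / 16_000
spec = spectrogram_frames(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # (100, 81): 100 frames x 81 frequency bins
```

A production system would apply windowing, overlap, and a mel-scale filter bank before handing the image to a neural network, but the time-sliced structure is the same.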
On the video side, the AI system may pre-process the video data, if necessary, to align the speaker’s face toward a face-forward position so that all the traditional facial landmarks may be tagged. The “reconstructed” face and landmarks data may be stored. Then, an ML model (e.g., a convolutional neural network) trained on face datasets can be used to classify time-series image slices for purposes of estimating the speaker’s current and changing affective state, such as classifying the candidate’s emotional state over time as “joy” or “confusion” or some other emotion during an interview, and the degree of those emotional states, which again may be compared to others. That assessed data may be stored for future use.
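The per-frame classification step can be illustrated with a toy stand-in for the trained model. Real systems apply a convolutional network to aligned face images; here, a nearest-centroid rule over hypothetical per-frame feature vectors (Xt) shows how a discrete emotion label sequence (Yt) is produced over the course of an interview. All names, dimensions, and values are assumptions for illustration.

```python
import numpy as np

EMOTIONS = ["joy", "confusion", "surprise"]

# Stand-in "trained model": one centroid per emotion in an 8-dimensional
# feature space. A real system would learn these boundaries via a CNN.
centroids = {label: np.full(8, float(i)) for i, label in enumerate(EMOTIONS)}

def classify_frames(frames):
    """Map each frame's feature vector (Xt) to the nearest emotion
    centroid, producing a discrete label sequence (Yt) over time."""
    labels = []
    for x in frames:
        dists = {label: np.linalg.norm(x - c) for label, c in centroids.items()}
        labels.append(min(dists, key=dists.get))
    return labels

# Three frames synthesized close to each centroid in turn
frames = [centroids[e] + 0.1 for e in EMOTIONS]
print(classify_frames(frames))  # ['joy', 'confusion', 'surprise']
```

The label sequence, and any scores behind it, are exactly the kind of “derived data” whose retention the new law does not expressly address.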
Data collected during video interviews is considered biometric information, but, as described above, it also reflects a person’s behavioral traits, making the data itself, and algorithmic assessments of the data, arguably some of the most sensitive from a privacy perspective.
The Artificial Intelligence Video Interview Act
The Illinois law would place restrictions on businesses that (1) request applicants to agree to have their job interviews recorded and (2) use artificial intelligence technology to process those recordings to assess candidates’ fitness for the job. Before asking an applicant to submit a video interview for a position based in Illinois, a regulated business would be required to do the following:
(1) Notify each applicant before the interview that artificial intelligence may be used to analyze the applicant’s video interview and consider the applicant’s fitness for the position.
(2) Provide each applicant with information before the interview explaining how the artificial intelligence works and what general types of characteristics it uses to evaluate applicants.
(3) Obtain, before the interview, consent from the applicant to be evaluated by the artificial intelligence program as described in the information provided. An employer may not use artificial intelligence to evaluate applicants who have not consented to the use of artificial intelligence analysis.
The Illinois law would prohibit an employer from sharing an applicant’s video, except with persons whose expertise or technology is necessary to evaluate an applicant’s fitness for a position. Upon request from the applicant, the law would require employers, within 30 days after receipt of the request, to delete the applicant’s recorded interview and instruct any other persons who received copies of the applicant’s video interview to also delete the video, including all electronically generated backup copies. The law would require such other persons to comply with the employer’s instructions.
Leaving Personal Data Crumbs Behind
Aside from algorithmic decision-making systems (not all of which employ AI technology), AI use cases involving the collection, storage, and processing of face data have drawn more attention from lawmakers than most other AI use cases. This focus is driven in large part by privacy concerns surrounding human face data, and it often results in strict measures governing collection, storage, and use of that data. Notably, the Illinois AI video interview law only requires deletion of an applicant’s video recording, upon the applicant’s request, by the business and any third parties that received the recording. The law does not explicitly state that feature (vector) data extracted from the video, or the AI model’s output (i.e., “derived data”), must be deleted, though future court interpretations of the term “video interviews” and/or future implementing regulations could very well expand what it means to destroy a video.
Absent such guidance, however, businesses that use AI-based video interviews may wish to consider developing clearly defined terms of service and/or privacy policies that address how job interview audio-video recordings will be used, including how data derived from those recordings will be used, as part of the company’s notification and consent policies. While a business must delete job interview video recordings under the new law if asked by applicants, it may have reasons for wanting to retain feature datasets and model outputs, for example in the case of no-hire decisions (for purposes of potential litigation). In those instances where an applicant who agreed to an AI-processed video interview is hired for a job in Illinois, the business may need to consider BIPA’s requirements related to data derived from a biometric identifier that may apply to the new employee.
A business that uses AI-based video interviews should also consider the different ways its systems may be deployed so that it can appropriately respond to an applicant’s video destruction request. For example, as reflected in Figure 2, an AI system may involve the interviewee’s laptop camera located in Texas, a video recording saved on the company’s server in Illinois, video data sent to and stored at a cloud server in Virginia for processing by an ML algorithm, and the company’s HR personnel accessing the feature dataset and model output via a desktop application running on a computer in California, all related to a job “based in” Illinois. To that end, businesses that need to comply with the new law may need to track where the video recording and derived data are stored, and may also want to revisit the terms of service they agreed to with third parties who provide cloud-based AI platform services that process their job applicant interview videos, to ensure that those terms of service (and third-party privacy policies) are consistent with what the business is telling job candidates about video and data destruction.
FIG. 2. Hypothetical video interview scenario: Illinois (Company’s location and on-prem web/data servers); Texas (job seeker uses browser-based UI for video interview served from Co.’s data server); Virginia (cloud server for ML video data processing; local storage or sends to Co.’s database); California (company’s HR accesses video and output data on cloud platform and Co.’s database)
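One way to operationalize that tracking is a simple data inventory mapping each artifact to everywhere it lives, so a deletion request can be turned into a concrete purge list. This is a minimal sketch; the artifact names and locations are hypothetical, loosely following the Figure 2 scenario.

```python
# Hypothetical inventory of every copy of an applicant's recording and
# the data derived from it, keyed by storage location (all illustrative).
INVENTORY = {
    "raw_video": ["on-prem server (Illinois)", "cloud bucket (Virginia)"],
    "feature_vectors": ["cloud database (Virginia)"],
    "model_output": ["company database (Illinois)"],
    "backup_copies": ["cloud snapshot (Virginia)"],
}

def deletion_plan(request_scope):
    """Return the locations that must be purged for a given request.
    The statute clearly covers the video and its backups; whether
    derived data must also be deleted is the open question noted above."""
    return {artifact: INVENTORY[artifact]
            for artifact in request_scope if artifact in INVENTORY}

# A narrow reading of the law: purge only the recording and its backups
plan = deletion_plan(["raw_video", "backup_copies"])
```

A business taking a conservative view could simply widen `request_scope` to cover derived artifacts as well, which also makes third-party purge instructions easy to generate from the location lists.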
As lawmakers and stakeholders continue to scrutinize AI technology use cases, especially those raising privacy, disparate-impact, and other civil rights concerns, regulated businesses that make or use AI technologies may need to identify where a law’s data handling and destruction requirements do not address all possible data privacy issues. In doing so, a company can decide how best to address gaps in relevant laws and regulations as part of its risk mitigation processes.