Companies developing artificial intelligence-based products and services have been on the lookout for laws and regulations aimed at their technology. In the case of facial recognition, new federal and state laws seem closer than ever. Examples include Washington State's recent data privacy and facial recognition bill (SB 5376; recent action on March 6, 2019) and the federal Commercial Facial Recognition Privacy Act of 2019 (S. 847, introduced March 14, 2019). If enacted, these new laws would join others, like Illinois' Biometric Information Privacy Act (BIPA) and the California Consumer Privacy Act (CCPA), in governing facial recognition systems and the collection, storage, and use of face data. But even if these new bills fail to become law, they signal how the technology is likely to be regulated in the US and suggest, as discussed below, the kinds of litigation risks organizations may confront in the future.
What is Face Data and Facial Recognition Technology?
Definitions of face data often involve information that can be associated with an identified or identifiable person. Face data may be supplied by a person (e.g., an uploaded image), purchased from a third party (e.g., a data broker), obtained from publicly available data sets, or collected via audio-video equipment (e.g., surveillance cameras).
Facial recognition refers to a multi-step process: extracting data from a camera's output signal (still image or video); locating faces in the image data (an object-detection step typically performed with machine learning algorithms); extracting unique features from each face (e.g., facial landmarks) that distinguish it from other faces; and comparing those features against a database of known faces to determine whether there is a match.
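The final matching step can be sketched in a few lines of code. The sketch below is purely illustrative: it assumes feature vectors ("embeddings") have already been produced by a separate detection and feature-extraction model, and the vector values, gallery names, and 0.6 distance threshold are all hypothetical.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, gallery, threshold=0.6):
    """Return the best-matching identity for a probe embedding,
    or None if no enrolled face is within the distance threshold.

    probe     -- feature vector extracted from the detected face
    gallery   -- dict mapping identity -> enrolled feature vector
    threshold -- illustrative cutoff; real systems tune this value
    """
    best_name, best_dist = None, float("inf")
    for name, enrolled in gallery.items():
        d = euclidean(probe, enrolled)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

# Toy embeddings: real systems use 128- or 512-dimensional vectors
# produced by a trained model, not hand-written values like these.
gallery = {
    "alice": [0.10, 0.80, 0.30],
    "bob":   [0.90, 0.20, 0.50],
}
print(identify([0.12, 0.79, 0.28], gallery))  # close to "alice"
print(identify([0.50, 0.50, 0.90], gallery))  # no match within threshold
```

The key design point for regulators is that the "match" is probabilistic: the system reports the nearest enrolled face under a tunable threshold, which is why accuracy and bias testing (discussed below) focuses on error rates rather than binary correctness.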
Advances in the field of computer vision, including a machine learning technique called convolutional neural networks (ConvNets or CNNs), have turned what used to be a laborious manual process of identifying faces in image data into a highly accurate, automated process performed by machines in near real time. Online sources of face images, including Facebook, Flickr, Twitter, Instagram, YouTube, and news media and other websites, together with images collected by government agencies (from airport cameras, among other sources), provide the data used to train and test CNNs.
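For readers curious what a "convolution" actually computes, the toy sketch below implements the single sliding-window operation at the heart of a CNN layer. The image and kernel values are hypothetical; real face recognition networks stack many such layers whose kernel values are learned from training data rather than hand-picked.

```python
def convolve2d(image, kernel):
    """Slide a small kernel over a 2-D image (valid mode, stride 1)
    and return the resulting feature map. Stacked in many learned
    layers, this operation is the core computation of a CNN."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A 3x3 "image" with a bright patch and a 2x2 kernel that responds
# to vertical edges (values chosen for illustration only)
image = [[1, 1, 0],
         [1, 1, 0],
         [0, 0, 0]]
kernel = [[1, -1],
          [1, -1]]
print(convolve2d(image, kernel))  # strongest response at the bright/dark boundary
```

Early layers of a trained network learn kernels that respond to edges and textures like this one; deeper layers combine those responses into detectors for eyes, noses, and ultimately whole-face features.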
Why are Lawmakers Addressing Facial Recognition?
Among the several AI technologies attracting lawmakers' attention, facial recognition seems to top the list, due in part to its rapidly expanding use, especially in law enforcement, and to the civil and privacy rights implications associated with the collection and use of face data, often without consent, by both private and public organizations.
From a privacy perspective, Microsoft’s President Brad Smith, writing in 2018, expressed a common refrain by those concerned about facial recognition: unconsented surveillance. “Imagine a government tracking everywhere you walked over the past month without your permission or knowledge. Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech. Imagine the stores of a shopping mall using facial recognition to share information with each other about each shelf that you browse and product you buy, without asking you first. This has long been the stuff of science fiction and popular movies – like ‘Minority Report,’ ‘Enemy of the State’ and even ‘1984’ – but now it’s on the verge of becoming possible.”
Beyond surveillance, others have expressed concerns about the security of face data. Unlike non-biometric data, which is replaceable (think credit card numbers or passwords), face data represent intimate, unique, and irreplaceable characteristics of a person. Once hackers have exfiltrated a person's face data from a business's computer system, the harm cannot be undone by issuing a replacement: the person's privacy is permanently threatened.
From a civil rights perspective, problems with bias in facial recognition systems are well documented. The issue made headlines in July 2018 when the American Civil Liberties Union (ACLU) reported that a widely used, commercially available facial recognition program "incorrectly matched 28 members of Congress, identifying them as other people who have been arrested for a crime." The report noted that the members of Congress who were "falsely matched with the mugshot database [] used in the test include Republicans and Democrats, men and women, and legislators of all ages, from all across the country." It also found that the mismatches disproportionately involved members who are people of color, raising questions about the accuracy and quality of the tested facial recognition technique and revealing its possible inherent bias. <https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28> Bias may arise, for example, if the face data set used to train a CNN contains predominantly images of white men: a model trained on that data may not perform well (or "generalize") when asked to identify faces outside that demographic. The bias issue has led many to call for the meaningful, ethics-based design of AI systems.
But even beneficial uses of facial recognition technology and face data have been criticized, largely because the people whose face data are collected and used are typically not given an opportunity to consent (and in many cases do not even know their face data is being used). Thus, automatically identifying people in uploaded images (a process called image "tagging"), improving a person's experience at a public venue, providing access to a computer system or a physical location, tracking an employee's working hours and movements for safety purposes, and personalizing advertisements and newsfeeds in a user's browser are all arguably beneficial uses of face data, yet they draw criticism because they are often conducted without the user's consent, or with access to the product or service conditioned on giving it.
As much as the many concerns about facial recognition may have piqued lawmakers' interest in regulating face data, legislation like the bills mentioned above is just as likely to arise because stakeholders and vocal opponents have called for more certainty in the legal landscape. Microsoft, for one, called in 2018 for regulating facial recognition technology. "The only effective way to manage the use of technology by a government is for the government proactively to manage this use itself," Brad Smith wrote, his words clearly directed to Capitol Hill as well as state lawmakers in Olympia. "And if there are concerns about how a technology will be deployed more broadly across society, the only way to regulate this broad use is for the government to do so. This in fact is what we believe is needed today – a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission."
Comparing the Washington, DC and Washington State Bills
[For a summary of Illinois’ face data privacy law, click here.]
If S. 847 becomes law, it would cover any person (other than the federal government, state and local governments, law enforcement agencies, national security agencies, or intelligence agencies) that collects, stores, or processes facial recognition data from still or video images, including any unique attribute or feature of a person's face used by facial recognition technology to assign a unique, persistent identifier or to uniquely identify a specific individual. SB 5376, in contrast, would cover any natural or legal person that, alone or jointly with others, determines the purposes and means of the processing of personal data by a processor, including personal data from a facial recognition technology. While the federal bill would not cover government agency use, SB 5376 would permit Washington state and local government agencies, including law enforcement agencies, to conduct ongoing surveillance of specified individuals in public spaces only if such use is in support of law enforcement activities and either (a) a court order has been obtained to permit the use of facial recognition services for that ongoing surveillance, or (b) there is an emergency involving imminent danger or risk of death or serious physical injury to a person.
On the issue of consent, S. 847 would require a business that knowingly uses facial recognition technology to collect face data to obtain a person's affirmative (opt-in) consent. To the extent possible, where facial recognition technology is present, a business must provide the person a concise notice that the technology is in use and, if contextually appropriate, indicate where the person can find more information about the business's use of it, along with documentation explaining the technology's capabilities and limitations in terms the person can understand. SB 5376 would also require controllers to obtain consent from consumers before deploying facial recognition services in physical premises open to the public. Placing a conspicuous notice in such premises that clearly conveys that facial recognition services are being used would constitute a consumer's consent to those services when the consumer enters the premises.
Under S. 847, affirmative consent would be effective only if the business makes available a notice describing, in terms persons can understand, its specific practices regarding the collection, storage, and use of facial recognition data. The notice must cover the reasonably foreseeable purposes (or examples) for which the business collects, uses, or shares information derived from facial recognition technology; its data retention and de-identification practices; and whether a person can review, correct, or delete information derived from the technology. Under SB 5376, processors that provide facial recognition services would be required to provide documentation that includes general information explaining the capabilities and limitations of the technology in terms that customers and consumers can understand.
S. 847 would prohibit a business from knowingly using facial recognition technology to discriminate against a person in violation of applicable federal or state law (presumably civil rights laws, consumer protection laws, and others), from repurposing facial recognition data for a purpose different from the one presented to the person, and from sharing facial recognition data with an unaffiliated third party without affirmative consent (separate from the opt-in affirmative consent noted above). SB 5376 would prohibit processors that provide facial recognition services from allowing controllers to use those services to unlawfully discriminate under federal or state law against individual consumers or groups of consumers.
S. 847 would require meaningful human review before any final decision is made based on the output of facial recognition technology, if that decision may result in reasonably foreseeable and material physical or financial harm to an end user or may be unexpected or highly offensive to a reasonable person. SB 5376 would require controllers that use facial recognition for profiling to employ meaningful human review prior to making final decisions based on such profiling where those decisions produce legal effects, or similarly significant effects, concerning consumers. Such decisions include, but are not limited to, denial of consequential services or support, such as financial and lending services, housing, insurance, education enrollment, criminal justice, employment opportunities, and health care services.
S. 847 would require a regulated business that makes a facial recognition technology available as an online service to make available an application programming interface (API) enabling independent testing companies to conduct reasonable tests of the technology for accuracy and bias. SB 5376 would similarly require providers of commercial facial recognition services that make their technology available as an online service for developers and customers to use in their own scenarios to make available an API or other technical capability, chosen by the provider, enabling third parties legitimately engaged in independent testing to conduct reasonable tests of those services for accuracy and unfair bias.
S. 847 would provide exceptions for certain uses of facial recognition technology, including products or services designed for personal file management or for photo or video sorting or storage, provided the technology is not used for the unique personal identification of a specific individual, as well as uses involving the identification of public figures for journalistic media created in the public interest. The law would also provide exceptions for the identification of public figures in copyrighted material for theatrical release, for use in an emergency involving imminent danger or risk of death or serious physical injury to an individual, and for certain security applications. Even so, these exceptions would not permit businesses to conduct mass scanning of faces in spaces where persons do not have a reasonable expectation that facial recognition technology is being used on them.
SB 5376 would provide exceptions for complying with federal, state, or local laws, rules, or regulations, or with a civil, criminal, or regulatory inquiry, investigation, subpoena, or summons by federal, state, local, or other governmental authorities. The law would also provide exemptions for cooperating with law enforcement agencies concerning conduct or activity that the controller or processor reasonably and in good faith believes may violate federal, state, or local law, and for investigating, exercising, or defending legal claims, preventing or detecting identity theft, fraud, or other criminal activity, or verifying identities. Other exceptions and exemptions are also provided.
Under S. 847, violations of the law would be treated as unfair or deceptive acts or practices under Section 18(a)(1)(B) of the Federal Trade Commission Act (15 USC 57a(a)(1)(B)). The FTC would enforce the new law and would have authority to assert its penalty powers pursuant to 15 USC 41 et seq. Moreover, a state attorney general, or any other officer of a state authorized by the state to do so, may, upon notice to the FTC, bring a civil action on behalf of state residents upon a belief that an interest of those residents has been or is being threatened or adversely affected by a covered business's violation of one of the law's prohibitions. The FTC may intervene in such a civil action. SB 5376 would provide that the state's attorney general may bring an action in the name of the state, or as parens patriae on behalf of persons residing in the state, to enforce the law.
S. 847 would also require the FTC, in consultation with the National Institute of Standards and Technology (NIST), to promulgate regulations within 180 days after enactment describing basic data security, minimization, and retention standards; defining actions that are harmful and highly offensive; and expanding the list of exceptions noted above to cases where it is impossible for a business to obtain affirmative consent from, or provide notice to, persons. S. 847 would not preempt tougher state laws covering facial recognition technology and the collection and use of face data, or other state or federal privacy and security laws.
Part II of this post will discuss the impact of facial recognition and face data regulation on businesses.