On October 31, 2019, Elizabeth Denham, the UK’s Information Commissioner, issued an Opinion and an accompanying blog post urging police forces to slow down adoption of live facial recognition technology and take steps to justify its use. The Commissioner calls on the UK government to introduce a statutory and binding code of practice on the use of biometric technology such as live facial recognition. The Commissioner also announced that the ICO is separately investigating the use of facial recognition by private sector organizations, and will report on its findings in due course.
The Opinion follows the ICO’s investigation into trials of live facial recognition technology conducted by the Metropolitan Police Service (MPS) and South Wales Police (SWP). The investigation was triggered by the recent UK High Court decision in R (Bridges) v The Chief Constable of South Wales (see our previous blog post here), in which the court held that the SWP’s use of facial recognition technology was lawful.
The ICO had intervened in the case. In the Opinion, the Commissioner notes that, in some areas, the High Court did not agree with the Commissioner’s submissions. The Opinion states that the Commissioner respects and acknowledges the decision of the High Court, but does not consider that the decision should be seen as a blanket authorization to use live facial recognition in all circumstances.
Key highlights from the Opinion include the following:
- The use of live facial recognition for law enforcement purposes constitutes “sensitive processing” as it involves processing biometric data for the purpose of uniquely identifying an individual.
- Such sensitive processing relates to all facial images captured and analyzed by the software, irrespective of whether an image yields a match to a person on a watchlist, and even where the biometric data of unmatched persons is deleted within a short period of time.
- Controllers must identify their lawful basis for processing and articulate why the strict necessity threshold for sensitive processing for law enforcement purposes is met before the processing starts, by means of a data protection impact assessment (“DPIA”) and appropriate policy document (as detailed under the DPA 2018).
- The inclusion of an image on a watchlist should meet the strict necessity threshold for processing, and watchlists are expected to be limited in size and only include images that are accurate.
- To mitigate the risk of bias against gender or ethnic groups, agencies should complete an Equality Impact Assessment and review it regularly against legal developments.
This increased scrutiny of facial recognition technology and public sector use of algorithmic systems is a trend across Europe. Other relevant developments include the following:
- The French data protection authority (the CNIL) announced on October 29, 2019 that plans to trial facial recognition tools at schools in southern France do not comply with data protection laws, weeks after the Swedish data protection authority fined a school over a similar scheme.
- The European Data Protection Board issued guidelines in July 2019 on processing personal data through video devices, which discuss in detail the data protection implications of using facial recognition technology.
- In June 2019, the UK Government issued a guide to using AI in the public sector (see our previous blog post here).
- In June 2019, an expert committee set up by the Council of Europe also issued for consultation a set of draft recommendations on public sector use of AI and algorithmic systems.
- Facial recognition technology was also called out as an example of “critical concerns raised by AI” in the European Commission’s High Level Expert Group’s Ethics Guidelines on AI from April 2019 (see our previous blog post here).