When people think about facial recognition technology (“FRT”), they immediately picture using their faces to unlock their smartphones. But this technology is far more complicated, more useful, and potentially more dangerous.
First, it is important to understand the differences among “facial detection”, “facial characterization”, “facial identification” and “facial verification”. These terms have been defined by the non-profit organization Future of Privacy Forum (https://fpf.org/wp-content/uploads/2019/03/Final-Privacy-Principles-Edits-1.pdf) as follows:
- Facial detection simply detects the presence of a human face and/or facial characteristics, without creating or deriving a facial template.
- In facial characterization, the system uses an automated or semi-automated process to discern a data subject’s general demographic information or emotional state, without creating a unique identifier tracked over time.
- Facial identification is also known as “one-to-many” matching because it searches a database for a reference matching a submitted facial template and returns a corresponding identity.
- Facial verification, finally, is called “one-to-one” verification because it confirms an individual’s claimed identity by comparing the template generated from a submitted facial image with a specific known template generated from a previously enrolled facial image.
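The distinction between “one-to-many” identification and “one-to-one” verification can be made concrete with a minimal sketch. Assume (purely for illustration, this is not any vendor’s actual system) that facial templates are numeric vectors and that two templates match when their cosine similarity exceeds a threshold; the function names and the threshold value are hypothetical.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two template vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def identify(probe, database, threshold=0.9):
    """One-to-many: search an entire database of enrolled templates
    and return the best-matching identity, or None if no template
    clears the threshold."""
    best_id, best_score = None, threshold
    for identity, template in database.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id

def verify(probe, claimed_template, threshold=0.9):
    """One-to-one: compare the probe only against the single template
    enrolled for the claimed identity."""
    return cosine_similarity(probe, claimed_template) >= threshold
```

The sketch makes the privacy asymmetry visible: `verify` touches one enrolled template supplied by the claimed identity, while `identify` necessarily scans every enrolled person in the database, which is why one-to-many matching raises the sharper surveillance concerns discussed below.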
There are many possible uses of facial recognition. In the private sector FRT may be used to keep track of employees’ time and attendance, identify shoppers’ patterns inside stores, implement smart homes, etc. In the public sector, FRT may be used to monitor protests, identify suspects in security footage, check claimed identities at borders, etc.
This relatively new technology brings, besides a wide range of possible implementations, significant concerns regarding privacy, accuracy, racial and gender disparities, data storage and security, and misuse. For instance, depending on the quality of the images compared, people may be falsely identified. In addition, in its current state, FRT is less accurate when identifying women compared to men, young people compared to older people, and people of color compared to white people. Privacy is certainly another concern: without strong policies, it is unclear how long these images might be stored, who might gain access to them or what they may be used for; not to mention that this technology makes it far easier for government entities to surveil citizens and potentially intrude into their lives (see “Early Thought & Recommendations Regarding Face Recognition Technology”, First report of the AXON AI and policing technology Ethics Board, https://www.policingproject.org/axon-fr).
Once the possible implementations and the related risks are understood, the worldwide lack of regulation becomes even more surprising.
Within the European Union, the General Data Protection Regulation obviously applies to FRT. Furthermore, “Guidelines on Facial Recognition” were released on January 28, 2021 by the Consultative Committee of the Council of Europe’s Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (https://rm.coe.int/guidelines-on-facial-recognition/1680a134f3). This latter document includes:
- Guidelines for legislators and decision-makers;
- Guidelines for developers, manufacturers and service providers;
- Guidelines for entities using FRT;
- Rights of data subjects.
When it comes to Italy, several decisions of the Italian Data Protection Authority have drawn particular attention to the topic. Recognizing the innovative potential of FRT as well as the risks it poses to individual rights, the Authority has adopted a more permissive approach toward the private sector’s use of FRT, while issuing stricter decisions with regard to its use by public authorities. For instance, the Authority allowed police forces to use FRT to identify individuals among archived images, but prohibited real-time surveillance using the same technology (see https://www.garanteprivacy.it/web/guest/home/docweb/-/docweb-display/docweb/9040256 and https://www.garanteprivacy.it/web/guest/home/docweb/-/docweb-display/docweb/9575877). On the other hand, the Authority allowed one airport to implement FRT to improve the efficiency of passenger-flow management, so long as images of individuals were not stored (see https://www.garanteprivacy.it/web/guest/home/docweb/-/docweb-display/docweb/8789277).
The European Commission, in light of the complexity of the situation and the need for strong and harmonised legislative action, presented on April 21, 2021 its “Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence” (https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206). On June 18, 2021, this Proposal was already the subject of a joint opinion of the EDPB and the EDPS (https://edpb.europa.eu/our-work-tools/our-documents/edpbedps-joint-opinion/edpb-edps-joint-opinion-52021-proposal_en), in which they called for a general ban on the use of FRT for:
- Automated recognition of human features in publicly accessible spaces;
- Categorization of individuals into clusters according to ethnicity, gender, etc., based on biometric features;
- Inference of individuals’ emotions.
What the European Commission is doing exemplifies a more widespread attitude of legislators worldwide towards artificial intelligence in general and FRT in particular. These technologies are increasingly present in our lives and are constantly evolving. Consequently, there is growing demand, from both public and private actors, for clear rules to govern this new technology and ensure that individual rights are safeguarded. Hopefully, in the coming months and years the situation will become clearer.
Flavio Monfrini / Michele Galluccio