
Facial Recognition Freeze Frame

The facial recognition market is expected to grow to $8.5 billion by 2025—a major jump from $3.8 billion in 2020, according to analysis by Deloitte.

That growth, however, may be derailed by a continuously evolving legal and regulatory landscape. Following analyses of facial recognition algorithms and increasing pressure from privacy and civil liberties advocates, regulatory bodies around the world are considering prohibitions or moratoriums on the technology’s use—especially by law enforcement and in public places.

China’s new Personal Information Protection Law (PIPL) went into effect in November 2021, limiting the use of facial recognition technology as a means of protecting personal biometric information. In October, members of the European Parliament adopted a resolution demanding safeguards when artificial intelligence tools—including facial recognition—are used by law enforcement. As of Security Technology’s press time, the resolution was being considered for formal passage in December 2021. And the U.S. Congress is considering several similar measures.

There are security benefits to using facial recognition technology. But without proper safeguards in place, it can be used to harm individuals. To help establish these safeguards, the World Economic Forum (WEF) worked with partner agencies to create a global framework for responsible limits on facial recognition.

“Law enforcement agencies could benefit greatly from these technologies to resolve crimes and conduct faster investigations,” according to the WEF. “But, improperly implemented or implemented without due consideration for its ramifications, facial recognition could result in major abuses of human rights and harm citizens, particularly those in underserved communities.”

How and Where Facial Recognition Is Used

Face detection is the ability of software to determine that an object in a field of view is a human face; it is commonly used in surveillance applications. Facial recognition technology (FRT) builds on that capability to determine to whom a detected face belongs. To do this, FRT systems convert each detected face into a numerical representation called a feature vector or embedding.

“This vector, which is unique to each individual, is what allows systems to perform searches,” according to Biometric & Behavioural Mass Surveillance in EU Member States, a report for the Greens/EFA in the European Parliament published in October 2021. “The detected vector can, for example, be used to search for existing identical vectors in a database of known individuals, where vectors are related to an identity.”


“In a different type of usage, the feature vector can be used to track people moving from one camera’s field of view to the next,” the report continued. “In this case, the vector is not used to find a ‘match’ in a database but serves instead to confirm that it is the same individual that appears in different camera feeds.”
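To make those two uses concrete, here is a minimal sketch of feature-vector matching, assuming the embeddings come from some upstream face-recognition model. The vector size, names, noise, and threshold are all invented for illustration.

```python
# A minimal sketch of feature-vector matching, assuming embeddings come
# from an upstream face-recognition model. All data here is synthetic.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two feature vectors; 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
DIM = 128        # a common embedding size; illustrative only
THRESHOLD = 0.8  # acceptance threshold; real systems tune this carefully

# Hypothetical database of known individuals: identity -> feature vector.
database = {name: rng.standard_normal(DIM) for name in ("alice", "bob", "carol")}

# Use 1: one-to-many search. A probe vector from a detected face is scored
# against every record; the best match counts only if it clears the threshold.
probe = database["bob"] + 0.1 * rng.standard_normal(DIM)  # noisy re-capture
name, score = max(
    ((n, cosine_similarity(probe, v)) for n, v in database.items()),
    key=lambda item: item[1],
)
print(name if score >= THRESHOLD else "no match", round(score, 3))

# Use 2: cross-camera tracking. No database lookup; two captures are
# compared directly to confirm they show the same individual.
cam1 = database["alice"] + 0.1 * rng.standard_normal(DIM)
cam2 = database["alice"] + 0.1 * rng.standard_normal(DIM)
print("same person:", cosine_similarity(cam1, cam2) >= THRESHOLD)
```

In real deployments, that threshold trades false matches against missed matches—one reason the error rates of these systems attract so much regulatory scrutiny.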

FRT offers a variety of security-focused benefits and is traditionally deployed to perform two functions: cooperative searches, which verify or authenticate a data subject against a claimed identity (one-to-one matching), and non-cooperative searches, which attempt to identify an unknown data subject (one-to-many matching).

“Live facial recognition is currently the most controversial deployment of FRT: Live video feeds are used to generate snapshots of individuals and then match them against a database of known individuals—the ‘watchlist,’” according to the Greens report.

Law enforcement agencies in 11 of 27 EU member states are using biometric recognition systems in their investigations—particularly FRT. The Greens report found that police in Austria, Finland, France, Germany, Greece, Hungary, Italy, Latvia, Lithuania, Slovenia, and The Netherlands are all using FRT for “ex-post identification”—that is, identifying people after the fact for forensic investigation purposes. Croatia, Cyprus, Czechia, Estonia, Portugal, Romania, Spain, and Sweden are also exploring use of the technology.

“The typical scenario for the use of such technologies is to match the photograph of a suspect (extracted, for example, from previous records or from CCTV footage) against an existing dataset of known individuals (e.g., a national biometric database, a driver’s license database, etc.),” the Greens report explained. “The development of these forensic authentication capabilities is particularly relevant to this study because it entails making large databases ready for searches on the basis of biometric information.”
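One way to picture what “making large databases ready for searches” involves: precompute an embedding for every record and stack the normalized results into a single matrix, so one probe can be scored against the entire gallery at once. The sketch below is an illustration under those assumptions, not a description of any agency’s actual system; the record count and data are synthetic.

```python
# Sketch of preparing a gallery for search: stack L2-normalized embeddings
# into one matrix so a probe can be scored against every record with a
# single matrix-vector product. Synthetic data; a real deployment would
# use a trained model and a dedicated vector index at this scale.
import numpy as np

rng = np.random.default_rng(1)
n_records, dim = 100_000, 128
gallery = rng.standard_normal((n_records, dim))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)  # normalize rows

probe = gallery[42] + 0.1 * rng.standard_normal(dim)  # noisy re-capture
probe /= np.linalg.norm(probe)

scores = gallery @ probe             # cosine similarity to every record
top5 = np.argsort(scores)[::-1][:5]  # candidate list, best match first
for rank, idx in enumerate(top5, start=1):
    print(rank, int(idx), round(float(scores[idx]), 3))
```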

For instance, Germany has used FRT in criminal investigations since 2008 through INPOL, its central criminal information system. The system relies on Oracle software and contains an individual’s name, aliases, date and place of birth, nationality, fingerprints, mugshots, appearance, criminal history, and DNA information. It also includes 5.8 million facial images of convicts, arrestees, suspects, and missing persons that can be matched against faces captured in video surveillance images.

“The facial recognition system compares templates and lists all the matches ordered by degree of accordance,” according to the Greens report. The police “has specific personnel visually analysing the system’s choices and providing an assessment, defining the probability of identifying a person. This assessment can be used in a court of law if necessary.”


Additionally, INTERPOL and Europol have facial recognition systems they use for investigations. The INTERPOL Face Recognition System (IFRS) has facial images from more than 179 countries that—when combined with its automated biometric software application—can identify or verify people by comparing and analyzing patterns, shapes, and proportions of their facial features and contours.

As in the German system, facial images entered into IFRS are encoded by an algorithm and compared to profiles already in the system to create a “candidate” list of the most likely matches, according to INTERPOL.

“We always carry out a manual process—we call this Face Identification—to verify the results of the automated system,” according to an INTERPOL fact sheet. “Qualified and experienced INTERPOL officers examine the images carefully to find unique characteristics that can lead to one of the following results: Potential candidate, No candidate, or Inconclusive.”


That designation is then passed on to the country that provided the original facial image. INTERPOL member countries can also request a search of the IFRS system to check a person of interest at an airport or a border crossing, for instance.
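As a rough picture of how a ranked candidate list might feed that manual review step, the sketch below maps the best automated match score onto the three outcomes the fact sheet names. The thresholds are hypothetical; in the process INTERPOL describes, qualified officers make this determination, not software.

```python
# Sketch of triaging a ranked candidate list into the three outcomes the
# INTERPOL fact sheet names. The thresholds are invented; in the process
# described above, trained examiners make this call, not the software.
def review_outcome(ranked_scores: list[float],
                   high: float = 0.85, low: float = 0.60) -> str:
    """Map the best automated match score to a provisional outcome label."""
    if not ranked_scores:
        return "No candidate"
    best = max(ranked_scores)
    if best >= high:
        return "Potential candidate"
    if best < low:
        return "No candidate"
    return "Inconclusive"

print(review_outcome([0.91, 0.72, 0.40]))  # Potential candidate
print(review_outcome([0.70, 0.65]))        # Inconclusive
print(review_outcome([0.31]))              # No candidate
```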

In contrast, some countries and municipalities are using facial recognition tools or similar technologies as part of a broader surveillance strategy. Amsterdam, The Netherlands, for instance, uses a digital perimeter surveillance system around the Johan Cruijff ArenA that combines facial recognition technology, crowd monitoring, and object detection that flags items such as weapons, fireworks, and drones.

In Marbella, Spain, regional laws prohibit the use of facial and biometric identification without consent. So the city deployed an Avigilon smart camera system with an “appearance search” feature that estimates “unique facial traits, the colour of a person’s clothes, age, shape, gender, and hair color,” according to the Greens report.

“This information is not considered biometric. The individual’s features can be used to search for suspects fitting a particular profile,” the report explained. “Similar technology has been deployed in Kortrijk (Belgium), which provides search parameters for people, vehicles, and animals.”

Initial Regulatory Steps in Europe

In 2018, the European Commission published Artificial Intelligence for Europe, a communication calling for a joint legal framework to regulate AI-related services. The commission also adopted a Coordinated Plan on Artificial Intelligence with similar goals.

A year later, the Council of Europe Commissioner for Human Rights released recommendations on artificial intelligence—including steps for member state authorities to reduce the risk of misuse. The EU’s High-Level Expert Group on Artificial Intelligence also adopted the Ethics Guidelines for Trustworthy Artificial Intelligence to bring EU strategy in line with AI ethical standards.

In 2021, the Council of Europe adopted its Guidelines on Facial Recognition, which called for a moratorium on live FRT and outlined conditions under which law enforcement authorities may use the technology. In April 2021, the European Commission released its proposal for an Artificial Intelligence Act that would place restrictions on FRT use. For instance, FRT systems would not be allowed in public places by police except in response to “serious crimes,” such as terrorism.

The EU’s proposal prompted backlash, including a letter drafted by Access Now, Amnesty International, European Digital Rights, Human Rights Watch, Internet Freedom Foundation, and the Instituto Brasileiro de Defesa do Consumidor, and signed by 170 supporting organizations in 55 countries. The letter argued that the proposal’s approach created a loophole that would allow for the abuse of human rights and enable more mass surveillance.

“These uses of facial and remote biometric recognition technologies, by design, threaten people’s rights and have already caused significant harm,” according to the letter. “No technical or legal safeguards could ever fully eliminate the threat they pose, and we therefore believe they should never be allowed in public or publicly accessible spaces, either by governments or by the private sector.”

Shortly afterwards, the United Nations Office of the High Commissioner for Human Rights released a report on privacy in the digital age. It recommended that governments stop their use of remote biometric recognition, including FRT, in public spaces in real-time until they could demonstrate that there were no accuracy issues or discriminatory effects.

In response, the European Parliament voted in October 2021 in favor of a resolution calling for a ban on the use of FRT by law enforcement in public spaces, as well as bans on private facial recognition databases—like Clearview AI’s—and on predictive policing based on behavioral data.

“Fundamental rights are unconditional,” said Petar Vitanov, European Parliament member representing Bulgaria, in a statement. “For the first time ever, we are calling for a moratorium on the deployment of facial recognition systems for law enforcement purposes, as the technology has proven to be ineffective and often leads to discriminatory results. We are clearly opposed to predictive policing based on the use of AI, as well as any processing of biometric data that leads to mass surveillance.”

While not itself binding, the resolution is expected to be incorporated into the Artificial Intelligence Act, which the European Parliament is amending and will likely vote on later in December.

Outside Action

Observing the activity in Europe, as well as the privacy debates and bans on FRT in the United States, the WEF decided in 2019 to partner with INTERPOL, the Centre for Artificial Intelligence and Robotics of the United Nations Interregional Crime and Justice Research Institute (UNICRI), and The Netherlands police to create a governance framework for law enforcement use of FRT.

“The ambition of this work is to support law- and policymakers across the globe to design an actionable governance framework that addresses key policy considerations in terms of the prevention of untargeted surveillance, the necessity of a specific purpose, the performance assessment of authorized solutions, the procurement processes for law enforcement agencies, the training of professional forensic examiners, and the maintenance of the chain of command for emergency situations,” according to the framework.

The framework, released in October 2021, centers on two components: a set of principles defining the responsible use of facial recognition by law enforcement for investigations, and a self-assessment questionnaire detailing the requirements law enforcement agencies must respect to remain in compliance with those principles.

Through its work with stakeholders, the working group crafted shared principles for the responsible use of FRT by law enforcement. These include:

  1. Respect for human and fundamental rights
  2. Necessary and proportional use
  3. Transparency
  4. Human oversight and accountability
  5. System performance
  6. Risk-mitigation strategies
  7. Training of facial examiners
  8. Use of probe images and reference databases
  9. Image and metadata integrity


In January 2022, The Netherlands police force will begin the pilot phase of testing the governance framework to determine whether it is achievable, relevant, and usable. The WEF is encouraging other law enforcement agencies to participate in pilot testing as well, which it will use to update the framework and craft a final version.

“The rapid deployment of facial recognition technology for law enforcement investigations around the world is arguably among the most sensitive use cases because of the potentially disastrous effects of system errors or misuses in this domain,” according to the framework. “Therefore, there is a pressing need to design and implement a robust governance framework to mitigate these risks.”

The WEF did not return requests for comment for this article.

Megan Gates is editor-in-chief of Security Technology. Connect with her at [email protected]. Follow her on Twitter: @mgngates.
