
Facial Recognition Technology: The Good, the Bad, and the Future

The demands of a COVID-19 world are causing developers of facial recognition technology to balance innovation with growing privacy concerns.

In some sectors, the pandemic has offered opportunities to refine facial recognition technology and created new potential use cases. For example, the U.S. Department of Homeland Security is in talks to deploy new algorithms to identify subjects wearing masks. A few years ago, the San Jose airport deployed facial recognition technology to match travelers' faces against their passport photos, easing verification lines and effectively automating the process.

Other industries are seeing benefits from facial recognition technology as well. Take the banking industry's City Bank of Florida and JPMorgan Chase, for example. Long gone are the days of forgotten and compromised passwords and passcodes; logging into your bank account with a biometric scan of your face now takes only a few seconds. Utilizing facial recognition technology not only increases expediency, but also provides additional security. Before facial recognition software, investigations of theft were slowed because banks had to wait for the stolen card to be used again. Now, when a theft occurs at an ATM, the bank has a real-time photo of the incident and can identify the suspect before further harm comes to the victim.
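As a rough sketch of how such a biometric login works under the hood, the snippet below compares the embedding of a fresh face scan against the template stored at enrollment and accepts the login only if the two are similar enough. The embedding vectors, the cosine-similarity metric, and the 0.6 threshold are illustrative assumptions, not any bank's actual implementation.

import numpy as np

# Minimal sketch of face-based login verification (illustrative only).
def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two face-embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_login(live: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
    # Accept the login only if the live scan is close enough to the
    # template captured when the customer enrolled.
    return cosine_similarity(live, enrolled) >= threshold

# Hypothetical usage: in practice, embeddings come from a trained face model.
enrolled = np.array([0.10, 0.80, 0.30])
live = np.array([0.12, 0.79, 0.28])
print(verify_login(live, enrolled))  # True: the scans match, so login proceeds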

But facial recognition technology is not without faults. One glaring fault is the systems' bias against people of color. In Gender Shades, a study led by MIT researcher Joy Buolamwini, the facial recognition systems analyzed were most accurate on light-skinned males, while error rates for women and people of color were markedly higher.
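The method behind a finding like Gender Shades can be sketched simply: run the system over a labeled benchmark, bucket the results by demographic subgroup, and compare error rates. The records below are hypothetical stand-ins for real benchmark results.

from collections import defaultdict

# Minimal sketch of a subgroup error-rate audit (hypothetical data).
results = [
    {"group": "lighter-skinned male", "correct": True},
    {"group": "lighter-skinned male", "correct": True},
    {"group": "darker-skinned female", "correct": True},
    {"group": "darker-skinned female", "correct": False},
]

totals = defaultdict(int)
errors = defaultdict(int)
for r in results:
    totals[r["group"]] += 1
    if not r["correct"]:
        errors[r["group"]] += 1

# A large gap between subgroup error rates signals demographic bias.
for group, n in totals.items():
    print(f"{group}: {errors[group] / n:.0%} error rate")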

With facial recognition software skewed against people of color, its use in police investigations should raise red flags. With no U.S. federal regulations in place to limit or standardize the use of facial recognition technology, law enforcement risks relying on technology that is biased against a large subset of the population, with massive ethical ramifications. Whenever facial recognition software is used, there must be a human component to ensure accuracy.

In June 2021, U.S. Senator Edward Markey (D-MA) introduced the Facial Recognition and Biometric Technology Moratorium Act of 2021. This bill would impose limits on the use of biometric surveillance systems by U.S. federal and state government entities, including a blanket ban on most facial recognition technologies used by federal, state, and local authorities today. The bill is currently with the Senate Judiciary Committee; a related bill was introduced in the U.S. House of Representatives (H.R. 3907).

Varying Approaches to Limitations

Given the scrutiny facial recognition technology attracts, city leaders have taken a cautious approach, in some cases limiting its use to specific purposes.


While many city leaders have imposed limitations on facial recognition technology, Portland's leaders have taken a more drastic approach, making the city the first in the United States to ban certain uses of the technology by private businesses. The ban prohibits private entities from using facial recognition technology in places of public accommodation, including hotels, restaurants, retail stores, and other businesses that serve the public.

Portland joins cities such as San Francisco, Oakland, and Boston, which have outlawed government use of facial recognition surveillance technology.

An Identity’s Worth

The inherent value of facial recognition data also makes it an attractive target for data theft and cybersecurity incidents. As in many industries that handle high-value data, the incentive for malicious cyber threat actors is clear: the reward of a successful attack greatly outweighs the risk.

One of the largest motives for attackers in the healthcare sector is the resale of health information on the dark web; an average person's full identity kit, or "fullz," is "worth" about $1,170 there. Data associated with facial recognition may include sensitive images as well as metadata associated with those images, such as name, address, height, and weight. Unlike credit card information, which can be easily replaced and closely monitored for suspicious activity, this type of health information is permanent. It is therefore valued at roughly 10 times the price of credit card information, selling for about $360 to $1,000 on the black market. A threat actor can also use such information for medical identity theft, larger phishing or scamming schemes, and financial fraud.

Additionally, because timely access to healthcare information can be a matter of life and death, locking that information away makes for a prime high-priced ransomware target. Because organizations working with facial recognition operate in critical industries like healthcare and law enforcement, they face an elevated threat of ransomware attacks that could temporarily halt operations and carry grave recovery costs.

In fact, healthcare is the most targeted sector for ransomware attacks; since downtime and data exfiltration can be detrimental to patient outcomes, healthcare companies are more likely to pay. Additionally, as a result of COVID-19, ransomware attacks on the industry doubled between July and September 2020 as virtual healthcare became the norm.

In the bigger picture, the FBI's Internet Crime Complaint Center reported 2,084 ransomware complaints between 1 January and 31 July 2021, a 62 percent increase over the same period in 2020. These attacks have a disproportionately greater impact on smaller companies, and for a healthcare company working in AI (a field closely associated with facial recognition), a ransomware or other cyberattack can postpone a clinical trial and cost millions of dollars in delays. Delays in clinical trials are estimated to cost $600,000 to $8 million in future revenue per day, and the reputational damage can have lasting business impacts well beyond the breach itself.

A data breach of a facial recognition technology provider also has wide-reaching effects on the personal privacy of citizens. For instance, in February 2020, Clearview AI disclosed a breach of its internal systems that exposed its client list, including several law enforcement agencies. While Clearview AI denied that the adversaries had accessed the more than 3 billion photos in its database, the potential impact of intrusions into facial recognition companies is important to consider. Once inside a corporate network, privilege escalation can allow adversaries to gain access to domain accounts and to alter images without authorization. A doomsday-esque version of this has already been demonstrated: researchers deployed malware that could add fake tumors to CT and MRI scans inside health clinics.

Given the risks facing companies in the facial recognition sector, maintaining the privacy of citizens' data and the security of the artificial intelligence software they use is the bare minimum. As technology rapidly advances, even this balance is proving exceedingly difficult to achieve.


For instance, direct-to-consumer (DTC) genetic testing companies also work with law enforcement, yet they are not subject to regulation under the Health Insurance Portability and Accountability Act (HIPAA), despite the fact that they possess sensitive personal information and sell it to third parties. While genetic code is arguably the most unique and sensitive information a person has, the level of security required of these private organizations does not match that sensitivity. Furthermore, HIPAA does not require encryption of data in transit from one party to another, which places private user information at even greater risk. "Check the box" security requirements will no longer suffice.
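To make that encryption gap concrete, the sketch below shows the kind of client-side protection HIPAA stops short of mandating: encrypting a record before it is transmitted, so an intermediary or third party never handles plaintext. The library choice, key handling, and payload are illustrative assumptions, not any company's actual practice.

from cryptography.fernet import Fernet

# Sketch: encrypt a sensitive record before transmission (illustrative only).
key = Fernet.generate_key()   # in practice, keys are exchanged and managed securely
cipher = Fernet(key)

record = b'{"name": "...", "genotype": "..."}'  # hypothetical payload
ciphertext = cipher.encrypt(record)             # transmit this, never the raw record

# Only the party holding the key can recover the original data.
assert cipher.decrypt(ciphertext) == record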

Balancing Steps

Prioritizing asset-based cybersecurity at facial recognition companies is critical. The sensitivity of the data assets held by organizations working in artificial intelligence related to digital health, as well as the prolonged reputational and regulatory impacts that follow a breach, reinforce this point. As demonstrated above, the regulatory mechanisms currently in place represent only a bare minimum for organizations to follow, particularly for agile organizations advancing in the artificial intelligence space.

This sector must focus efforts on bolstering security beyond compliance—through continual partnership between private cybersecurity companies, users of facial recognition data, and security engineers—to stay on the leading edge. When we achieve this, security can be harnessed as a competitive advantage for artificial intelligence companies in the facial recognition space and beyond.

Avanti D. Bakane is a partner and co-chair of the Cyber, Privacy & Data Security group at Gordon & Rees Scully Mansukhani. Her practice consists of representing businesses and creative professionals in software and e-commerce development, data privacy, data loss, licensing, and copyright infringement disputes. A Certified Information Privacy Professional (CIPP), she maintains a leadership role with the Cyber Security, Data Privacy, and Technology Committee of the International Association of Defense Counsel (IADC). Bakane can be reached at [email protected].

Margaret Martin is an attorney licensed in Illinois and Washington, working in the data privacy and protection industry for a major telecommunications company.

Kyla Guru is the founder and CEO of Bits N' Bytes Cybersecurity Education, a 501(c)(3) dedicated to equipping vulnerable populations with the cybersecurity education and awareness needed to face a future of advanced cyberthreats. She is an undergraduate student at Stanford University, studying computer science and international relations.
