Facial Recognition in the United States: Privacy Concerns and Legal Developments
The expansion of facial recognition technology (FRT) has become a prominent global issue. The European Union’s draft Artificial Intelligence Act proposes to restrict public FRT use, and the European Parliament made its stance clear by calling for a ban on the technology.
Recent developments in the United Kingdom show the government’s preference for providing guidance over heavy-handed regulation of FRT—exemplified by the Information Commissioner’s Office’s papers on law enforcement and commercial FRT use, and by the government’s National AI Strategy. And although the United Kingdom is looking to replace physical passports with FRT and will soon launch an app that uses it, schools using FRT to take student lunch payments seem to cross a line.
In other parts of the world, Australia is cracking down on FRT companies, and there has been backlash in Russia over Moscow’s new FRT-based metro payment system.
Within the United States, numerous laws have been passed at the state and local levels to regulate FRT—yet tensions loom. FRT is expected to grow substantially in the coming years due to increased investment and the eagerness of entities to adopt it. At the same time, U.S. lawmakers and privacy advocates are challenging the technology’s proliferation by highlighting the consequences it can have for society—and calling for increased regulation.
As facial recognition becomes increasingly pervasive, privacy concerns are compounded—prompting reconsideration of whether current laws appropriately balance its benefits and harms.
Data Security
Faces are becoming easier to capture from remote distances and cheaper to collect and store. And unlike a password or credit card number, a face cannot be changed or reissued once compromised. The volume of facial data already housed in various databases (e.g., driver’s licenses, mugshots, and social media) exacerbates the potential for harm because unauthorized parties can easily “plug and play” numerous data points to reconstruct a person’s life. Data breaches involving facial recognition data therefore increase the potential for identity theft, stalking, and harassment.
Neither government nor commercial databases are immune to hacking. It has been argued that security concerns are somewhat mitigated because FRT algorithms are vendor-specific. However, many government databases rely on a single vendor; if that vendor’s configuration is standard across systems, a breach of one could compromise them all.
Individual Privacy Rights
“Faces … are central to our identity,” asserts Woodrow Hartzog. As such, it is argued, people have no meaningful way to hide their faces to avoid facial recognition. Even when they try, FRT has developed to the point where individuals can still be identified while wearing a mask or after blurring their faces in photos.
Given the impracticality of avoiding FRT, privacy experts argue that omnipresent surveillance chills activities protected by the First Amendment to the U.S. Constitution, such as “free democratic participation and … political activism.” Further linking this to privacy, Margot Kaminski, associate professor at Colorado Law and Director of the Privacy Initiative at Silicon Flatirons, explains that “a [government] that’s capable of tracking your face wherever you are [is] capable of tracking your location wherever you are, which means it’s capable of tracking every association you have.”
Reduction of anonymity. When in public, most people expect their face to be recognized by a few people or businesses, “but fewer to connect a name to their face, and even fewer to associate their face with internet behavior, travel patterns, or other profiles,” according to analysis by the Center for Democracy and Technology. As anonymity decreases, consumers may hesitate to shop at or assemble in places that use FRT. Additional concerns arise where FRT identifies “not just who someone is, but whom they are with,” a U.S. Government Accountability Office report found.
Tracking. Facial recognition is unlike other tracking methods—such as carrying a mobile phone or wearing a Fitbit—because consumers cannot easily avoid unwanted tracking of their faces. And although most consumers find the commercial use of FRT unacceptable, retailers continue to deploy the technology.
Lack of transparency. Using FRT to identify individuals without their knowledge or consent raises privacy concerns, especially because biometrics are unique to each individual. The concerns are compounded because, unlike other biometrics (e.g., fingerprints), facial scans can be captured easily, remotely, and in secret—as with Clearview AI’s dataset of billions of photos secretly scraped from social media and other websites without consent.
Misuse
Inaccuracy is a common critique of FRT, but accuracy cuts both ways: a less accurate system raises misidentification concerns, whereas a more accurate system enables more powerful surveillance. As with other types of personal data erroneously linked to an individual, a facial scan that misidentifies someone can have long-term consequences. Moreover, accuracy varies by demographic: false positive rates are highest among women and people with darker complexions, and lowest among white men. In the criminal context, false positives have already turned into false arrests.
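To see why false positives translate into false arrests at scale, consider the underlying arithmetic. The sketch below is a minimal, hypothetical illustration: the database size and per-comparison error rates are invented for the example and are not drawn from this article or any vendor benchmark.

```python
# Hypothetical illustration of the base-rate arithmetic behind FRT false
# positives. All numbers are invented for this example; none come from
# the article or from any vendor benchmark.

def expected_false_positives(gallery_size: int, false_positive_rate: float) -> float:
    """Expected number of innocent people flagged when one probe image is
    compared against every face in a gallery of the given size."""
    return gallery_size * false_positive_rate

gallery = 10_000_000  # e.g., a statewide driver's license database (hypothetical)

for fpr in (0.0001, 0.001):  # 0.01% and 0.1% per-comparison error rates
    hits = expected_false_positives(gallery, fpr)
    print(f"FPR {fpr:.2%}: ~{hits:,.0f} potential false matches per search")
```

Even a seemingly tiny per-comparison error rate yields thousands of candidate misidentifications per search at database scale, which is one reason some jurisdictions, as discussed below, prohibit treating a facial recognition match alone as grounds for a search or arrest.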
In the commercial context, Congress and privacy groups have expressed concerns about using FRT to classify consumers by age, gender, and race, because classification paves the way for profiling, which can have adverse consequences for certain demographics. Disparate treatment from profiling could take the form of denied or limited access to certain services, or price discrimination based on generalizations made about a consumer.
The Regulatory Landscape
An assortment of laws has been enacted by U.S. state and local legislatures, and the number of recent proposals suggests more FRT regulation is on the way. Most recent laws and proposals have targeted government entities rather than the private sector. Some efforts focus primarily on law enforcement, while others regulate the entire public sector.
Law enforcement. Instead of banning law enforcement FRT use, some jurisdictions have enacted laws imposing government oversight. In Virginia and Pittsburgh, Pennsylvania, prior legislative approval is now required to deploy FRT. Before conducting a facial recognition search, Massachusetts and Utah require law enforcement to submit a written request to the state agency maintaining the database. Similar proposals have been made in Kentucky and Louisiana.
Judicial oversight is imposed in Massachusetts and Washington by requiring law enforcement to obtain a warrant or court order prior to using FRT. Officers in Maine must now meet a probable cause standard before making an FRT request, and they are prohibited from using a facial recognition match as the sole basis for a search or arrest.
Additionally, some states have enacted narrow bans on the use of FRT in conjunction with police body cameras. Oregon and New Hampshire have prohibited law enforcement from using FRT on body camera footage, and California is in the middle of its three-year renewable ban. New Jersey, New York, and South Carolina have proposed similar bills.
The public sector. More broadly, some jurisdictions have enacted laws that restrict or ban all government entities—including law enforcement—from using FRT. In Washington, all government agencies that want to use FRT must first provide public notice, hold community meetings, and publish an accountability report. A New York bill would prohibit state agencies and contractors from retaining or sharing facial recognition images with third parties without prior court authorization.
In 2019, San Francisco became the first U.S. city to ban all government use of FRT, with Oakland and Berkeley quickly following suit. Massachusetts cities—including Boston, Brookline, Cambridge, Northampton, Easthampton, and Somerville—have also banned government use of the technology. Additional jurisdictions with similar bans include Jackson, Mississippi; King County, Washington; Madison, Wisconsin; New Orleans, Louisiana; Minneapolis, Minnesota; Portland, Maine; and Portland, Oregon.
Last year, Vermont became the first state to ban government FRT use. However, government officials may obtain a warrant to use it on drone footage, and facial recognition software used to investigate child sexual exploitation and human trafficking is exempt.
Maine’s recent law has been touted as the strongest statewide regulation to date. The law prohibits government officials and employees from possessing and using FRT (with very narrow exceptions), including in public schools and for surveillance of people in public. To ensure “technologies like Clearview AI will not be used by Maine’s government officials behind closed doors,” all government agencies are prohibited from buying, possessing, or using their own FRT.
Commercial FRT Use
Biometric privacy laws. One approach that indirectly regulates commercial FRT use is to regulate the collection and use of biometric data. Illinois’s Biometric Information Privacy Act (BIPA) provides that private entities seeking to use consumers’ biometric information, including facial recognition, must first notify them of the collection. Disclosure of collected biometric data is prohibited without consent, and entities cannot profit from the data. By affording consumers a private right of action, BIPA allows them to hold companies like Clearview AI and Facebook accountable. Similar bills are pending in Massachusetts and New York.
Both Texas and Washington have biometric privacy laws with similar requirements to BIPA, but consumers in these states are not entitled to a private right of action.
Last July, New York City passed a biometric identifier information law prohibiting NYC businesses that “collect, retain, convert, store, or share biometric identifier information of customers” from profiting from the information; businesses must also disclose their FRT use to customers with a “clear and conspicuous sign.” The law provides a private right of action for customers, but it also includes a cure provision allowing businesses to remedy certain violations. For instance, if a business violates the law’s disclosure requirement, customers can notify the business of the alleged violation, and the business then has 30 days to “cure” it before the customer can take legal action.
Data privacy laws. Another indirect approach can be seen in the handful of recently passed comprehensive data privacy laws that include facial recognition data in their scope. The only such law currently in effect is the California Consumer Privacy Act (CCPA). It provides consumers certain rights related to their facial recognition data, such as the rights to access, opt out of the sale of, and delete their data. Supplementing the CCPA, the California Privacy Rights Act (effective Jan. 2023) allows consumers to limit a business’s use and disclosure of their collected data. Colorado’s privacy law (effective July 2023) requires businesses to obtain consent prior to processing consumers’ facial recognition data, which falls under the law’s definition of “sensitive data.” Unlike California and Colorado, Virginia’s Consumer Data Protection Act (effective Jan. 2023) excludes facial recognition.
Direct FRT regulations. Currently, only two jurisdictions—Portland, Oregon, and Baltimore, Maryland—directly regulate the commercial use of FRT. Portland prohibits private entities from using FRT in “places of public accommodation,” and a Massachusetts bill would enact a similar ban. As of August 2021, no Baltimore resident or corporation can use FRT or information obtained from such technology. Baltimore’s ban is set to expire in December 2022 unless extended by the city council.
Federal Legislation Lacking
Although Congress has yet to pass federal facial recognition regulation, there are proposals to regulate government FRT use.
Reintroduced in June, the Facial Recognition and Biometric Technology Moratorium Act of 2021 would prohibit federal government FRT use and “effectively strip federal support for state and local law enforcement entities” using the technology. The bill would also provide a private right of action.
The Fourth Amendment Is Not for Sale Act would prohibit law enforcement from purchasing personal data without a warrant and would fully prevent law enforcement and intelligence agencies from buying “illegitimately obtained” data. As written, the bill would prevent the government from buying data from Clearview AI.
Introduced in October, the Government Ownership and Oversight of Data in Artificial Intelligence Act proposes to ensure federal contractors use the data collected through AI, including “data from facial recognition scans,” in a manner that “does not compromise the privacy and rights of Americans.”
Both advocacy groups and FRT companies have been urging Congress to act, and some companies have taken steps on their own in the absence of governmental action. In 2020—responding in part to the protests following George Floyd’s murder—Microsoft and Amazon placed moratoriums on FRT sales to law enforcement to give Congress time to enact regulation. At the same time, IBM announced it would no longer offer the technology, citing racial profiling and mass surveillance concerns.
While FRT has many potential benefits, it also brings significant privacy concerns. It looks as though the regulatory road forward in this booming area will be focused on ensuring that adequate safeguards are in place to prevent abuse of FRT and protect privacy, but only time will tell.
Taylor Kay Lively is a Westin Fellow at the International Association of Privacy Professionals. She recently graduated from the University of Colorado School of Law, where she served as production editor of the Colorado Law Review. While in law school, Taylor Kay focused on privacy law by researching and writing about a variety of privacy issues, including online privacy policy notice failures and the privacy implications of social media discovery in workers’ compensation claims.
Prior to law school, Taylor Kay earned her undergraduate degree in political science and sociology from Virginia Tech, which sparked her interest in the interplay between social norms and privacy developments.