Municipalities Frown at Facial Recognition Over Privacy, Justice Concerns
The Detroit Police Department was looking for the thief who stole roughly $3,800 worth of luxury goods from a Shinola store in October 2018. All they had to go on was some surveillance footage of an unidentified Black man in a St. Louis Cardinals hat.
After a five-month delay, the police department ran the footage through its facial recognition software and received a match: Robert Julian-Borchak Williams of Farmington Hills, Michigan. A security officer who worked at the Shinola store watched the same footage and then chose Williams out of a photo line-up.
Convinced by the evidence collected, police called Williams and asked him to report to a Detroit police station to be arrested. Williams thought the call was a prank; when he pressed for details, the police said they would arrest him at his workplace. Williams instead told them he was headed home, so officers decided to meet him there.
The officers arrested Williams in January 2020 in front of his family, took him to a police station, and held him for 30 hours before he made bail, despite his insistence that he was innocent. At a later court hearing, county prosecutors dropped the case against Williams without prejudice.
It would eventually come to light—after Williams’ attorney made a records request—that the facial recognition software misidentified Williams as the thief. The American Civil Liberties Union (ACLU) filed a complaint against the Detroit Police Department to prohibit it from using the same facial recognition software in future investigations.
After the false arrest, the Detroit Police Department changed its policy to allow only still photos, not video footage, to be used for facial recognition, and only in investigations of violent crimes.
“Facial recognition software is an investigation tool that is used to generate leads only,” said Detroit Police Department Sergeant Nicole Kirkwood in a statement to NPR. “Additional investigative work, corroborating evidence, and probable cause are required before an arrest can be made.”
While Williams’ arrest made headlines and caught the attention of the ACLU, many instances in which facial recognition misidentifies an individual receive no such scrutiny. Privacy and racial justice advocates have also raised concerns that the technology can be used to profile or harm innocent individuals.
Over the summer of 2020, in the midst of mass protests about racial inequality and police use of excessive force in the United States, several major facial recognition vendors, including IBM, Microsoft, and Amazon, said they would pause or end the sale of their facial recognition products to law enforcement.
And some municipalities went further, banning the use of facial recognition technology outright: first San Francisco, California, where the Board of Supervisors prohibited law enforcement’s use of the technology, and then Portland, Oregon, where local officials passed the first ban on private-sector use of facial recognition technology in public accommodations.
“We must protect the privacy of Portland’s residents and visitors, first and foremost. These ordinances are necessary until we see more responsible development of technologies that do not discriminate against Black, indigenous, and other people of color,” said Portland Mayor Ted Wheeler in a press release. “Until now, the City of Portland has not had comprehensive privacy policies in place to ensure that the use of a technology like face recognition does not harm the civil rights and civil liberties of Portlanders, and the use of flawed and biased technologies can create devastating impacts on individuals and families.”
Portland’s Approach
In 2017, as the smart cities concept was gaining attention, the City of Portland decided it needed to take a closer look at how municipalities handle emerging technology and protect privacy.
That move set up Smart City PDX, a coordinated effort between community partners and the city to assess how data and technology can be used to improve people’s lives. Smart City PDX works closely with Portland’s Office of Equity and Human Rights, and the city takes the firm stance that privacy is a human right.
“Given the history of how technology has been used against communities of color, privacy is a human right,” says Judith Mowry of the city’s Office of Equity and Human Rights (OEHR) in an interview with Security Management. “And we’ve adopted the value of being antiracist—everyone is working through an equity lens.”
With that mind-set, Smart City PDX and OEHR began looking at how facial recognition technology is used and the impacts it can have on individuals. They hosted work sessions in early 2020 with representatives from Portland’s Police Bureau, the ACLU chapters of Oregon and Northern California, the Urban League, the Portland Immigrant Rights Coalition, and the Portland Business Alliance.
Portland Police Bureau Assistant Chief Ryan Lee attended a work session in January 2020 and said that Portland’s police force was not using—or seeking to use—facial recognition technology. Instead, the bureau was interested in continuing a discussion with residents about what the path forward would be to acquire the technology in the future and recommended a moratorium on its use in the meantime.
“Police agencies who have lost public trust utilizing [facial recognition technology] failed to follow the U.S. Department of Justice Bureau of Justice Assistance’s recommendations,” Lee said. “Specifically, they did not seek public input from the onset, they did not use quality algorithms, they failed to train and/or require certification standards, they did not establish policies, framework, or oversight, and they had no mechanisms for maintenance of their programs that involved community oversight.”
Other work session participants said that because facial recognition technology has a consistent pattern of misidentifying individuals from communities of color, it would be critical to include those individuals in future conversations about the technology’s use in the city, especially by the private sector.
For instance, Mowry says that one discussion point in the meetings was a gas station that was using facial recognition technology to control access: individuals who wanted to enter had to consent to a face scan and be identified by the station’s system before the door would unlock.
“We had real concerns about how this could impact Portlanders of color,” Mowry adds.
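The flow described above can be sketched in a few lines. The toy Python example below is purely illustrative, not drawn from any vendor’s actual system: the “template” strings stand in for real biometric templates, and a simple equality check stands in for a real recognition algorithm.

```python
from dataclasses import dataclass

@dataclass
class ScanResult:
    template: str  # placeholder for a real biometric face template

# Hypothetical enrollment records: customer ID -> stored face template.
ENROLLED = {"cust-001": "template-A", "cust-002": "template-B"}

def request_entry(consented: bool, scan: ScanResult) -> str:
    if not consented:
        return "door stays locked: no consent"
    # One-to-many search: compare the live scan against every enrolled face.
    for customer_id, template in ENROLLED.items():
        if template == scan.template:
            return f"door unlocked for {customer_id}"
    return "door stays locked: no enrolled match"

print(request_entry(True, ScanResult("template-B")))   # door unlocked for cust-002
print(request_entry(False, ScanResult("template-B")))  # door stays locked: no consent
```

Even in this toy form, the concern raised in the work sessions is visible in the last branch: anyone the system fails to match, whether by choice or by algorithmic error, stays locked out.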
Two organizations, Freedom to Thrive and the Portland Immigrant Rights Coalition, called facial recognition technology a form of “digitized racial profiling” that “weaponizes racism through technology,” according to a press release. Both organizations supported the formation of an advisory council for community input on facial recognition technology.
In 2019, the U.S. National Institute of Standards and Technology (NIST) evaluated 189 software algorithms from 99 facial recognition technology vendors. It found that Asian and Black faces resulted in higher false positive rates than Caucasian faces. The highest false positive rate for one-to-many matching was for Black females. (See “Facial Recognition Error Rates Vary by Demographic,” Security Management, May 2020.)
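For readers unfamiliar with the metric, a false positive rate is the share of comparisons between two different people that the software nonetheless declares a match. The short sketch below uses synthetic counts, not NIST’s data, purely to illustrate how a per-group disparity in that rate would be computed.

```python
# Synthetic counts for illustration only; these are not NIST's numbers.
impostor_trials = {"group_a": 100_000, "group_b": 100_000}
false_matches = {"group_a": 10, "group_b": 120}

for group in impostor_trials:
    # False positive rate: fraction of different-person comparisons
    # that the algorithm wrongly declared a match.
    fpr = false_matches[group] / impostor_trials[group]
    print(f"{group}: false positive rate = {fpr:.5f}")
```

In a one-to-many search against a large gallery, such as a mugshot database, even a small per-comparison disparity compounds, because each probe image is compared against every entry in the gallery.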
Based on this input and analysis, and spurred further by the mass demonstrations for racial justice and equality over the summer of 2020, Portland Mayor Wheeler and City Commissioner Jo Ann Hardesty introduced two ordinances: one placed a moratorium on city use of facial recognition technology, and the other proposed a ban on private-sector use of the technology in public accommodations.
The ordinances passed unanimously on 9 September 2020; the public-sector moratorium went into effect immediately, and the private-sector ban went into effect 1 January 2021.
“This is an exciting opportunity to set a national example by protecting the right of privacy of our community members—especially those most vulnerable and overpoliced,” Hardesty said in a press release.
The private-sector ban focuses on prohibiting the use of facial recognition technology in places of public accommodation, such as restaurants, hotels, theaters, doctors’ offices, pharmacies, retail stores, museums, libraries, private schools, and more. The phrase “public accommodations” was adopted from the U.S. Americans with Disabilities Act (ADA) to set the tone that Portland should be welcoming to everyone, says Hector Dominguez Aguirre, open data coordinator for the City of Portland.
“Public spaces—those do not include industrial or office spaces in a labor context,” Dominguez Aguirre explains. “We drafted the policy so if there is existing law or regulation that allows a private entity to use biometric data—such as banks for providing measures for protecting their assets from identity theft—they are already covered by the federal government.”
There are also exemptions allowing both private-sector and public-sector use of facial recognition technology for access to personal or employer-issued communication and electronic devices, such as smartphones or laptops, and for automatic face detection services in social media applications. Portland also allowed an exception for public-sector use of the technology to protect an individual’s privacy, such as identifying and blurring faces in a video recording before it is released to the public. Violators are subject to a fine of up to $1,000 per day for each violation, and individuals have a private right of action for any damages sustained as a result of a violation.
Mowry says that so far, the city has not received any negative feedback from local stakeholders about the ordinances—which she credits to the extensive outreach done by Smart City PDX and OEHR.
“There were two concerns. One, Portland has a thriving tech sector and we wanted to let people know that we are tech friendly,” she explains. “Also, the idea of being able to find exceptions and continue the conversation. There will be the development of the process where folks can bring forward a technology and we can use our privacy values to do screening. As tech grows, there will be critical ethical questions about the line of what’s good for government, business, and individuals.”
Since the creation of Smart City PDX and the passage of the ordinances, Dominguez Aguirre says that other U.S. cities have approached Portland for more information on its privacy principles, as many municipalities have set up working groups on privacy and digital equity.
“Those networks are constantly sharing information and experiences, lessons learned, and getting a lot of benefit,” he adds. “We’ve really appreciated learning from other cities like Oakland, Seattle, San Francisco, and New York. They have done more work there, but we have the advantage that we can put together something updated and be further along.”
At the National Level
Like Portland, many U.S. cities are considering—and even implementing—their own moratoriums or bans on the use of facial recognition technology as the U.S. federal government continues to evaluate whether and how to regulate the technology.
Speaking at Politico’s Artificial Intelligence Summit in September 2020, U.S. Representative Pramila Jayapal (D-WA) said that there is real interest in Congress to move forward on regulating the use of facial recognition technology.
Both the congressional Freedom Caucus and Progressive Caucus share an interest in regulating the technology, Jayapal added, to define what the guardrails for use should be in both a government and private-sector context. The increasing number of local ordinances about facial recognition technology is building momentum for Congress to act.
“It is preferable to have national policies on many of these issues, especially on tech which crosses state boundaries,” Jayapal said. “Congress is much slower to act, so it often takes a movement that starts in cities and states and makes its way up to Congress.”
In the 116th Congress, Jayapal cosponsored legislation with U.S. Representative Ayanna Pressley (D-MA), U.S. Senator Ed Markey (D-MA), and U.S. Senator Jeff Merkley (D-OR) to halt the government use of biometric technology, including facial recognition tools.
“At a time when Americans are demanding that we address systemic racism in law enforcement, the use of facial recognition technology is a step in the wrong direction,” Markey said. “Studies show that this technology brings racial discrimination and bias. Between the risks of sliding into a surveillance state we can’t escape from and the dangers of perpetuating discrimination, this technology is not ready for prime time. The federal government must ban facial recognition until we have confidence that it doesn’t exacerbate racism and violate the privacy of American citizens.”
The legislation—dubbed the Facial Recognition and Biometric Technology Moratorium Act—would have placed a prohibition on the use of facial recognition technology by federal entities; conditioned federal grant funding to state and local entities on enacting moratoriums on the use of facial recognition technology; prevented federal dollars from being used for biometric surveillance systems; prohibited the use of information collected from biometric technology in violation of the law in judicial proceedings; and established a private right of action for individuals whose biometric data is used in violation of the law.
In the meantime, the U.S. federal government continues to use facial recognition technology. Sometimes, however, this use does not respect the privacy rules that the government has set for itself.
For instance, U.S. Customs and Border Protection (CBP) has rolled out facial recognition technology to ports of entry to create entry and exit records for foreign nationals. It has partnered with airlines to deploy the technology to 27 airports and is in the early stages of using the technology at sea and land ports of entry.
The U.S. Government Accountability Office (GAO) was asked to audit how well the technology’s rollout follows privacy guidelines. The GAO found that CBP has taken some steps to incorporate privacy principles into its facial recognition technology program, but that these steps have not been consistent or transparent to travelers.
“Further, CBP requires its commercial partners, such as airlines, to follow CBP’s privacy requirements and can audit partners to assess compliance,” the GAO said. “However, as of May 2020, CBP had audited only one of its more than 20 airline partners and did not have a plan to ensure all partners are audited. Until CBP develops and implements an audit plan, it cannot ensure that traveler information is appropriately safeguarded.”
Rebecca Gambler, director of the GAO’s Homeland Security and Justice Team, says that while it’s a positive step that CBP has audited one of its partners, the agency needs a plan to ensure that its partners are appropriately safeguarding information.
CBP also needs to ensure that it consistently provides travelers complete information about how to opt out of facial recognition screening at ports of entry and how their information will be used, the GAO report said. The analysis found that CBP’s notices were often incomplete, outdated, or missing altogether, and provided little information about how to request an opt-out.
“For example, during our visit to the Las Vegas McCarran International Airport in September 2019, we saw one sign that said photos of U.S. citizens would be held for up to 14 days, and a second sign at a different gate that said photos would be held for up to 12 hours (the correct information),” according to the GAO report. “The first sign was an outdated notice, as CBP had changed the data retention period for U.S. citizens in July 2018. However, CBP had not replaced all of the signs at this airport with this new information. CBP officials said that printing new signs is costly and they try to update signs when new guidance is issued, but said it is not practical to print and deploy a complete set of new signs immediately after each change or update.”
The GAO’s analysis did not examine the process for a traveler to correct a misidentification, Gambler adds.
The Security Team Approach
While the U.S. federal government continues to evaluate regulations related to facial recognition technology, private security teams are grappling with how to use the technology to enhance security while protecting privacy.
For instance, in 2018 Taylor Swift made headlines for her team’s use of facial recognition at her concerts to identify known stalkers who might try to attend. Rolling Stone broke the story of how, during the security screening process to enter the concert venue, attendees had their facial images captured and run through a software matching system.
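Matching systems of this kind are commonly built on face embeddings compared against a watchlist using a similarity threshold. The sketch below is a toy illustration of that general approach, with made-up three-dimensional vectors standing in for real learned embeddings; nothing about the actual system used at the concerts is public beyond the Rolling Stone account.

```python
import numpy as np

# Hypothetical watchlist of face embeddings (real systems use vectors
# learned by a neural network, typically with hundreds of dimensions).
WATCHLIST = {"subject-1": np.array([0.9, 0.1, 0.2]),
             "subject-2": np.array([0.2, 0.8, 0.5])}
THRESHOLD = 0.95  # similarity at or above this is treated as a match

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def screen(face_embedding: np.ndarray):
    # Flag the attendee if their embedding is close to any watchlist entry.
    for name, ref in WATCHLIST.items():
        if cosine_similarity(face_embedding, ref) >= THRESHOLD:
            return name
    return None

print(screen(np.array([0.88, 0.12, 0.22])))  # close to subject-1 -> flagged
print(screen(np.array([0.10, 0.10, 0.90])))  # far from both -> None
```

The threshold is the operational lever: set it lower and more stalkers are caught, but more innocent concertgoers are falsely flagged, which is precisely the trade-off critics raise.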
In a blog post, ACLU Senior Policy Analyst Jay Stanley wrote that while stalkers are a genuine concern, the security team also “tricked concertgoers” into participating in a facial recognition technology system they were not aware existed.
“The officials at the concert venue should have told people that their faces would be scanned for security purposes—preferably before they paid for a ticket,” Stanley wrote. “They also should have told attendees whether they were saving the photos and what they were planning to do with them.”
Instances like this show that there are genuine security needs facial recognition technology can address, from identifying known stalkers to flagging individuals on a terrorist watch list, but there also need to be privacy protections and procedures in place, says Eddie Sorrells, CPP, PCI, PSP, chief operating officer and general counsel for DSI Security Services.
“From a security standpoint, we see potential applications and some that are already in use for mass gatherings,” Sorrells says. “We may have someone who could be a possible terrorist actor, someone who is not supposed to be on the property, and using facial recognition technology is more efficient than having someone scan the crowd with their own eyes.”
When considering adopting facial recognition technology, Sorrells says security teams should take the same approach they take to evaluating any new technology. First, procurers should ask what problem they are trying to solve with the technology and whether it will improve on the process already in place to address that problem. Then, teams should ask whether the technology does what it is supposed to do and validate that result.
“Number three, and the most important part: what is the policy and procedure around using this technology? How are you going to implement it?” Sorrells asks. “If I have a camera, versus an employee database, scanning faces and it’s compiling this data…I need to ask, ‘Why am I compiling that data? What is the reason? What is the intent here?’ And then what am I going to do with that? How do I safeguard it? How do I make sure it’s not misused?”
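One concrete safeguard Sorrells’ questions point toward is a hard retention limit, so that captured face data is purged automatically rather than compiled indefinitely. The sketch below borrows the 12-hour window from the CBP signage example earlier purely for illustration; the storage and scheduling details are hypothetical.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(hours=12)  # illustrative window, echoing the CBP example
captures = []  # list of (captured_at, record) tuples; a stand-in for real storage

def store(record: str) -> None:
    captures.append((datetime.now(timezone.utc), record))

def purge_expired() -> int:
    """Delete anything older than the retention window; run on a schedule."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    before = len(captures)
    captures[:] = [(t, r) for (t, r) in captures if t >= cutoff]
    return before - len(captures)

store("face-capture-0001")
print(purge_expired(), "records purged")  # 0: nothing is 12 hours old yet
```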
Having those conversations and then implementing strong policies and procedures around facial recognition technology will help ensure that it’s being used in a way that respects privacy and any existing regulatory requirements while also protecting the data gathered by the system itself.
“There are many benefits to it, and it would be shortsighted to toss out the technology as a whole,” Sorrells says. “Such as locating a missing child—I would hate to be in an area where that technology could find my child but isn’t used. But we have to balance that so that the benefits outweigh concerns.”
Megan Gates is senior editor at Security Management. Connect with her at [email protected]. Follow her on Twitter: @mgngates.