It was a social media explosion of sorts. Suddenly, thousands of people were uploading selfies to Facebook, Twitter, and Instagram showing what they would look like in 10, 20, or 30 years.
They had created the images using FaceApp, a popular smartphone application that allows users to upload photos and select various filters to make themselves appear younger or older. But most users did not realize that by uploading their likeness to FaceApp, they were also giving the company the right to use that image for commercial purposes and to improve its facial recognition features through artificial intelligence programs.
After users became increasingly aware of how FaceApp could use their photos, they also raised concerns about whether the Russia-based company was sharing the data it collected with the Russian government—and what ability they had to stop it.
“Given the popularity of FaceApp and these national security and privacy concerns, I ask that the FBI assess whether the personal data uploaded by millions of Americans onto FaceApp may be finding its way into the hands of the Russian government, or entities with ties to the Russian government,” wrote U.S. Senate Minority Leader Chuck Schumer (D-NY) in a letter to FBI Director Christopher Wray and Federal Trade Commission Chairman Joseph Simons. “Furthermore, I ask that the FTC consider whether there are adequate safeguards in place to prevent the privacy of Americans using this application, including government personnel and military service members, from being compromised.”
FaceApp has denied that it shares any user data with the Russian government. But the incident is the latest in a broad conversation about technology, accuracy, and expectations of user privacy related to facial recognition technology.
In 2017, the U.S. House of Representatives Oversight Committee found that 18 U.S. states have memorandums of understanding with the FBI that allow them to share databases with the Bureau—effectively resulting in more than half of American adults being part of a facial recognition database.
The U.S. Government Accountability Office (GAO) recommended in 2016 that the FBI make changes to its facial recognition database to improve data security and ensure the privacy, accuracy, and transparency of the data it includes. As of April 2019, the FBI had not fully implemented those recommendations.
“Facial recognition is a fascinating technology with a huge potential to affect a number of different applications. But right now, it is virtually unregulated,” said Oversight Committee Chairman Elijah Cummings (D-MD) in a hearing on the technology in May 2019.
Under Cummings’ direction, the committee has held several hearings on facial recognition technology, and its subcommittees are conducting deeper dives to provide recommendations on how to make use of the technology more accurate, while protecting Americans’ right to privacy and equal protection under the law.
“Facial recognition technology misidentifies women and minorities at a much higher rate than white males, increasing the risks of racial and gender bias,” Cummings said.
Joy Buolamwini, founder of the Algorithmic Justice League, was invited to testify at one of the Oversight Committee’s hearings. She explained that facial recognition technology is being rapidly adopted, but not always in a responsible way. And that can have negative impacts on marginalized people—such as minority populations already at risk for discrimination.
“Facial analysis technology that can somewhat accurately determine demographic or phenotypic attributes like skin type can be used to profile individuals, leaving certain groups more vulnerable for unjustified stops,” Buolamwini said in her testimony. “An Intercept investigation reported that IBM used secret surveillance footage from NYPD and equipped the law enforcement agency with tools to search for people in video by hair color, skin tone, and facial hair. Such capabilities raise concerns about the automation of racial profiling by police in the United States.”
These concerns have led some municipalities, such as San Francisco, to outright ban the use of facial recognition technology for the time being. The city was the first in the United States to prohibit use of the technology by city agencies, and the ban carries additional weight because it places limits on a region recognized for its role in advancing technology.
“In the absence of responsible federal limits on mass surveillance, cities have a duty to act,” wrote Aaron Peskin, San Francisco city supervisor and sponsor of the ban, in a tweet. “Face recognition technology disproportionately harms women & communities of color, and exposes us all to a dystopia of Orwellian proportion.”
Shortly after San Francisco’s ban went through, Oakland, California, passed a similar measure. And as of Security Technology’s press time, Somerville, Massachusetts, was also considering a ban.
Some security technology manufacturers are heeding these warnings about the development of facial recognition technology. In 2018, Axon—which owns and manufactures the Taser—created the Axon AI and Policing Technology Ethics Board to advise the company on the development of AI products and services.
The board released its first report earlier this year, finding that facial recognition technology is not yet reliable enough to justify its use on body-worn cameras.
“At the least, face recognition technology should not be deployed until the technology performs with far greater accuracy and performs equally well across races, ethnicities, genders, and other identity groups,” the board said. “Whether face recognition on body-worn cameras can ever be ethically justifiable is an issue the board has begun to discuss, and will take up again if and when these prerequisites are met.”
The board also said that jurisdictions should not adopt facial recognition technology without “open, transparent, democratic processes, with adequate opportunity for genuinely representative public analysis, input, and objection.”
Based on these findings, Axon CEO and founder Rick Smith said in a statement that the company would not move forward—as of now—on commercializing face matching products on its body cameras.
“We do believe face matching technology deserves further research to better understand and solve for the key issues identified in the report, including evaluating ways to de-bias algorithms as the board recommends,” Smith explained. “Our AI team will continue to evaluate the state of face recognition technologies and will keep the board informed about our research.”
Megan Gates is editor-in-chief of Security Technology. Contact her at firstname.lastname@example.org; Follow her on Twitter: @mgngates.