Government Facial Recognition Threatens Our Rights to Free Speech and Protest
After law enforcement infamously cleared protesters from Washington, D.C.’s Lafayette Square in June 2020 by using fists and tear gas, they used a controversial new tool to identify protesters: face recognition. Police and prosecutors selected a photo from Twitter and ran it through the National Capital Region Facial Recognition Investigative Leads System (NCRFRILS) to identify a protester and then charge him with criminal assault.
More than a dozen local departments and U.S. federal agencies had access to NCRFRILS. They used it more than 12,000 times for various investigations and cases before the government dismantled it on 1 July 2021. Despite this widespread use, defense attorneys and the public at the time had no idea the system even existed. In the Lafayette Square case, the protester argued that the government’s use of NCRFRILS violated his rights. He demanded discovery on the previously secret facial recognition system—prosecutors then dropped the charge.
This was not the first or last time that the government used facial recognition to take criminal action against protesters. In 2016, Baltimore police ran social media photos through facial recognition tools to locate, identify, and arrest people protesting the police killing of Freddie Gray. Miami police used the highly controversial Clearview AI facial recognition tool to identify and arrest a protester who took to the streets after the police murder of George Floyd. Police departments across the United States have used facial recognition tools like Clearview AI and Amazon’s Rekognition to criminally investigate protesters. (Amazon has indefinitely paused its sale of Rekognition to police.)
Government facial recognition chills and deters freedom of expression. People often need some form of privacy and anonymity to speak out, especially against police violence or other government practices they consider abusive. At a protest, anonymity comes from being just one person in a crowd. A protester can yell, chant, march, and even commit civil disobedience, but still be relatively anonymous among a sea of other protesters. But in the digital age, as large-scale data collection, storage, and analysis get cheaper by the day, government surveillance and analytical tools like facial recognition allow the government to easily identify everyone at a protest—threatening that anonymity.
The First Amendment to the U.S. Constitution protects the rights to free speech and association, including the abilities that people need to exercise those rights—such as privacy and anonymity. So, the First Amendment protects people’s anonymous speech, private conversations, confidential receipt of unpopular ideas, ability to gather news from undisclosed sources, and confidential membership in groups, especially dissident ones.
During the Civil Rights Movement, for example, Alabama tried to oust the National Association for the Advancement of Colored People (NAACP) from the U.S. state unless it handed over a list of its members. The U.S. Supreme Court held that this violated NAACP members’ freedom of association. Without their anonymity, the Court wrote, members would face “economic reprisal, loss of employment, threat of physical coercion, and other manifestations of public hostility.”
Research confirms that surveillance like facial recognition chills free expression. A 2013 report by the City University of New York School of Law documented how the New York Police Department’s (NYPD) extensive post-9/11 surveillance of Muslims throughout the northeast United States—U.S. citizens and new immigrants alike—created a pervasive climate of fear and distrust and stifled Muslims’ speech and association. One young Brooklyn resident remarked: “Free speech isn’t a privilege that Muslims have.”
Many Muslim students said they were unwilling to protest or otherwise express anger at the NYPD itself. And many Muslims began avoiding using the word “jihad” altogether, even though translated from Arabic it simply means “to strive” and is an everyday term for making moral efforts. Pervasive government surveillance instilled in many a belief that they must constantly watch what they say and do.
“Not everyone has the same level of imaan [faith]. They’ll get discouraged. People tell me ‘I’ll make my salaah [prayer] at home.’ They mention the [NYPD] camera right outside the mosque as the reason,” said a Brooklyn imam interviewed for the report.
Other research also confirms the chilling effect of government surveillance. A 2016 study of online speech by a professor at Wayne State University showed that when people know they are being surveilled and believe the surveillance is unjustified, they are less likely to speak out—even in a friendly political climate. A 2013 survey by the PEN American Center found that writers respond to government surveillance by not writing about sensitive topics, such as nuclear weapons or child abuse, not talking about them on the phone or in email, and avoiding social media. Similarly, a 2016 survey published by the Berkeley Technology Law Journal found that Internet users respond to government surveillance by avoiding researching sensitive topics online.
Moreover, many facial recognition algorithms are biased against Black and Brown people—which is of particular concern when it comes to criminalization, whether of protesters or in general. As brought to light by Black-led protests during the past few years, including in response to the police killings of George Floyd and Michael Brown, the U.S. criminal legal system arrests, prosecutes, incarcerates, and even kills Black and Brown people in vastly disproportionate numbers. Police use of biased surveillance algorithms only exacerbates that problem.
A landmark 2018 study co-authored by Joy Buolamwini and Timnit Gebru for the Algorithmic Justice League Project found that three commercially available facial recognition algorithms misclassified the gender of women and darker-skinned people at far higher rates than that of men and lighter-skinned people. For example, IBM’s algorithm misclassified the gender of darker-skinned women 35 percent of the time, but of lighter-skinned men only 0.3 percent of the time. Follow-up research in 2019 found the same problem with two other algorithms, including Amazon’s Rekognition. Buolamwini and Gebru’s work helped spur more investigations, such as a 2018 test by the American Civil Liberties Union (ACLU), which found that Rekognition incorrectly matched 28 members of the U.S. Congress, disproportionately people of color, to mugshots.
Government facial recognition’s threat to freedom of expression through criminalization and racism is a big reason why my organization, the Electronic Frontier Foundation (EFF), supports a ban on the tool. It is so destructive that governments should not use it at all. The proven flaws in many facial recognition algorithms show that it is not a reliable tool for protecting the public. And in any case, this tool simply gives governments too much power. Civil liberties are not in tension with security; they are an essential part of it. Civil liberties protect people who record police misconduct and abuse, journalists who expose government corruption, and security researchers who find dangerous flaws in medical and other commercial products.
Others are coming to the same conclusion. As of January 2020, California police are under a three-year moratorium on using facial recognition on mobile and body-worn cameras. Major cities like Boston and San Francisco have banned government facial recognition entirely. And EFF supports a ban at the U.S. federal level. We must not let government surveillance deter people from exercising their fundamental rights to protest and speak freely.
Mukund Rathi is an attorney at the Electronic Frontier Foundation.