Illustration by Security Management

Portland Bans Private Use of Facial Recognition Technology

City council members in Portland, Oregon, voted unanimously to enact the strictest ban on facial recognition technology in the United States.

The measures—approved on Wednesday, 9 September—bar the city’s agencies from using facial recognition technology and prohibit private entities from using the technology in public spaces, according to The Oregonian/OregonLive.

“We own our privacy and it’s our obligation to make sure that we’re not allowing people to gather it up secretly and then sell it off for either profit or for fear-based activities,” said Portland Commissioner Jo Ann Hardesty, who introduced the measures along with Mayor Ted Wheeler.

The ban on the city’s use of facial recognition technology goes into effect immediately. The ban on private use goes into effect on 1 January 2021 to provide a transition period and clear answers about how and when the ban applies.

“For instance, city officials clarified for one resident that the ban would stop a private enterprise like Starbucks from using facial recognition in public spaces such as sidewalks, as well as from using it within their retail spaces,” according to ZDNet. “The question, however, was initially met with some uncertainty from lawmakers.”

Some business organizations, including the Oregon Bankers Association, have requested that the council narrow the scope of the ban on private entities’ use of facial recognition technology. The association wrote a letter to the city explaining that facial recognition technology has “an important role to play in keeping our banks and their customers and employees safe.”

The Security Industry Association (SIA) also released a statement, calling Portland’s measures “shortsighted decisions that do not consider effective and beneficial applications of facial recognition.”

“Turning back the clock on technological advancement through a complete ban on private-sector use of technology that clearly keeps our fellow citizens safe is not a rational answer during this period of social unrest in Portland,” said SIA CEO Don Erickson. “It is hardly a model approach to policymaking that any government should adopt. Let’s act together now to thoughtfully educate the public about the legal and effective use of facial recognition technology while being mindful of legitimate questions raised about the impact of this technology on all stakeholders, including communities of color.”

The accuracy of facial recognition technology has been a major point of contention and received scrutiny from researchers over the last several years. In 2019, the U.S. National Institute of Standards and Technology (NIST) evaluated 189 software algorithms from 99 developers and found that most programs exhibit different levels of accuracy depending on demographics, including sex, age, and racial background.

“The study highlighted several broad findings across the algorithms: for one-to-one matching, Asian and African American faces had higher false positive rates than Caucasian images,” according to previous coverage in Security Management. “Among American-developed algorithms, there were similar rates of false positives in one-to-one matching for Asians, African Americans, and native groups. The American Indian demographic had the highest false positive rates.”

In their research, the NIST report authors noted that these false positives can be especially harmful depending on how the technology is applied. For instance, “erroneously alerting on an innocent person who may resemble someone on a watch list—could have long-term effects through a false accusation or potential false imprisonment.”

The technology could also be used—intentionally or unintentionally—to carry out racial profiling. This is one reason that IBM announced in June 2020 that it would discontinue its general-purpose facial recognition business and oppose the use of the technology for mass surveillance and profiling.

“IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” wrote IBM CEO Arvind Krishna in a letter to Congress. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”