

IBM Discontinues Facial Recognition Business to Advance Racial Equality

IBM announced that it will discontinue its general-purpose facial recognition business and that it opposes the use of the technology for mass surveillance and racial profiling.

“IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” wrote IBM CEO Arvind Krishna in a letter to members of Congress. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”

Krishna shared IBM’s decision—which Axios reports was made over several months—in a letter about IBM’s stance against racial inequality in America. In it, he references IBM President Thomas J. Watson, who in 1953 refused to enforce Jim Crow laws at IBM facilities, and the continuing need to fight racism.

“Yet nearly seven decades later, the horrible and tragic deaths of George Floyd, Ahmaud Arbery, Breonna Taylor, and too many others remind us that the fight against racism is as urgent as ever,” Krishna wrote. “To that end, IBM would like to work with Congress in pursuit of justice and racial equality, focused initially in three key policy areas: police reform, responsible use of technology, and broadening skills and educational opportunities.”

Axios reports that IBM will continue to support existing clients using its facial recognition technologies, but that it will not “market, sell, or update these products.”

Facial recognition technology has come under scrutiny because studies have found that many of the algorithms it depends on exhibit different levels of accuracy depending on a subject’s sex, age, and racial background. A recent U.S. National Institute of Standards and Technology (NIST) study evaluated 189 software algorithms from 99 developers and found that, in one-to-many matching, false positive rates were higher for African American females than for any other group.

“Differentials in false positives in one-to-many matching are particularly important because the consequences could include false accusations,” the report explained.
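To make the distinction concrete: a one-to-many search compares a probe image against an entire enrolled gallery, so even a probe with no true match can incorrectly "hit" someone. Below is a minimal sketch of how per-group false positive rates might be tallied in such an evaluation; the record structure, group labels, scores, and threshold are illustrative assumptions, not NIST's actual data or methodology.

```python
from collections import defaultdict

# Hypothetical one-to-many search results. A false positive occurs when a
# probe with no true mate in the gallery still returns a top match whose
# score clears the decision threshold. All values here are made up.
THRESHOLD = 0.8

searches = [
    # (demographic_group, top_match_score, has_true_mate_in_gallery)
    ("group_a", 0.91, False),
    ("group_a", 0.42, False),
    ("group_b", 0.87, False),
    ("group_b", 0.35, False),
    ("group_b", 0.30, False),
]

attempts = defaultdict(int)         # non-mated searches per group
false_positives = defaultdict(int)  # non-mated searches that still "matched"

for group, score, has_mate in searches:
    if has_mate:
        continue  # only non-mated searches can produce false positives
    attempts[group] += 1
    if score >= THRESHOLD:
        false_positives[group] += 1

for group in sorted(attempts):
    fpr = false_positives[group] / attempts[group]
    print(f"{group}: false positive rate = {fpr:.2f}")
```

Comparing these per-group rates is what surfaces the differentials the NIST report describes; a system with uniform accuracy would show roughly equal rates across groups.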

Despite findings like the NIST analysis, law enforcement agencies continue to adopt and use facial recognition technology for surveillance operations, including monitoring the Black Lives Matter protests sweeping the United States and the world. The protests oppose police brutality and the murder of George Floyd, an unarmed black man who was killed while being arrested by Minneapolis police.

Earlier in June, BuzzFeed obtained a two-page memo that authorized the U.S. Drug Enforcement Administration (DEA) to conduct covert surveillance and collect intelligence on individuals participating in these protests. The memo also authorized the DEA to share the intelligence it gleaned with local and state law enforcement to “intervene” to “protect both participants and spectators in the protests,” as well as to arrest individuals who allegedly violated federal law.

Developments like this show that the United States needs federal privacy protections that restrict data collection, sale, and exploitation and that protect marginalized communities, writes Justin Sherman, a fellow at the Atlantic Council’s Cyber Statecraft Initiative, in an op-ed for WIRED.

“Post-9/11 surveillance of Muslim communities—including through CIA-NYPD cooperation—and the FBI’s COINTELPRO from 1956 to 1971, which targeted, among others, black civil rights activists and supporters of Puerto Rican independence (though also the KKK), are notable state surveillance programs that may come to mind,” Sherman explains. “But the history of surveillance in the U.S. is much richer, from custodial detention lists of Japanese Americans to intense surveillance of labor movements to stop-and-frisk programs that routinely target people of color.”

In the June 2020 issue of Security Technology, Dan Grimm—vice president and general manager of SAFR, a facial recognition platform—writes that “smart, national-level legislation” is needed to help technology developers create facial recognition solutions that benefit society, and that the industry itself needs to be proactive in addressing issues with its products.

“Today’s facial recognition systems are still imperfect,” Grimm explains. “And performance varies widely among them. Some offer remarkably low levels of bias while others offer recognition accuracy 10 times worse for black women than for white men. For facial recognition to be a positive force in society, it is imperative that developers both clearly disclose the level of bias they demonstrate and work tirelessly to eliminate it.”
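Grimm’s “10 times worse” figure is the kind of number such a disclosure could surface directly: the ratio of each group’s error rate to the best-performing group’s. A short sketch of that arithmetic follows; the group labels and rates are hypothetical, chosen only to reproduce a tenfold gap.

```python
# Hypothetical per-group false non-match rates (FNMR) for one system.
# The labels and values are illustrative, not measured results.
error_rates = {
    "black_women": 0.10,
    "white_men": 0.01,
}

# Express each group's error rate as a multiple of the lowest rate,
# which is one simple way a vendor could disclose its level of bias.
baseline = min(error_rates.values())
for group, rate in sorted(error_rates.items()):
    print(f"{group}: FNMR = {rate:.2%} ({rate / baseline:.0f}x the best-performing group)")
```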
