Should Facial Recognition Be Banned?

In a time of crisis, it’s tempting to let down our guard about surveillance. But privacy and human rights should always take center stage when considering how technology should be employed.

Like other technologies, facial recognition can be misused. Consider the automobile: without governmental oversight and traffic laws, its potential to inflict harm would rise substantially. Imagine driving down a street with no rules of the road, no speed limits, no lane markers, and no stop signs. It would be chaos. Where we are today with facial recognition in the United States is akin to lots of open road with very few traffic signs.

Civil rights organizations are right to call for government regulation. As an industry, we should join them in calling for smart, national-level legislation that holds technology developers and those who deploy the technology to high standards while leaving room to develop and deploy it in ways that benefit society. We have an obligation to work hand in hand with all stakeholders to move this process forward.

But let’s not forget that technology is neither intrinsically good nor bad. It’s how we use it that matters. In fact, we at SAFR firmly believe that technology can play a crucial role in upholding our democratic values instead of undermining them.

Blanket bans on facial recognition unnecessarily limit its positive uses. From improving school safety and finding missing people to strengthening security checkpoints at airports and streamlining entry to secure buildings, facial recognition can be a force for good, helping to make life safer and more convenient.

Today’s facial recognition systems are still imperfect, and performance varies widely among them. Some exhibit remarkably low levels of bias, while others deliver recognition accuracy that is 10 times worse for black women than for white men. For facial recognition to be a positive force in society, it is imperative that developers both clearly disclose the level of bias their systems demonstrate and work tirelessly to eliminate it.
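To make "disclosing the level of bias" concrete, here is a minimal, hypothetical sketch (in Python; the groups, data, and numbers are illustrative, not SAFR's methodology) of one common disparity measure: the false non-match rate computed per demographic group on a labeled benchmark, reported alongside the ratio between the worst- and best-performing groups.

```python
# Hypothetical illustration only: measure the per-group false non-match rate
# (FNMR) of a face matcher and report the disparity between groups. A real
# evaluation would use a large labeled benchmark, not this toy data.
from collections import defaultdict

# Each record is (demographic_group, genuine_pair_was_matched) for a pair of
# images known to show the same person.
records = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def per_group_fnmr(records):
    """False non-match rate per group: share of genuine pairs the system missed."""
    totals, misses = defaultdict(int), defaultdict(int)
    for group, matched in records:
        totals[group] += 1
        if not matched:
            misses[group] += 1
    return {group: misses[group] / totals[group] for group in totals}

fnmr = per_group_fnmr(records)
disparity = max(fnmr.values()) / min(fnmr.values())
print(fnmr)                                        # per-group error rates
print(f"worst/best disparity: {disparity:.1f}x")   # 3.0x on this toy data
```

A disclosure along these lines, accompanied by the benchmark and match thresholds used, would let customers compare systems on an equal footing.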

As an industry, we must be proactive in addressing these issues by explaining the missions our use of facial recognition supports, how we are developing our technology to offer excellent performance across the breadth of humanity, and the value of our technology relative to the alternative.

Is it okay to ask security officials to manually match images of persons of interest against hundreds of video feeds? What cost are we willing to bear? Too many vendors in this space lack awareness of bias, the willingness to be transparent about it, or the gumption to do something about it. This silence is enabling the recent proliferation of anti-facial-recognition legislative efforts in the United States.

When evaluating facial recognition systems, it’s important to ask tough questions because not all facial recognition solutions are created equal. What is the vendor doing to eliminate bias in design, development, and deployment? How is the vendor making it simple for customers to deploy the technology in a responsible—and ethical—manner?

At SAFR, we stand firmly against the misuse of facial recognition. We support smart legislation. But we also believe the technology has many positive use cases that can improve daily life and make us safer. Banning its use altogether does not help to shape its future. Instead, we should focus on regulations that support innovation, ensure ethical and responsible use, and encourage the development of positive applications across the public and private sectors.

Dan Grimm is vice president and general manager of SAFR®, a division of RealNetworks.
