
The All-Too-Dumb World of Smart Cities Technology

You’re a security professional, and your job is to keep data safe. Ideally, your clients do their part: they don’t collect mountains of sensitive information they don’t need, and they definitely don’t share that data with dangerous third parties. But cities and towns do exactly the opposite when they adopt so-called “smart cities” tech: Internet-connected sensors, cameras, and meters that blanket our communities.

Smart cities tech runs the gamut from policing tools and transit sensors to educational software and more. It’s supposed to help municipal governments. In practice, the tech routinely hoovers up sensitive data on city residents, including their movements and audio, video, and biometric data—even before cities know what to do with it—and even though they may never know what to do with it.

Typical municipal policies make this data easily accessible to police and immigration agencies, who know exactly what to do with detailed data tracking city residents. They weaponize cities’ data against Black, Indigenous, and other people of color (BIPOC), immigrants, LGBTQIA people, and other over-policed communities. Newfangled tools support old-fashioned biased policing.

To appreciate the scope of the problem, consider the range of smart cities tools. Policing tools include facial recognition software, which may heighten BIPOC individuals’ risk of false arrest, especially as typically used by police departments. Predictive policing tools falsely justify focusing police attention where it’s always gone, on low-income Black and Latinx communities: a high-tech rationalization for longstanding racist over-policing. Neighborhood watch apps recruit private citizens to spy on their neighbors, extending the reach of police surveillance and putting Black and brown neighbors at risk of dangerous interactions with police and vigilantes.

Meanwhile, in the transit arena, smart cities tools vacuum up residents’ location data. Touchless payment systems track public transit riders; automated license plate readers track drivers; bike and scooter share systems track users by their pick-up and drop-off points. Transit cameras and facial recognition add another layer of surveillance. A small fraction of this data might, in theory, make public transit work better, but in practice, most “smart” transit tools fall short of the marketing hype. All the while, this location data remains available to the police.

In our schools, smart cities tech puts students at risk without any established benefit. Educational technology (EdTech) is everywhere. Students take online exams under the watch of online proctoring software that collects audio, video, and even biometric data. Security cameras follow students in the hallways of their schools. Online study tools gather a range of sensitive student data without the protections of the Family Educational Rights and Privacy Act, the U.S. federal law governing access to student records. It is often not clear that EdTech serves actual educational needs, and it is sometimes clear that it hurts students. EdTech data is available to police and immigration agencies, and police have already used it to choose which students to surveil.

Our organization, the Surveillance Technology Oversight Project (S.T.O.P.), recently launched Just Cities, a public education campaign to help communities evaluate and oppose dangerous smart cities technology. Produced in partnership with advocates, community members, and policymakers, our three-part framework identifies common risks of expanding municipal technology in policing, transit, and education.

Where tech works, we suggest safeguards to protect communities. First and foremost, cities should guard against the co-option of transit and education tech for policing purposes. Under the American legal system, transit and EdTech data (like most other municipal data) is available to police with, at most, a warrant. We suggest bans on such access, paired with warrant requirements and data destruction practices that effectively prevent use of the data for law enforcement purposes. We also suggest that communities identify their needs first and only then authorize limited data collection; that they seek community input and approval when considering new tech; and that, among other recommendations in the guide, they use third-party evaluations to test tools’ effectiveness and their impact on over-policed communities.

Where tech is a threat, we put aside these recommended fixes and offer arguments for banning smart cities tech outright. Facial recognition has no place in our schools, in our transit systems, or in the hands of law enforcement. EdTech tools that pose equity concerns, such as algorithms designed to detect misconduct that in practice flag people with disabilities, should be rejected entirely. Oftentimes, smart cities tech is just dumb.

Cities need to safeguard the sensitive resident data in their possession. By taking a cautious approach to so-called smart cities tech, they can sidestep the avoidable security problems that come from collecting data they don’t need and that attracts law enforcement’s attention. We invite you to visit the Just Cities site to learn more.

Eleni Manis is research director at the Surveillance Technology Oversight Project (S.T.O.P.). Manis joined S.T.O.P. after working on government agencies’ technology policies at the New York City Mayor’s Office of the Chief Technology Officer and the New York City Mayor’s Office of Operations. She began her career as a professor of ethics and political philosophy at Franklin & Marshall College, and now works at the intersection of her expertise in ethics, democratic justice, and technology policy.

© Eleni Manis
