
Addressing Ethical and Privacy Issues with Physical Security and AI

In my early days in the security guard industry, my job revolved around monitoring live surveillance cameras. Tasked with identifying any security-related anomalies, I relied on my training and insight to guide my reactions. But this human-only oversight meant activity or incidents could be missed, sometimes requiring lengthy reviews of VCR recordings to pinpoint events.

Back then, the intelligence in CCTV systems was human, and reliability was relatively limited. The game changed slightly with the advent of first-generation video motion detection technology. This new, albeit basic, feature could identify pixel changes within a predefined area and notify me and other security guards of any movement. This capability didn't make CCTV “smart,” but it enhanced my efficiency and accuracy. Legal considerations for its use focused mainly on privacy concerns related to camera placement, so engagement with the legal department was minimal.
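To illustrate the principle, here is a minimal sketch of that first-generation approach: frame differencing over a predefined region of interest. The file name, region coordinates, and thresholds are arbitrary stand-ins, not any product's actual logic.

```python
# Minimal sketch of first-generation motion detection: compare successive
# frames and flag pixel changes inside a predefined region of interest (ROI).
import cv2

ROI = (100, 50, 300, 200)        # x, y, width, height (arbitrary example values)
PIXEL_DELTA = 25                 # per-pixel intensity change counted as "motion"
MIN_CHANGED_PIXELS = 500         # how many changed pixels trigger an alert

cap = cv2.VideoCapture("camera_feed.mp4")   # hypothetical recorded feed
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)          # pixel-wise change between frames
    x, y, w, h = ROI
    region = diff[y:y + h, x:x + w]              # only look inside the predefined area
    changed = int((region > PIXEL_DELTA).sum())  # count significantly changed pixels
    if changed > MIN_CHANGED_PIXELS:
        print("Motion detected in ROI:", changed, "pixels changed")
    prev_gray = gray

cap.release()
```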

The landscape began to significantly transform with the arrival of IP cameras and Network Video Recorders (NVRs) equipped with motion detection, alongside cameras boasting advanced image processing. This evolution meant that surveillance systems were no longer solely reliant on human intelligence but were now embedded with a rudimentary form of synthetic intelligence—offering reliability that far surpassed the days of manual monitoring. Still, the effectiveness of these systems hinged on human operators for supervision and direction.

The truly transformative shift has come in recent years with the incorporation of artificial intelligence (AI) into integrated security systems and the development of advanced, intelligent pattern and behavior recognition. The new generation of AI-enhanced drones and robots has also notably extended the reach and efficiency of security systems, allowing for more comprehensive monitoring of campus activities and more reliable detection of suspicious behaviors than ever before.

Additionally, advancements in communication technologies, such as 5G and Wi-Fi HaLow, have extended the operational range of these AI-powered tools, pushing the boundaries of perimeter security. These technological strides have empowered security systems to identify anomalies and initiate responses with speed and accuracy beyond human capacity. AI-enabled security systems are reaching new heights of autonomous intelligence and can reliably outperform a human operator. Video systems can be continuously alert, without limitations on the scale or duration of surveillance. AI video automates tracking and monitoring for anything it's been programmed to do, and that can be the problem.


The Ethics and Privacy Issue

Video surveillance AI leverages convolutional neural networks (CNNs) to analyze and interpret visual imagery. Through deep learning algorithms, the system continuously learns from a database of video footage, enhancing its accuracy and reliability over time. The more extensive and diverse the database, the more effectively the AI can recognize patterns, detect anomalies, and improve performance.
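To give a rough sense of the structure involved, here is a minimal sketch of a CNN frame classifier in PyTorch. The architecture, input size, and class labels are illustrative assumptions, not any vendor's actual model.

```python
# Minimal PyTorch sketch of a CNN frame classifier, the basic building
# block behind many video analytics. Architecture and classes are illustrative.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learn low-level edges/textures
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # learn higher-level patterns
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, num_classes),  # e.g. "normal", "loitering", "intrusion"
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

model = FrameClassifier()
frame = torch.randn(1, 3, 224, 224)   # one RGB frame, stand-in for real video
scores = model(frame)
print(scores.softmax(dim=1))          # class probabilities for this frame
```

In practice, such a model is trained on the large, labeled footage database the paragraph above describes, which is exactly why the composition of that database matters so much for fairness.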

For instance, video AI can recognize and track specific individuals and, if programmed, initiate a person-specific response. This capability is the crux of the ethics and privacy concerns that surround these technologies, and it needs to be understood by anyone considering deploying AI in a security program.

AI Bias. Humans and the AI tools we engineer inherently possess biases. In the case of AI, bias stems from the algorithms themselves or from the datasets used to train them.

AI video engineers have the power, and the motivation, to substantially reduce these biases by adopting conscientious practices during the development phase. This includes using varied, high-quality training data and conducting thorough bias tests on algorithms. Such a proactive stance is vital for ensuring AI-based surveillance systems are applied justly and operate with fairness.

Biases manifest in various forms, such as racial, gender, socioeconomic, and age-related biases, often due to unrepresentative training data or data impacted by existing prejudices. For instance, inadequate lighting during the capture of video images can generate “bad video,” leading to unreliable references in the training database and thus degrading the AI's precision.

At times, biases are ingrained within the algorithm itself, potentially skewing results in favor of specific groups. Additionally, technologies like facial recognition in surveillance systems may show higher rates of misidentification for certain races or misclassification of genders, disproportionately impacting those groups. Socioeconomic and age biases could also result in inconsistent surveillance focus, unfairly targeting or overlooking certain communities.

Left unaddressed, these biases diminish the capacity of AI to deliver fair security measures and can contribute to societal disparities. Hence, the importance of diversified training datasets, along with stringent testing and transparency during development, cannot be overstated; these measures help minimize bias and enhance fairness in AI-driven surveillance systems. As an end user considering AI video, it's imperative to ask the manufacturer, “How have you addressed and minimized bias?”
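As one concrete example of what such a bias test can look like, here is a hedged sketch that compares false-positive alert rates across demographic groups on a labeled evaluation set. The records, group names, and fairness tolerance are all hypothetical.

```python
# Sketch of a simple bias audit: compare false-positive alert rates
# across demographic groups in a labeled evaluation set. Data is hypothetical.
from collections import defaultdict

# Each record: (group_label, model_flagged_as_threat, actually_a_threat)
evaluation = [
    ("group_a", True, False),
    ("group_a", False, False),
    ("group_b", True, False),
    ("group_b", True, False),
    # ... in practice, thousands of labeled examples per group
]

stats = defaultdict(lambda: {"false_pos": 0, "negatives": 0})
for group, flagged, is_threat in evaluation:
    if not is_threat:                       # only non-threats can be false positives
        stats[group]["negatives"] += 1
        if flagged:
            stats[group]["false_pos"] += 1

rates = {g: s["false_pos"] / s["negatives"] for g, s in stats.items() if s["negatives"]}
print(rates)

# A large gap between groups signals disparate impact worth investigating.
MAX_GAP = 0.05                              # illustrative fairness tolerance
if max(rates.values()) - min(rates.values()) > MAX_GAP:
    print("Bias test failed: false-positive rates diverge across groups.")
```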

Notably, there are scenarios where precise individual identification is intentional and necessary, such as in casinos, where facial recognition AI is coupled with incident reporting systems to spot known offenders and take appropriate action. This adds undeniable complexity that warrants measured review and consideration of intent prior to deployment.

Privacy Concerns. Universal laws regarding AI and privacy don't exist—yet—and ethical frameworks are still largely undeveloped. Companies adopting AI video solutions must consider ethics, including privacy, when developing their AI-enabled security systems.

There must be a balance between ensuring public and private safety and protecting personal privacy. That balance becomes precarious when video technologies can surveil individuals in previously impossible ways, potentially invading personal liberties and privacy.

Instances of AI surveillance overreach are not uncommon: systems designed to enhance security can inadvertently or deliberately extend into overly intrusive monitoring, thereby impacting individual freedoms. Examples include the well-known Chinese state AI-enabled monitoring of the Uyghur Muslim minority in Xinjiang and Australia's use of AI to help enforce COVID-19 quarantine rules.

The distinction between public interest and private life may be blurred, raising ethical concerns about how AI-enabled surveillance is deployed. Integrating AI into surveillance systems also increases potential concerns surrounding the collection, processing, and storing of sensitive biometric information.

For AI video systems to be ethically justified, they must be necessary and proportional to the perceived threat or need; individual identification is especially challenging to justify. This principle requires a thoughtful assessment of surveillance initiatives to ensure they do not exceed what is needed to address specific security concerns. The challenge for our industry lies in defining and implementing practices that strike an effective balance.

An excellent instance of this was a law enforcement project executed with an AI chip maker. The department set up AI-enabled video surveillance of a neighborhood to evaluate specific, anonymized foot-traffic patterns and determine likely locations for illicit drug sales. Further investigation confirmed the initial findings.

The ethical deployment of AI in surveillance necessitates clear policies and robust mechanisms for public oversight. Transparency about how surveillance technologies are used, what types of data are collected, and for what purposes plays a critical role in maintaining public trust.

Furthermore, there must be established pathways for individuals to challenge or seek redress for privacy violations. Accountability mechanisms, especially policies, ensure that companies employing surveillance technologies can be held responsible for their actions through their ethics programs, reinforcing the ethical imperative to respect and protect individual privacy rights in the age of AI-enhanced security measures.


Some Practical Advice

Companies that have, or are considering, AI-enabled systems should reflect on the following:

Use Cases, Impact Assessments, and Policy. Develop a set of use cases that narrow how and when the AI system will be applied, and implement a policy that clearly limits its use to those defined use cases; a sketch of how such limits can be enforced in software appears below.

Consider undertaking a privacy impact assessment to demonstrate due diligence and avoid unnecessary privacy risks. The policy needs to incorporate any federal or state/provincial laws of any country where the system is deployed. The EU General Data Protection Regulation (GDPR) is a prime example of a data and privacy law that organizations operating in Europe must comply with.
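As one way to make such a policy operational, here is a hypothetical sketch that encodes approved use cases as an explicit capability allowlist. The use-case names and capabilities are invented for illustration, not drawn from any real deployment.

```python
# Hypothetical sketch: encode the approved use cases as an explicit
# allowlist, and refuse any analytic capability outside of it.
APPROVED_USE_CASES = {
    "perimeter_intrusion": {"motion_detection", "line_crossing"},
    "loading_dock_safety": {"motion_detection", "ppe_detection"},
    # Note: no use case grants "facial_recognition" - identification
    # capabilities stay disabled unless policy explicitly adds them.
}

def authorize(use_case: str, capability: str) -> bool:
    """Return True only if the capability is allowed for this use case."""
    return capability in APPROVED_USE_CASES.get(use_case, set())

assert authorize("perimeter_intrusion", "line_crossing")
assert not authorize("perimeter_intrusion", "facial_recognition")
```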

Limit Capabilities. Identify the capabilities needed for the defined use cases and limit the system to only those that are necessary.

For example, one retail AI manufacturer anonymizes human beings by converting the video image to stick figures, and the AI training focuses on the behavior of stick representations. This significantly minimizes the occurrence of AI bias when the system only needs to evaluate behavior without demographic details.
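To make the stick-figure idea concrete, here is a hedged sketch under simplifying assumptions: an upstream pose-estimation model (not shown) is assumed to have already produced body keypoints for each person, and the code renders only a skeleton onto a blank canvas, so no appearance or demographic detail survives. The keypoint layout and skeleton edges are deliberately simplified.

```python
# Sketch of stick-figure anonymization: given pose keypoints from some
# upstream pose-estimation model (assumed here), draw only a skeleton
# on a blank canvas so no appearance or demographic detail survives.
import cv2
import numpy as np

# Simplified keypoint layout (a real model outputs more joints):
# 0=head, 1=neck, 2=l_hand, 3=r_hand, 4=hip, 5=l_foot, 6=r_foot
SKELETON_EDGES = [(0, 1), (1, 2), (1, 3), (1, 4), (4, 5), (4, 6)]

def anonymize(frame_shape, people_keypoints):
    """Render each person's keypoints as a stick figure on a blank frame."""
    canvas = np.zeros(frame_shape, dtype=np.uint8)   # original pixels discarded
    for keypoints in people_keypoints:               # one (x, y) list per person
        for a, b in SKELETON_EDGES:
            cv2.line(canvas, keypoints[a], keypoints[b], (255, 255, 255), 2)
    return canvas

# Hypothetical keypoints for one person in a 480x640 frame.
person = [(320, 80), (320, 140), (260, 220), (380, 220),
          (320, 280), (280, 400), (360, 400)]
stick_frame = anonymize((480, 640, 3), [person])
cv2.imwrite("anonymized.png", stick_frame)
```

The design choice here is what makes the approach privacy-preserving: downstream behavior analytics only ever see the skeleton frames, never the raw video.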

Discuss AI Development with the Manufacturer. As part of the due diligence process, formally discuss and document how the AI vendor developed the application and the datasets used to train the AI. Ask whether the algorithms were independently audited. In particular, ask what data was used and how. It's advisable to be skeptical, given the potential issues at stake.

Test Cases and Proof of Concept. Seriously consider conducting trials and proof-of-concept evaluations before investing in AI video. Ensure that the company's stakeholders, especially the Legal and Ethics departments, are engaged and that their requirements are factored into the tests.

Communicate. Reach out to all stakeholders and keep them engaged throughout the evaluation and adoption effort. While AI may be used for security, its potential impact on the company's brand affects everyone.

The integration of advanced CNN AI into both traditional and emerging security technologies marks a revolutionary shift in how security systems safeguard people and assets. While the risk of unfair bias or deliberate misuse is significant, careful and intentional planning can maximize system performance to meet organizational needs, all while upholding your principles of ethics and fairness.

William Plante is the director, integrated solutions risk group, at Everon Solutions and a member of the ASIS International Emerging Technology Community Steering Committee.
