
What ‘Responsible AI’ Means in the Modern Security Industry

Whether your business is developing artificial intelligence (AI) solutions in-house, incorporating existing AI solutions into your products, or implementing AI and analytics in your operations, it’s important to understand the potential dangers and pitfalls that may exist.

The security industry has an advantage here. Since security providers have been leveraging AI-based solutions for a long time, our industry has had a head start grappling with some of today’s most pressing AI-related issues. Of course, that doesn’t mean we’ve solved every moral quandary or addressed every ethical consideration, but it does mean that security providers probably don’t need to start from scratch when considering how to approach AI.

The Organization for Economic Cooperation and Development (OECD) has developed an initial framework to help shape a human-centric approach to AI, and this has become an important, foundational resource for understanding what responsible AI means in practice.  

In today’s world, responsibility doesn’t just mean making sure the technology is not used in an illegal or inappropriate manner—it means prioritizing openness and transparency to ensure customers understand how the technology works and how to use it most effectively. Technology providers are already finding that a responsible approach to AI is helping them build trust with their channel partners and, ultimately, with their customers as well.

AI and Its Drawbacks in the Modern Security Landscape

Security providers use AI to accomplish two primary goals: automating tasks and generating actionable insights.

The ability to automate certain elements of detection and response has revolutionized video analytics as we know it. No longer do businesses need to rely on security personnel to watch banks of wall monitors—their security solutions can automatically alert them when a potential security incident occurs. As AI advances and edge devices become more powerful, analytics are becoming more accurate and reliable. These capabilities have become essential to modern security deployments, driving the industry toward more efficient and proactive solutions.

But automation comes with drawbacks, too—and deploying AI responsibly means understanding and accounting for those drawbacks. This starts with conducting a thorough risk assessment of each AI use case.

For example, AI-based analytics are very good at reducing false alarms by verifying potential security incidents before sending alerts—but what is the potential fallout if the AI makes the wrong decision? It might not be a big deal if an AI solution fails to notice a pedestrian cutting across a courtyard at night. But in other circumstances, a trespasser could be the precursor to theft, vandalism, or even sabotage.

If AI is going to be trusted to make important decisions, it’s critical to understand what the consequences might be if it gets those decisions wrong—and to know where it is necessary to have a human in the loop. Customers can only conduct that assessment if their partners are providing them with accurate information.
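To make that trade-off concrete, consider the small sketch below. It is purely illustrative and not tied to any particular product: it assumes a hypothetical analytic that reports a detection confidence, and it routes low-confidence or high-consequence events to a human operator instead of acting on them automatically.

```python
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    DISMISS = "dismiss"          # treat as a likely false alarm
    AUTO_ALERT = "auto_alert"    # send the alert without human review
    ESCALATE = "escalate"        # route to a human operator for verification


@dataclass
class Detection:
    label: str          # e.g. "person", "vehicle"
    confidence: float   # 0.0 to 1.0, as reported by the (hypothetical) analytic
    zone_severity: int  # 1 = low-consequence area, 3 = critical area


def triage(detection: Detection) -> Action:
    """Decide how to handle a detection based on confidence and consequence.

    The thresholds here are placeholders; in practice they would come out of
    the risk assessment for each use case and site.
    """
    # In critical zones, never auto-dismiss: a missed trespasser could precede
    # theft, vandalism, or sabotage, so a person always reviews borderline events.
    if detection.zone_severity >= 3:
        return Action.ESCALATE if detection.confidence < 0.9 else Action.AUTO_ALERT

    # In low-consequence zones, suppress low-confidence events to cut false alarms.
    if detection.confidence < 0.5:
        return Action.DISMISS
    return Action.AUTO_ALERT if detection.confidence >= 0.8 else Action.ESCALATE


# Example: a 60-percent-confidence person detection in a critical zone goes to a human.
print(triage(Detection(label="person", confidence=0.6, zone_severity=3)))
```

The point is not the specific thresholds but the structure: the severity of a wrong decision, not just the model's confidence, determines when a person must stay in the loop.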

When Evaluating Risk, Transparency Is Essential

To perform a risk assessment, businesses need to know exactly what their AI solutions are capable of and what they are not. This means providers have a responsibility to ensure that they are communicating what their solutions can and cannot do in a clear and open manner. Providers that misrepresent their solutions can lead customers to lose faith in the technology as a whole—and with AI tools becoming increasingly common, poisoning the well with misinformation could be extremely damaging.

For businesses in sensitive industries like critical infrastructure, manufacturing, and chemical engineering, the potential impact of a missed alarm could be dire. If video analytics fail to detect a saboteur sneaking onto the property or a chemical vat overheating, it can mean more than just lost profits—it could mean serious injuries or even lost lives. That means technology providers need to ensure they are accurately representing their capabilities so customers can determine which solutions best meet their needs and whether additional supporting controls may be necessary. They need to understand how the technology works and what the consequences may be if the solution doesn’t function as intended.

For instance, if one type of camera can’t provide the necessary coverage, a different deployment configuration could help. If one analytic struggles to produce the desired insights, additional data sources may be needed. Ultimately, it all comes back to understanding the limitations of the technology and how they impact the needs of the customer.

Using Data Analytics in a Responsible Manner

This brings us back to actionable insights. Today’s AI-driven security solutions generate significant volumes of data that organizations use to better understand what is happening in a given location and make decisions based on the information provided. It’s critical to ensure organizations understand the limitations of both the AI model and the data it is analyzing, and that they are aware of any potential biases.

If certain devices or analytics consistently struggle to perform in different lighting or weather conditions, that is critical for the customer to know—and will almost certainly impact how the technology is used. Likewise, if an audio analytic struggles to differentiate sounds in a noisy environment, a manufacturing or construction company may want to go in a different direction. Knowing how solutions perform in different conditions will inevitably factor into how they are deployed.
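One simple way to surface those gaps is to break evaluation results out by condition rather than reporting a single aggregate accuracy figure. The sketch below is illustrative only; it assumes a hypothetical set of labeled test clips tagged with the lighting and weather conditions they were captured under.

```python
from collections import defaultdict

# Hypothetical evaluation records: (lighting, weather, detected_correctly)
results = [
    ("daylight", "clear", True),
    ("daylight", "rain", True),
    ("night", "clear", False),
    ("night", "fog", False),
    ("night", "clear", True),
]

# Group outcomes by condition so weaknesses are not hidden in an overall average.
by_condition = defaultdict(list)
for lighting, weather, correct in results:
    by_condition[(lighting, weather)].append(correct)

for (lighting, weather), outcomes in sorted(by_condition.items()):
    rate = sum(outcomes) / len(outcomes)
    print(f"{lighting:>8} / {weather:<5}: {rate:.0%} detection rate ({len(outcomes)} clips)")
```

Reported this way, it becomes obvious that an analytic performing well in daylight may still be unsuitable for an unlit perimeter, which is exactly the kind of limitation customers need to understand before deployment.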

Generative AI is still in its early stages, but it already shows great promise when it comes to augmenting safety and security capabilities with features like improved user interfaces and plain language search capabilities. But AI is evolving rapidly, which means the impact of these new AI tools—both positive and negative—is not yet well understood. That means providers have a responsibility to conduct robust testing in real-world situations. These AI solutions don’t have the same level of maturity as older analytics solutions, and providers need to gauge how the algorithms perform in the environments where customers will actually deploy them. It’s important to be cautious with newer technology, and the more information providers can give customers, the better. No solution is perfect, but identifying where potential problems may arise is an essential first step toward mitigating them in a responsible manner.  

Ensuring Responsible Use Poses a Unique Challenge

A technology provider’s responsibility doesn’t end when the customer takes possession of the product. It is also important to limit the potential for abuse and to ensure the product is not used for illegal or unethical purposes. That’s easier said than done, of course—there are ways to misuse any technology, and security and surveillance devices are no different. But there are concrete steps technology providers can take to reduce the potential for abuse and ensure they are working with customers that share their vision for responsible use. Yes, that may involve refusing to sell to customers with a penchant for using technology in an irresponsible manner, but there are also more systemic practices providers can adopt to better position themselves for success.

Perhaps the most important step involves cultivating successful, long-term relationships throughout the channel. Building relationships with customers is important, but it’s equally important to develop a strong rapport with the vendors and integrators who provide key knowledge and services throughout the sales pipeline and the product lifecycle. A big part of that ties back to being transparent and open about your products and their capabilities. The more vendors and integrators can trust you and your products, the better your relationship will be—and the better they will be able to recommend you to customers who align with your values and vision.

Ultimately, technology providers have a responsibility to work with partners who share their ambitions when it comes not just to AI, but their broader sense of ethics and values. When organizations have a shared set of values, that creates trust—and trust can provide a significant competitive advantage. After all, who is a vendor or integrator more likely to recommend to their customers—one that acts impulsively, or one that has demonstrated a commitment to responsible behavior? That choice is clear, and now is the time to start building that reputation.

As AI evolves, businesses that have exhibited a thoughtful, considered, and responsible approach to AI from the early days of the technology will have earned the trust of their partners and customers in a way that sets them apart from their competitors.

Establishing a Long-Term Vision for Responsible AI

Given the rate of technological advancement, it’s hard to predict what AI will look like one year from now—let alone five or 10. That uncertainty presents challenges, but it also makes this the perfect time to commit to openness and transparency.

As AI and its use cases continue to evolve, providers that have taken an honest and responsible approach to the technology will find their partners and customers turning to them for reliable advice and expertise. By improving their own AI practices and fostering strong relationships throughout the channel, today’s security providers can lay the groundwork for not just responsible AI development, but responsible deployment and usage as well.  

Mats Thulin is the director of core technologies at Axis Communications, where he is responsible for the long-term technology development in video analytics, media, and security. With a diverse background in both large enterprises and startups, Thulin brings a wealth of business and technology expertise. He holds a master’s degree in electrical engineering from the Lund Institute of Technology.
