

Quantifying Commitment: The Value of Responsible Technology in Security

Responsible technology is evolving into a fundamental license to operate and do business. Recent market research reveals that 85 percent of technology buyers believe that the responsible innovation and use of artificial intelligence, video analytics, and video security systems will be a prerequisite for doing business in the future. This shift isn’t merely aspirational; it represents a concrete change in how organizations evaluate and select technology partners.

As I meet with security leaders worldwide, I hear the same thing over and over: Responsible technology matters, but most companies are a little stuck on how to show it—and prove it.  

We also need to be realistic. While most buyers say responsible technology matters, it's still more aspirational than a concrete procurement requirement. Even so, times are changing, and as an industry we had better embrace it.

Setting the Foundation Through Policy and Practice

The first step in quantifying commitment to responsible technology lies in clear policies and governance. Because responsibility should cover both innovation and use, technology providers must also help define what “responsible use” actually means in practice and establish criteria that organizations can evaluate against.

Milestone research shows that 69 percent of organizations already have detailed principles in place for the responsible use of artificial intelligence (AI) and video technology. The key differentiator is how these policies translate into actionable practices.

Companies selling the technology can begin shaping responsible policy and culture by incorporating human rights clauses into all contracts with end users, partners, and technology providers. These agreements should outline the commitment to responsible use and include provisions for terminating relationships in case of serious violations.

Additionally, end-user organizations themselves need internal policies covering responsible technology use, AI deployment, and data handling practices.

Another important measure is training programs on responsible technology principles, bias recognition, and ethical decision-making, which foster a genuine understanding of why responsible technology matters. For developers, training should focus on recognizing and preventing unconscious bias in AI systems, a complex challenge that requires ongoing education and awareness.

Measuring Implementation and Impact

When we talk about measuring our commitment to responsible technology, it starts with being open about how we handle data. We found that the best approach is to document everything from how we collect and use data to the specific steps we take to protect it. Regular audits are important, but when it comes to AI systems, we need to go further. We must test for potential biases and document how we address any issues we find. This isn’t just about checking boxes; it's about proving that we're actively working to prevent problems before they occur.
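To make that kind of bias testing concrete, here is a minimal sketch of one way an audit could be run; the metric (false-positive rate per group), the group labels, and the disparity threshold are illustrative assumptions, not a description of any specific vendor's audit process:

```python
# Hypothetical bias-audit sketch: compare false-positive rates across groups
# in a labeled evaluation set and flag disparities for documented follow-up.
from collections import defaultdict

def false_positive_rates(results):
    """results: iterable of (group, predicted_positive, actually_positive)."""
    fp = defaultdict(int)   # false positives per group
    neg = defaultdict(int)  # actual negatives per group
    for group, predicted, actual in results:
        if not actual:
            neg[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

def flag_disparities(rates, max_gap=0.02):
    """Flag group pairs whose false-positive rates differ by more than max_gap."""
    flagged = []
    groups = sorted(rates)
    for i, a in enumerate(groups):
        for b in groups[i + 1:]:
            if abs(rates[a] - rates[b]) > max_gap:
                flagged.append((a, b, rates[a], rates[b]))
    return flagged

# Example with made-up evaluation data (group, predicted, actual):
sample = [("daylight", True, False), ("daylight", False, False),
          ("low_light", True, False), ("low_light", True, False),
          ("low_light", False, False)]
rates = false_positive_rates(sample)
print(rates, flag_disparities(rates))
```

Flagged disparities would then feed into the documented remediation steps described above, so the audit trail shows not just that testing happened, but what was found and fixed.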

When it comes to video security, we need clear ways to measure responsible use. From our experience, this starts with being transparent about how, where, when, and for what purpose video data is used. Just as important is having controls over who can access the video, both live and recorded. Sometimes we also need to step back and ask: Does this deployment really match the actual security risk?

An approach we’re taking to protect privacy in AI development is the use of synthetic data. To address the challenge of ethically sourcing training data, we’ve built a virtual city in our research department. This lets us train AI models on simulated scenarios instead of real people’s private data. It’s a practical solution that not only protects privacy but also helps minimize bias. For example, when dealing with technical challenges like accurately detecting people in different lighting conditions, synthetic data lets us create countless variations to improve accuracy without compromising anyone’s privacy.
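As a rough illustration of the lighting example only, the sketch below shows how many exposure variants of one synthetic frame could be generated for training; the placeholder renderer and the exposure range are assumptions for illustration, not Milestone's actual pipeline:

```python
# Minimal sketch: vary the exposure of a synthetic frame so a detector can be
# trained across lighting conditions without using footage of real people.
import numpy as np

def render_virtual_scene(seed: int) -> np.ndarray:
    """Placeholder for a frame rendered from a simulated city; here just noise."""
    rng = np.random.default_rng(seed)
    return rng.uniform(0.0, 1.0, size=(480, 640, 3)).astype(np.float32)

def vary_lighting(frame: np.ndarray, exposure: float) -> np.ndarray:
    """Scale pixel intensities to imitate darker or brighter conditions."""
    return np.clip(frame * exposure, 0.0, 1.0)

base = render_virtual_scene(seed=42)
variants = [vary_lighting(base, e) for e in np.linspace(0.1, 1.5, num=15)]
print(len(variants), variants[0].shape)
```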

Building Trust Through Verification

Third-party validation plays an essential role in demonstrating a commitment to responsible technology. Suppliers should seek out and maintain relevant certifications, participate in industry standards development, and engage with independent auditors to verify their practices. End-user organizations and their security leaders should look for suppliers that have signed onto frameworks like the G7 Code of Conduct on AI and the EU AI Pact, as these commitments require specific actions and accountability measures that demonstrate a vendor's dedication to responsible technology use.

Our industry faces unique challenges in working with governments and regulators: the physical security industry is just a fragment of the entire technology sector, and we don’t have the resources for massive branding campaigns like the tech giants. That’s why security leaders should actively engage with U.S., UK, and EU legislators to help shape practical, effective regulations that can drive global standards.

In the European Union, companies are already preparing for the EU AI Act, which will require extensive documentation and testing of AI systems. Forward-thinking organizations are developing metrics now to track their readiness for these requirements, including documentation of AI system accuracy, regular bias assessments, and transparency reporting mechanisms.

Consider a practical example: when implementing facial recognition systems for security applications, organizations can establish a “four-eyes principle” where two trained professionals must verify any match before action is taken. This approach combines the efficiency of AI with human judgment, and its effectiveness can be measured through false-positive rates, verification time, and accuracy improvements over AI-only systems.
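One way to picture that workflow and its metrics is the minimal sketch below; the data model, threshold of two reviewers, and timestamps are illustrative assumptions rather than a description of any specific product:

```python
# Illustrative "four-eyes" review queue: an AI-flagged match is acted on only
# after two different reviewers confirm it, and the decision log supports the
# metrics mentioned above (false-positive rate, verification time).
from dataclasses import dataclass, field

@dataclass
class MatchReview:
    match_id: str
    flagged_at: float                        # seconds since epoch
    confirmations: set = field(default_factory=set)
    rejections: set = field(default_factory=set)
    closed_at: float | None = None

    def record(self, reviewer: str, confirmed: bool, at: float) -> None:
        (self.confirmations if confirmed else self.rejections).add(reviewer)
        if len(self.confirmations) >= 2 or self.rejections:
            self.closed_at = at

    @property
    def approved(self) -> bool:
        """Action is allowed only with two distinct confirming reviewers."""
        return len(self.confirmations) >= 2 and not self.rejections

def false_positive_rate(reviews):
    """Share of AI-flagged matches that human review ultimately rejected."""
    closed = [r for r in reviews if r.closed_at is not None]
    return sum(not r.approved for r in closed) / len(closed) if closed else 0.0

def mean_verification_seconds(reviews):
    closed = [r for r in reviews if r.closed_at is not None]
    return sum(r.closed_at - r.flagged_at for r in closed) / len(closed) if closed else 0.0

# Example with made-up timestamps:
r = MatchReview("m-001", flagged_at=0.0)
r.record("operator_a", confirmed=True, at=20.0)
r.record("operator_b", confirmed=True, at=45.0)
print(r.approved, false_positive_rate([r]), mean_verification_seconds([r]))
```

Logging every decision this way is what turns the principle into something measurable: the same records yield the false-positive and verification-time figures the article points to.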

Measuring Engagement and Value

The development of transparent reporting mechanisms provides another important metric for responsible technology. Organizations should establish reporting that documents not only successes but also challenges and lessons learned. This might include quarterly reviews of technology deployment impacts, stakeholder feedback sessions, and public reporting on responsible technology initiatives, among many other possibilities.

The return on investment in responsible technology requires a long-term view. Currently, many public sector tenders still weight price so heavily that it effectively determines the outcome, while responsible use might account for only 10 to 20 percent of the decision. This creates real challenges for companies committed to responsible innovation. However, we're seeing a generational shift.

Younger employees—Generation Z and Millennials—are bringing stronger purpose-driven expectations to the workplace. A recent report found that nine of 10 Gen Zs and Millennials say purpose is important to their job satisfaction, and they are increasingly likely to turn down work or employers that don’t align with their values. This shift in workforce values, combined with growing public awareness of technology’s impact, suggests that responsible use will become increasingly crucial for market access and business sustainability.

Charting the Path Forward

If you’re a security leader putting these measures in place, you can’t focus only on the technical; you need to balance it with the ethical, too. Real success comes when you go beyond the rules and regulations and show that you truly care about doing the right thing and about how technology affects society and people’s lives.

Moving forward, we need to look at the complete picture, both the numbers we can measure and the human elements we observe—everything from privacy protection to how well a company engages with its partners and customers. Customers can help shape this picture by actively participating in stakeholder feedback sessions, documenting their experiences with responsible technology implementations, and sharing their requirements for privacy protection and ethical use during the procurement process.

For the developer, success requires understanding the complex interplay between three key stakeholders: society, policymakers, and industry. Politicians often focus more on technology’s risks than its benefits, while businesses must balance innovation with responsibility. What’s needed is more open dialogue to develop regulations that focus on technology use rather than the technology itself. This would help build trust while creating meaningful products that truly serve society’s interests.

End users have a powerful role to play in shaping the future of responsible technology. By making responsible use a key criterion in their procurement processes, requiring transparency in AI deployments, and actively engaging with technology providers on ethical considerations, security leaders can drive meaningful change across the industry. When customers demand and reward responsible practices, they not only protect their own organizations but help create an environment where responsible innovation becomes the standard, not just an aspiration.

While we can measure many aspects of responsible technology, we can't put a number on our most important obligation—protecting basic human rights and the people of the societies we serve. As our industry continues to evolve, I believe the companies that will thrive are those that can demonstrate their genuine commitment to responsible innovation. It's not enough to just say we’re doing the right thing; we need to prove it through concrete actions and measurable results.

At the end of the day, being responsible isn’t just about maintaining our license to operate; it’s about earning the trust of the societies and people we serve.

 

Thomas Jensen, who is Danish, joined Milestone Systems headquarters in Copenhagen in October 2020 as CEO. Prior to that, he spent two years in Barcelona, Spain, as executive vice president at Bechtle, and five years in Silicon Valley, California, in various management roles within the technology industry, including head of worldwide channel sales strategy at HP. Before that, he held leadership positions at Danish companies Vestas and Maersk. Jensen has solid experience with responsible technology, international expansion, channel development, and go-to-market strategies.

 
