
Balancing AI Innovation and Regulation: The Potential Impact on U.S. Security Companies and Practitioners

Global regulations around artificial intelligence (AI) technologies are evolving rapidly. The European Commission led the charge in 2021 by proposing the EU AI Act, the first European Union regulatory framework that establishes clear guidelines for companies developing and deploying AI systems based on the level of risk they pose. The EU reached an official deal on the rules for the AI Act in December 2023. While the U.S. currently lacks comprehensive regulations, it will likely follow the EU’s lead soon as pressure mounts to regulate AI. For the security industry, the rise of AI regulation will significantly impact how technology vendors and users leverage machine learning and other AI capabilities in products and services.

The EU's risk-based framework for AI regulation aims to support trustworthy artificial intelligence that is safe, transparent, and respectful of existing law on fundamental human rights. Under the proposed act, which will potentially be enforced as early as 2025, AI systems are categorized into four levels of risk—low or minimal, limited, high, and unacceptable—with requirements and restrictions increasing with each level. Before deployment, the highest-risk AI applications must pass a conformity assessment by complying with a range of requirements, including risk management, testing, technical robustness, training data and data governance, transparency, human oversight, and cybersecurity.
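
As a rough illustration only (the act defines these obligations in legal text, not code), a vendor's product team might capture this tiered structure internally as a simple checklist lookup to see what a proposed feature would trigger. In the sketch below, the tier names follow the act, while the abbreviated obligation lists and the helper function are assumptions made for the example.

```python
from enum import Enum


class RiskTier(Enum):
    """Risk tiers described in the EU AI Act's proportional framework."""
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4


# Abbreviated obligation checklists per tier (illustrative, not the legal text).
OBLIGATIONS = {
    RiskTier.MINIMAL: [],
    RiskTier.LIMITED: ["disclose to users that they are interacting with an AI system"],
    RiskTier.HIGH: [
        "risk management system",
        "testing and technical robustness",
        "training data and data governance",
        "transparency documentation",
        "human oversight",
        "cybersecurity",
        "pre-deployment conformity assessment",
    ],
    RiskTier.UNACCEPTABLE: ["prohibited: may not be placed on the market"],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the checklist a product team would need to satisfy for a tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    for item in obligations_for(RiskTier.HIGH):
        print("-", item)
```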

By taking a proportional approach based on potential negative impact, the EU seeks to curb harmful AI applications while allowing the majority of beneficial use cases to flourish. Developers of lower-risk AI must also meet transparency obligations to inform users when they are interacting with chatbots or with AI systems that generate or manipulate image, audio, or video content. The alignment of innovation and responsibility lies at the heart of Europe's regulatory vision. And while the United States explores its own federal policies, it will likely gravitate toward a similar risk-based model.

AI in the Security Industry

AI and machine learning technologies have become integral to many applications and services within the security industry. Adoption in security continues to rise rapidly as the performance of these technologies improves and new applications emerge. The ability to extract insights from massive amounts of data provides security teams with augmented awareness of risks and events.

Video analytics relies on AI for tasks like pattern recognition, facial recognition, object classification, and anomaly detection within video data. Smart cameras equipped with machine learning can identify threats and suspicious behavior in real time. AI also unlocks valuable insights from security data through techniques like predictive analytics. AI technologies are finding use cases across access control, video security, alarm systems, and much more.
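
As a rough sketch of how such video analytics pipelines are commonly assembled (not any particular vendor's implementation), the example below runs a pretrained object detector over sampled video frames and flags person detections. It assumes OpenCV and torchvision are installed; the clip name sample.mp4 is a placeholder, and the sampling rate and confidence threshold are arbitrary choices.

```python
import cv2  # pip install opencv-python
import torch
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)
from torchvision.transforms.functional import to_tensor

# Load a detector pretrained on COCO; in COCO, class id 1 is "person".
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

cap = cv2.VideoCapture("sample.mp4")  # placeholder input clip
frame_idx = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame_idx += 1
    if frame_idx % 30:  # analyze roughly one frame per second of 30 fps video
        continue
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        detections = model([to_tensor(rgb)])[0]
    persons = [
        score.item()
        for label, score in zip(detections["labels"], detections["scores"])
        if label.item() == 1 and score.item() > 0.8
    ]
    if persons:
        print(f"frame {frame_idx}: {len(persons)} person(s) detected")
cap.release()
```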

While stricter regulations may increase costs associated with developing, testing, and monitoring AI systems, well-crafted policies will push the security industry toward more responsible AI practices that consider ethics and potential harm. This will benefit customers, individuals, and businesses over the long term as potential negative impacts are minimized. Regulations can also improve public trust and adoption of AI solutions as the technologies become more accountable, fair, and transparent in their workings and decisions.

How U.S. Companies Can Prepare

To prepare for impending AI regulations, security companies should track the implementation of the EU's AI Act and begin anticipating similar policies in the United States. Investing now in processes for developing responsible AI supported with robust testing, responsible data practices, and continuous monitoring will ease the transition later.

Within today’s emerging technology regulation environment, it would be prudent for any company developing AI to align its development processes with the following four pillars as a framework for responsibility:

Security and privacy by design and default. When creating AI systems, developers should prioritize user privacy and security from the start. This approach is outlined in GDPR Article 25, which requires data protection by design and by default. It means building safeguards into the technology to protect personal data, limit data collection and use, and ensure ethical practices. The goal is to create AI that respects privacy while still functioning properly.
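
One concrete way this principle can translate into practice is data minimization with pseudonymization before anything is stored. The sketch below is a minimal, assumed example; the event fields, the allow-list, and the PSEUDONYM_PEPPER environment variable are all hypothetical choices rather than prescribed ones.

```python
import hashlib
import os

# Hypothetical raw event as produced by an access control or video system.
raw_event = {
    "timestamp": "2024-01-15T08:42:00Z",
    "camera_id": "lobby-02",
    "subject_name": "Jane Doe",  # direct identifier
    "badge_id": "B-10492",       # direct identifier
    "event_type": "door_forced",
}

# Collect and retain only what the use case actually needs (data minimization).
ALLOWED_FIELDS = {"timestamp", "camera_id", "event_type"}
# Secret kept outside the data store so pseudonyms cannot be reversed from it alone.
PEPPER = os.environ.get("PSEUDONYM_PEPPER", "rotate-me")


def minimize(event: dict) -> dict:
    """Drop direct identifiers but keep a pseudonym so incidents stay linkable."""
    record = {k: v for k, v in event.items() if k in ALLOWED_FIELDS}
    record["subject_pseudonym"] = hashlib.sha256(
        (event["badge_id"] + PEPPER).encode()
    ).hexdigest()[:16]
    return record


print(minimize(raw_event))
```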

Human rights by design. When creating AI systems, developers should assess the potential impact on human rights and build safeguards into the technology to avoid negative consequences. This human rights by design approach involves being transparent about how AI systems make decisions affecting people's lives and proactively identifying any risks to rights like privacy, freedom of expression, or fair treatment. The goal is to create AI that respects human rights principles from the start of the design process.

Transparency. AI transparency helps ensure that all stakeholders—including users, regulators, and society at large—can clearly understand the inner workings and decision-making processes of an AI system. It involves being open about how the system was developed, how it was trained, what data is used, and how outcomes are reached. Transparent AI development procedures are crucial for upholding principles of responsible and safe technology, and respect for human rights.
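
One common way to make that openness concrete is to publish a structured "model card" alongside each deployed model. The sketch below shows a minimal, assumed structure; the field names and all of the example values are hypothetical placeholders, not a mandated format.

```python
import json
from dataclasses import asdict, dataclass, field


@dataclass
class ModelCard:
    """Minimal transparency record published alongside a deployed model."""
    model_name: str
    version: str
    intended_use: str
    training_data: str
    evaluation_summary: str
    known_limitations: list[str] = field(default_factory=list)
    human_oversight: str = ""


card = ModelCard(
    model_name="perimeter-anomaly-detector",  # hypothetical model
    version="1.3.0",
    intended_use="Flag unusual motion for human review; not for automated action.",
    training_data="Licensed video from 12 sites, 2021-2023; no biometric identity labels.",
    evaluation_summary="Precision and recall reported on a held-out multi-site test set.",
    known_limitations=[
        "Lower recall in heavy rain or snow",
        "Not evaluated on thermal cameras",
    ],
    human_oversight="All alerts are reviewed by an operator before any response is taken.",
)

print(json.dumps(asdict(card), indent=2))
```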

Fairness and inclusion. Fairness in responsible AI means ensuring the technology works equally well for all people regardless of individual or group characteristics like race, gender, age, or disability status. This requires proactively testing systems for biased or discriminatory outcomes against marginalized groups and designing AI models that account for diversity from the start. Considering inclusiveness also means making AI accessible and useful for people with a range of abilities.
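
A basic form of that bias testing is to compute the same performance metric separately for each group and compare the spread. The sketch below assumes a small, hypothetical evaluation set in which each record carries a protected group attribute; real evaluations would use far larger samples and more than one metric.

```python
from collections import defaultdict

# Hypothetical evaluation records: (true label, predicted label, group attribute).
eval_set = [
    (1, 1, "group_a"), (0, 0, "group_a"), (1, 0, "group_a"),
    (1, 1, "group_b"), (0, 0, "group_b"), (1, 1, "group_b"),
]


def per_group_accuracy(records):
    """Accuracy broken down by the protected attribute."""
    totals, correct = defaultdict(int), defaultdict(int)
    for y_true, y_pred, group in records:
        totals[group] += 1
        correct[group] += int(y_true == y_pred)
    return {g: correct[g] / totals[g] for g in totals}


scores = per_group_accuracy(eval_set)
print(scores)

# A large gap between groups is a signal to revisit training data and model design.
print(f"accuracy gap: {max(scores.values()) - min(scores.values()):.2f}")
```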

Here are some key actions developers can take now to prepare:

  • Carefully evaluate data sources and processing for biases or other discrimination issues that could propagate through machine learning models.
  • Invest in rigorous testing and validation of models prior to deployment and monitor them continuously after launch (a small monitoring sketch follows this list).
  • Develop well-documented processes for responsible AI to demonstrate accountability.
  • Treat evolving oversight as an opportunity for experimentation and ethical differentiation, and as a chance to get ahead of AI regulations as a leader in the field.
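
As one small example of the continuous monitoring called out in the second bullet, the sketch below compares a model's live confidence scores against a validation-time baseline and raises an alert when they drift apart. The baseline value, tolerance, and alert mechanism are assumptions; in practice they would come from an organization's own validation results and incident process.

```python
import statistics

BASELINE_MEAN = 0.82    # mean detection confidence measured at validation time (assumed)
DRIFT_TOLERANCE = 0.10  # how far the live mean may drift before alerting (assumed)


def check_drift(live_scores: list[float]) -> bool:
    """Return True and print an alert when live confidence drifts from the baseline."""
    live_mean = statistics.fmean(live_scores)
    drifted = abs(live_mean - BASELINE_MEAN) > DRIFT_TOLERANCE
    if drifted:
        print(
            f"ALERT: mean confidence {live_mean:.2f} vs baseline {BASELINE_MEAN:.2f}; "
            "trigger a model review"
        )
    return drifted


# Example: a week of live scores trending lower than the validation baseline.
check_drift([0.71, 0.65, 0.70, 0.68, 0.66, 0.72, 0.69])
```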

In general, AI solution companies need to adopt a responsibility-by-design approach to development by building ethics, fairness, and transparency into the design process from the outset. Companies should also explore partnerships with universities, non-profits, and vendors devoted to responsible AI to help facilitate regulatory dialogue and awareness.

Implementing robust internal processes is effective preparation, but avoiding unnecessary risk matters, too. Striking the right balance between innovation and responsibility means seeking out creative ways to deliver value with AI while respecting human rights and serving the public good. Sustaining that vision will keep security companies competitive amid the regulatory changes on the horizon.

Balancing Innovation and Responsibility

U.S. regulations on AI systems are imminent. On 30 October 2023, President Joe Biden signed an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Briefly stated, the order takes three main actions:

  • Builds on the White House's 2022 Blueprint for an AI Bill of Rights, setting out principles for how AI systems should be designed, developed, and used to respect people's civil rights and protect their privacy and data.
  • Directs federal agencies to assess AI risks, set standards for trustworthy AI, and publish reports on how they are using and protecting against misuse of AI.
  • Creates an interagency work group on AI to coordinate efforts across federal agencies and share best practices for AI safety, ethics, and oversight.

In line with regulatory efforts in other countries, the main actions of this U.S. Executive Order focus on protecting people's rights and privacy, requiring federal agencies to be more transparent and responsible in their AI use, and improving coordination on AI policy across the government.

While new regulations will require changes in how AI is built, tested, validated, and monitored, this does not need to stifle progress. By proactively implementing strong responsible AI practices throughout the development process, firms can continue pushing boundaries while prioritizing transparency and accountability.

The regulations on AI emerging in the EU and United States signal a focus on ensuring AI systems are trustworthy, secure, and developed responsibly with attention to mitigating harms. For security technology users, this means potentially more transparency from developers on how AI systems were built, their limitations, and the safeguards put in place. Users could see improved tools that better respect privacy and civil liberties protections.

To promote effective and responsible AI, users should communicate their needs and concerns to developers, while developers should engage users early in the development process and involve them in planning. Through collaboration rooted in shared values of responsibility and security, users and the tech sector can build an AI ecosystem that balances innovation's benefits with societal wellbeing.

With foundations like responsible data practices, rigorous testing, continuous monitoring, and a commitment to transparency firmly in place, security organizations can adopt AI confidently, safely, and responsibly. AI governance remains a journey—not a destination. But companies willing to evolve will thrive in the new era of AI regulation. Prioritizing responsible innovation practices and accountability today will enable security providers to unlock the immense potential of artificial intelligence for better protecting people and property tomorrow.

 

Rahul Yadav is the chief technology officer at Milestone Systems. He brings extensive experience building, transforming, and leading “Tech & Product” teams in B2B & B2C organizations across a diverse set of industries, including media, public IT, consumer electronics, and telecommunications. Prior to joining Milestone, Yadav held various leadership roles at Bang & Olufsen, TV 2 Danmark, KMD A/S, Texas Instruments, and Samsung Electronics. He holds a Global Executive MBA from INSEAD in Fontainebleau, France, and a Master of Technology (M.Tech.) in Digital Communication from the National Institute of Technology, Bhopal, India. Combining a strong track record as a technologist with solid leadership acumen, Yadav is well-equipped to ensure the continuous development and growth of Milestone's technology and products portfolio, as well as the organization.

 

For more about how artificial intelligence is affecting the security space, stay tuned for Security Technology's April 2024 issue about AI. Visit the Security Technology archives here.

 
