Understanding the EU AI Act: A Security Perspective
Almost six years after the European Union (EU) set the global standard for privacy regulation, it’s poised to make similar moves to regulate artificial intelligence (AI) systems and technologies.
The EU AI Act was originally proposed in April 2021 before being endorsed by the European Parliament on 13 March 2024 (523 votes in favor, 46 against, and 49 abstentions).
As of late March, the act was in its final review stage; once it becomes law, member states will issue guidance on its implementation.
“Considering the significant majority in the European Parliament vote, we do not foresee any member states withholding approval of the act,” says Dave McCarthy, program manager, government relations, Axis Communications, which is headquartered in Sweden. “Throughout the coming months, we will closely monitor the implementation of the EU AI Act, including the delegated acts and the emergence of new standards.”
Dragos Tudorache, civil liberties committee co-rapporteur and MEP representing Romania, said in a statement that the EU has now linked the concept of AI to the fundamental values that form the basis of member states’ societies.
“However, much work lies ahead that goes beyond the AI Act itself,” Tudorache said. “AI will push us to rethink the social contract at the heart of our democracies, our education models, labor markets, and the way we conduct warfare. The AI Act is a starting point for a new model of governance built around technology. We must now focus on putting this law into practice.”
Alongside the EU’s AI Innovation Package and Coordination Plan on AI, the AI Act will help guarantee the safety and fundamental rights of people and businesses in relation to AI.
“The AI Act is the first-ever comprehensive legal framework on AI worldwide,” according to the European Commission. “The aim of the new rules is to foster trustworthy AI in Europe and beyond, by ensuring that AI systems respect fundamental rights, safety, and ethical principles, and by addressing risks of very powerful and impactful AI models.”
The act covers entities in the EU but also applies to providers and deployers of AI systems outside the bloc who could be contracted to process data collected in and transferred from the EU. Lawmakers crafted the act in this way to “prevent the circumvention of this regulation,” the text of the AI Act explains.
The act does carve out exemptions for arrangements with public authorities in third countries that are carrying out tasks in support of law enforcement or judicial cooperation. The act also exempts providers and deployers of AI systems used solely for military, defense, and national security purposes.
The move positions the EU as a “trailblazer in establishing regulatory frameworks for AI,” says Chad Lesch, senior vice president, strategic projects, at Crisis24.
“The legislation employs a risk-based methodology, categorizing AI systems by their potential hazards and enforcing more stringent regulations on those deemed higher-risk,” he adds. “This approach seeks to harmonize technological advancement with the safeguarding of individual rights and safety, potentially influencing international norms and encouraging non-EU AI entities to adopt similar standards of self-regulation.”
The Security Baseline
For security practitioners, it’s especially important to understand how the EU AI Act defines terms and practices that are often part of their profession (a short sketch after this list illustrates the one-to-one versus one-to-many distinction that separates verification from identification):
- Biometric categorization system: An AI system for the purpose of assigning people to categories based on their biometric data.
- Biometric identification: Automated recognition of physical, physiological, behavioral, or psychological human features to identify people by comparing the biometric data of an individual to biometric data of individuals stored in a database.
- Biometric verification: The automated, one-to-one verification—including authentication—of the identity of people by comparing their biometric data to previously provided biometric data.
- Emotion recognition system: An AI system used to identify or infer emotions or intentions of people based on their biometric data.
- Sensitive operational data: Data related to activities of prevention, detection, investigation, or prosecution of criminal offenses, the disclosure of which could jeopardize the integrity of criminal proceedings.
- Publicly accessible space: Any publicly or privately owned physical place accessible to an undetermined number of people.
- Remote biometric identification system: An AI system used to identify people without their active involvement, typically at a distance, by comparing their biometric data with biometric data in a reference database.
- Real-time remote biometric identification system: A remote biometric identification system in which the capture, comparison, and identification of biometric data all occur without significant delay, whether instantly or after only a short delay.
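The one-to-one versus one-to-many distinction separating biometric verification from remote identification is easier to see in code than in legal prose. The following is a minimal, hypothetical Python sketch: the feature vectors, threshold, and function names are illustrative assumptions, not language from the act or the design of any real biometric product.

```python
# Hypothetical sketch: biometric features reduced to numeric vectors,
# with a made-up similarity threshold. Illustrative only.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity between two biometric feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

THRESHOLD = 0.85  # assumed match threshold, not a regulatory value

def verify(probe: list[float], enrolled: list[float]) -> bool:
    """Biometric verification: one-to-one comparison against biometric
    data the person previously provided (e.g., unlocking a device)."""
    return cosine_similarity(probe, enrolled) >= THRESHOLD

def identify(probe: list[float], database: dict[str, list[float]]) -> str | None:
    """Biometric identification: one-to-many search of a reference
    database, typically without the person's active involvement."""
    best_id, best_score = None, THRESHOLD
    for person_id, template in database.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id
```

Verification compares a probe against a single template the person enrolled, while identification searches an entire reference database, which helps explain why the act treats remote identification far more strictly.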
Another concept that security practitioners should already be familiar with is taking a risk-based approach, which is exactly what the EU AI Act does when it comes to regulating AI. It seeks to create obligations for technology based on its potential risks to humans.
- Minimal risk: No obligations for AI systems posing low risks.
- Limited risk: Transparency requirements for AI systems that interact with humans and generate content.
- High risk: Regulation of systems that could create adverse impacts to people’s safety or fundamental rights.
- Unacceptable risk: Banned harmful AI practices considered to be a clear threat to people’s safety, livelihoods, or rights.
Most AI systems currently used in the EU fall into the minimal-risk category and carry no additional obligations under the EU AI Act. For limited-risk AI applications, such as chatbots, the act introduces transparency requirements to ensure people know they are interacting with a machine. Providers must also label text, audio, and video content that is generated using AI.
Mark Mullison, chief technology officer for Allied Universal, says he finds the risk-based approach the EU is taking very interesting.
“I think it’s a useful way to look at things and, based on where AI and various AI models or systems find themselves in that hierarchy, it attracts progressively more oversight and regulation, and even in the top instance prohibits the use,” Mullison says. “It’s a very interesting approach. It’s very well thought out, very thorough, so we’ll see how it plays out.”
Key to this approach is focusing on how a particular AI system is used and the potential risk it poses. Take predictions, for instance, which security practitioners have leveraged when it comes to resource allocation or staffing decisions.
“If that predictor is telling you whether a client is going to dispute an invoice, well that’s pretty low risk and doesn’t really attract much oversight,” Mullison explains. “If that same technique is used not to predict the late payment of an invoice, but to predict whether somebody would be a good fit for a job—well now that raises the classification in the risk hierarchy and attracts more attention. It really depends on the application.”
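Mullison’s example points to one practical consequence: compliance teams may end up encoding a first-pass, use-case-to-tier triage step before any legal review. The mapping below is a hypothetical sketch; the tier assignments are illustrative assumptions, not the act’s official classification logic.

```python
# Hypothetical first-pass triage of AI use cases into the act's four
# risk tiers. A real assessment would follow the act's annexes and
# legal advice, not a lookup table.
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"            # no additional obligations
    LIMITED = "limited"            # transparency requirements
    HIGH = "high"                  # full compliance regime
    UNACCEPTABLE = "unacceptable"  # prohibited practice

USE_CASE_TIERS = {
    "invoice_dispute_prediction": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,
    "candidate_screening": RiskTier.HIGH,          # employment decisions
    "workplace_emotion_inference": RiskTier.UNACCEPTABLE,
}

def triage(use_case: str) -> RiskTier:
    """Unknown use cases default to high risk pending human review."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

print(triage("invoice_dispute_prediction").value)  # minimal
print(triage("candidate_screening").value)         # high
```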
Unacceptable Risks
AI systems that are a “clear threat to the safety, livelihoods, or rights of people will be banned, from social scoring by governments to toys using voice assistance that encourages dangerous behavior,” according to the EU Commission.
Looking at the EU AI Act text through a security lens, the legislation bans certain AI applications from the EU marketplace that could be used to categorize people.
Some unacceptable AI applications—such as social scoring systems—use data outside the context it was originally gathered for, and they could lead to detrimental or unfavorable treatment of people. AI systems used for risk assessments may also fall into the unacceptable risk category if they are used to assess an individual’s likelihood of committing a criminal offense. The act makes an exception, however, for using these types of systems to support a human assessment of a person’s involvement in criminal activity.
Other prohibitions include banning AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from “the Internet or CCTV footage” because this practice “adds to the feeling of mass surveillance and can lead to gross violations of fundamental rights, including the right to privacy,” the act explained.
Technology regulation requires a balance between encouraging innovation and protecting the public, says Fredrik Nilsson, vice president, Americas, Axis Communications.
“When it comes to video surveillance, the EU AI Act places some restrictions on using facial recognition in public places, similar to what we have seen with some U.S. states and cities,” Nilsson adds. “It is good to see that some distinctions have been made based on applications and not the technology. It’s important to remember that facial recognition is used by most of us every day in applications like Face ID and for business operations, like airport security and border control.”
Also on the unacceptable risk list are AI systems used to infer people’s emotions in the workplace or in an educational institution, except when used for medical or safety reasons.
A major area for security practitioners to review is their use of biometric categorization systems. These now fall into the unacceptable risk category if they use people’s biometric data to infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. There is an exemption for labeling or filtering lawfully acquired biometric datasets, such as images, in the area of law enforcement.
Quang Trinh, business development manager, platform technologies, Axis Communications, says it’s difficult to tell how the EU AI Act will affect the deployment and use of AI-based biometric identification systems.
“That said, biometric data is used in many consumer and commercial systems, so I expect that there will be increased feedback from private entities during the implementation of the law,” adds Trinh, who is also co-chair of the ASIS International Emerging Technology Community Steering Committee. “This discourse is sure to encourage risk assessments and address privacy concerns in order to ensure safe and lawful use of biometric data as a component of a safety and security system.”
Under the act, using AI systems for “real-time” remote biometric identification of people in publicly accessible spaces for law enforcement purposes is generally prohibited. But there are exceptions to this rule, too. Law enforcement can use real-time identification in defined situations—such as searching for victims of crime, including reported missing people and human trafficking victims; responding to threats to the life or physical safety of people, including terrorist attacks; and identifying perpetrators of designated criminal offenses—with authorization from a judicial authority or an independent administrative authority.
Law enforcement is also limited to deploying these systems to confirm their targets’ identity, with additional limits on the time, geography, and personal scope for the system’s use.
“The use of the real-time biometric identification system in publicly accessible spaces should be authorized only if the relevant law enforcement authority has completed a fundamental rights impact assessment and…registered the system in the database as set out in this regulation,” according to the act.
National market surveillance authorities and national data protection authorities are then required to submit annual reports to the EU Commission about how law enforcement is using real-time biometric identification systems.
It’s unclear how these provisions will affect private security activities today, but Mullison says they might influence how organizations explore more advanced security methods.
“For instance, some of the most advanced security programs try to understand individuals and behavior,” he explains. “If you start to, through your video analytics, look for people who are agitated or look for people who fit certain characteristics, that likely drops the application into the higher-risk category and either prohibits it or attracts a lot of overhead and oversight, depending on the specifics of what’s going on.”
High Risks
The EU AI Act categorizes high-risk AI systems partly by the sector in which they are used. AI technology used in critical infrastructure, educational or vocational training, safety components of products, employment, essential private and public services, law enforcement, border control management, and the administration of justice and democratic processes could be considered high risk.
These types of AI systems may only be placed on the EU market and used if they comply with mandatory requirements, including a risk management system, data governance requirements, technical documentation mandates, record-keeping across their lifetime, transparency obligations toward deployers, human oversight measures, and accuracy, robustness, and cybersecurity requirements.
These requirements also extend to robotics that use AI to move or complete tasks, which are now subject to certain high-risk obligations.
“For instance, increasingly autonomous robots, whether in the context of manufacturing or personal assistance and care should be able to safely operate and perform their functions in complex environments,” the act explained.
Systems that rely on biometric data are classified as high risk because they process sensitive personal data. If such a system produces an inaccurate result, for example, it can lead to biased or discriminatory effects for an individual. But this classification is not universal.
“Biometric systems which are intended to be used solely for the purpose of enabling cybersecurity and personal data protection measures should not be considered to be high-risk systems,” the act clarified.
Classifying biometric identification systems as high-risk will significantly impact their use in Europe, Lesch says.
“Companies will face stricter compliance, leading to higher costs and a need for more robust oversight,” he explains. “The act limits biometric use in public spaces, pushing firms to seek alternative security methods or innovate within compliance boundaries.”
Lesch adds that this might help larger entities that can absorb higher compliance costs but could sideline smaller players in the market.
“However, these regulations could also boost consumer trust by ensuring biometric technologies are used transparently and securely, aligning with privacy and ethical standards,” Lesch says.
AI systems that manage or operate critical infrastructure are also considered high-risk systems because their potential failure would put people’s lives and health at risk at a large scale. But the act does carve out an exemption for components used “solely for cybersecurity purposes” to not be qualified as safety components, and therefore not considered high-risk systems.
“Examples of safety components of such critical infrastructure may include systems for monitoring water pressure or fire alarm controlling systems in cloud computing centers,” the act added.
Employment-related AI systems also meet the high-risk mark, particularly those systems used for recruitment, promotion, and termination processes because they could impact an individual’s future career prospects, their livelihood, or workers’ rights.
Additionally, AI systems that are used to classify and evaluate emergency calls—such as to establish priorities for dispatching emergency response services—meet the high-risk threshold because these systems are used in critical situations for the life and health of people and their property.
Complying with the Act
At Crisis24, which is owned by Canada-based GardaWorld, Lesch provides a streamlined look at how the company will maintain compliance with the EU AI Act when it comes into effect. This will include conducting risk assessments to determine if any of its AI systems fall under the high-risk category and developing a comprehensive strategy to align all AI-related operations with the act’s requirements.
“This might include revising AI deployment strategies, maintaining our platinum level data protection, and implementing transparent AI decision-making processes,” he adds.
Crisis24 will enhance its data governance protocols to comply with the act’s provisions on data quality, storage, and processing to ensure AI systems are used in a lawful, transparent, and secure manner. Lesch says that the company will maintain its ethical AI framework and continue training programs to ensure employees are aware of the act’s requirements and compliance procedures.
Additional efforts will include ensuring third-party vendors are compliant with the act, setting up mechanisms for ongoing monitoring of the company’s AI systems’ compliance, and continuing to engage with legal experts to update Crisis24’s compliance measures so they evolve with the implementation of the AI Act.
While Allied Universal is a U.S. company, it does business in 90 countries around the world—including EU member states—and has more than 800,000 employees. When it acquires AI technology or builds and applies its own, compliance with the EU AI Act is now something that will have to be considered, Mullison says.
“We’re not terribly concerned with that because we have tried since the beginning to be transparent, ethical, and establish a governance process that internally makes sure that we’re doing the right things,” he adds.
Allied has an internal governance process that involves stakeholders from its Legal, HR, Compliance, Operations, and Technology teams getting together to discuss and evaluate AI initiatives, ensuring they meet its requirements and match its ethical approach to AI.
For instance, Allied began using AU Hire Smart—an AI model that helps screen job applicants—in 2020 just as the COVID-19 pandemic began. When applying for a position, applicants can schedule a traditional in-person interview, a video interview with another person, or a video interview that is evaluated by the AI system for the best potential job fit. Roughly one-third of applicants select the AI-evaluated interview option, which is designed to help speed up the hiring process.
“What it does is it gets through the screening process and gets somebody to the front of the line,” Mullison says. “We trained the model based on a set of carefully crafted questions, which we asked of several thousand of our existing high-performing security professionals.”
Based on those answers, AU Hire Smart classifies whether applicants appear to be a strong fit. After the AI screening, a human takes the next steps to move the applicant through the hiring process.
Part of the company’s process is to regularly review the system to ensure it’s not creating an adverse impact on candidates while continuing to make good—and fair—decisions on how to classify candidates for their potential fit.
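Mullison does not describe Allied’s review methodology, but one widely used test for adverse impact in hiring is the “four-fifths rule” from U.S. employment-selection guidance: compare each group’s selection rate to the highest group’s rate and flag any ratio below 0.8. Here is a minimal sketch with invented numbers:

```python
# Adverse-impact check using the four-fifths rule. The applicant
# counts below are invented for illustration.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes[group] = (selected, applied); returns selection rate per group."""
    return {g: sel / app for g, (sel, app) in outcomes.items()}

def adverse_impact_flags(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Flag groups whose selection-rate ratio to the top group is below 0.8."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items() if rate / top < 0.8}

example = {
    "group_a": (120, 400),  # 30 percent selected
    "group_b": (60, 300),   # 20 percent selected -> ratio 0.67, flagged
}
print(adverse_impact_flags(example))
```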
“There are really two sides to the coin of evaluating the impact of AI,” Mullison says. “There’s a potential negative that you have to make sure that you’ve got structures in place to avoid, but then in doing it you get a lot of positives like understanding consistently that decisions are being made the way you want them to be made.”
Meanwhile at Axis, the company has been closely following the development of the EU AI Act and providing feedback to authorities and lawmakers, Nilsson says.
“As a global company, Axis is of course intent on abiding by all local and global regulations, and the EU AI Act is no different,” Nilsson adds. “We took a similar approach with GDPR by carefully implementing the EU regulations on a global level.”
Future Ramifications
The EU AI Act will enter into force 20 days after it is published in the Official Journal of the EU. Six months after entry into force, the act’s prohibitions on unacceptable-risk AI take effect, with additional enforcement dates stretching through 2030. Given the breadth of the EU AI Act and the lengthy implementation timeline, it will take time to understand how it will affect the security landscape in Europe and beyond.
“It’s very broad, it’s very thorough, and on one hand that’s a good thing,” Mullison says. “But whenever you take such a big swing, when that meets the specifics of all the different use cases, the impact remains to be seen.”
Ashley Casovan, managing director of the International Association of Privacy Professionals (IAPP) AI Governance Center, says the AI Act is very specific in certain areas—such as how law enforcement can use biometric systems for policing—but less detailed in others. She anticipates that more information and detail will come out in implementing acts, including the development of standards.
“Given that the EU AI Act identifies that there are different types of AI systems and context that they are being used in, having a one-size-fits-all risk assessment is going to be difficult to create a standard for,” she explains.
The EU Commission has the authority to issue delegated acts on how an AI system is defined, criteria and use cases for high-risk AI, and thresholds for general-purpose AI models with systemic risk. The commission can also provide guidance on implementing the requirements for high-risk AI, transparency obligations, prohibited AI practices, and more, according to the IAPP.
One area that Lesch anticipates changing is how entities use biometric identification systems. He foresees European organizations adopting alternative, less intrusive authentication technologies, such as mobile-based authentication, physical security tokens, cryptography-based methods, and one-time passwords.
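One of those alternatives, the time-based one-time password (TOTP, RFC 6238), is compact enough to sketch using only Python’s standard library. The shared secret below is a throwaway example; in production, a vetted library such as pyotp would be the safer choice.

```python
# Minimal TOTP (RFC 6238) sketch for illustration; not production code.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a base32 shared secret."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period               # current time step
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # throwaway example secret
```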
Outside of Europe, Lesch says multinational corporations may find it more practical to adopt uniform AI policies that comply with the strictest regulations that they are subject to, which in many cases might be the EU’s. He also anticipates that the act may influence global supply chains since companies producing AI components or software will need to ensure their products are compliant with the EU’s regulation.
“This may come at a significant cost, or companies might choose it is no longer cost feasible to do business with regions with more stringent regulations,” Lesch adds.
The EU AI Act could also influence the future of where AI research and development are focused.
“This may risk shifting global R&D progress towards countries with less stringent regulations, creating a significant disruption to the geopolitical balance of power across military, economic, and geopolitical arenas,” Lesch says. “While the direct legal jurisdiction of the EU AI Act is limited to the EU, these indirect effects could lead to broader changes in how AI, especially in security and biometrics, is developed and used worldwide.”
Trinh adds that the EU AI Act may create global understanding and help establish international standards for data quality, privacy, transparency, and interoperability.
“What is for sure is that the EU AI Act has put a spotlight on AI, and it will encourage guidelines and regulations beyond the EU,” Trinh says.
Additionally, Trinh says that he foresees the EU AI Act having a similar influence on AI policy that the GDPR had on data privacy regulation outside of Europe.
“The world is increasingly more interconnected and technological solutions serve a global audience—so the same high, ethical standards should exist everywhere,” Trinh adds. “Additionally, entities such as the National Institute of Standards and Technology, or NIST, are working with the AI community to build the technical requirements to ensure AI systems are accurate, reliable, and safe. Suffice to say that a normative effect is likely and will follow the emergence of new standards.”
Megan Gates is editor-in-chief of Security Technology and senior editor of Security Management. Connect with her at [email protected] or on LinkedIn. Follow her on Threads or X: @mgngates.