Illustration by iStock; Security Management

EU Passes Artificial Intelligence Act

European Union (EU) legislators voted on a final version of an artificial intelligence (AI) law that regulates products and services relying on AI, sorting them into four categories of potential risk.

The Artificial Intelligence Act received final approval from the European Parliament on 13 March, and the new rules are slated to officially become law later this year, in either May or June, after a final check and formal approval from the European Council.

The new law “aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field,” according to a press release issued on 13 March by the European Parliament. The measure passed with 523 members voting in favor, 46 against, and 49 abstaining.

Applications in the low-risk category are exempt from regulatory control; the companies behind them can decide independently whether to follow voluntary requirements and codes of conduct. Examples of low-risk applications of AI include content recommendation systems, spam filters, and AI use in video games.

From there, the categories identify applications and products that increase in risk—and therefore also increase in scrutiny under the new regulations.

High-risk applications include AI used in medical devices or critical infrastructure systems, such as electrical grids or water networks. These systems will be required to use high-quality data and provide clear information to users.

Unacceptable-risk applications are banned under the AI Act, TIME reported. Examples of banned systems or applications include social scoring systems that govern how people behave, some types of predictive policing, emotion recognition systems in schools and workplaces, and most instances of law enforcement using AI-reliant biometric identification systems to scan faces in public. Exceptions to this ban include when there is a link to a serious crime, such as terrorism or kidnapping—these exceptions are further limited to a specific time and place. AI that manipulates human behavior or would exploit someone’s vulnerabilities is also considered unacceptable.

The new law’s provisions will take effect in stages: the 27 nations making up the EU must ban AI uses deemed unacceptable within six months of enactment. Each nation will also be responsible for establishing regulatory sandboxes and real-world testing arrangements.

“Some EU countries have previously advocated self-regulation over government-led curbs, amid concerns that stifling regulation could set hurdles in Europe’s progress to compete with Chinese and American companies in the tech sector,” CNBC reported. “Detractors have included Germany and France, which house some of Europe’s promising AI startups.”


For more about AI and its applications and risks within the security space, check in on 1 April for the forthcoming edition of Security Technology.