
EU Reaches Deal for Comprehensive AI Rules

After a marathon debate, negotiators in the European Parliament and representatives from the bloc’s 27 member countries reached a provisional agreement on the Artificial Intelligence Act last week. The agreed-upon regulation is risk-based, establishing rules for AI applications according to their potential risks and level of impact, banning some applications outright while restricting others.

The comprehensive AI rules pave the way for “legal oversight of AI technology that has promised to transform everyday life and spurred warnings of existential dangers to humanity,” TIME reported. The rules are expected to serve as a blueprint for other nations and continents as they hash out what AI regulations mean.

To reach the agreement, negotiators had to overcome significant differences on generative AI and police use of facial recognition surveillance, balancing the needs for privacy, safety, and innovation.

The accord requires foundation models such as ChatGPT (which other AI systems can leverage to build tools) and general purpose AI systems to comply with transparency obligations before being put on the market, Reuters reported. High-impact models with systemic risk—such as those used in critical infrastructure, medical devices, and law enforcement—will face more stringent requirements, including conducting model evaluations, assessing and mitigating risks, conducting adversarial testing or red teaming, and reporting on energy efficiency.

Applications that pose limited risk, such as content recommendation systems or spam filters, would only need to follow light rules, such as revealing that they are powered by AI, the Associated Press explained.

Surveillance tools are strictly curtailed under the new rules. Governments are only allowed to use real-time biometric surveillance in public spaces in the cases of certain crimes (such as kidnapping); the prevention of genuine, present, or foreseeable threats (such as terrorism); and searches for people suspected of extremely serious crimes.

Consumers have the right to lodge complaints and receive meaningful explanations for AI use. Fines for violations would range from 7.5 million euros or 1.5 percent of turnover to 35 million euros or 7 percent of global turnover.

Experts in the EU began hashing out details of the AI Act—including the scope of the laws and how they will work—on 12 December, and more than 11 technical meetings are scheduled over the next few months, Reuters reported. The European Parliament will need to vote on the act in early 2024, but it’s considered largely a rubber stamp exercise since the deal is already done. The law wouldn’t fully take effect until 2025 at the earliest.

Which AI Applications Are Banned?

Co-legislators agreed to prohibit:

  • Biometric categorization systems that use sensitive characteristics, including political, religious, or philosophical beliefs or affiliations, sexual orientation, and race
  • Untargeted scraping of facial images from the Internet or video surveillance footage to create facial recognition databases
  • Emotion recognition in the workplace and educational institutions
  • Social scoring based on social behavior or personal characteristics
  • AI systems that manipulate human behavior to circumvent people’s free will
  • AI used to exploit individuals’ vulnerabilities due to their age, disability, or socioeconomic situation

What Counts as High-Risk?

AI systems and applications that are deemed high-risk by the new rules face increased scrutiny and stricter requirements. They will need to demonstrate high levels of cybersecurity, risk assessment and mitigation, and due diligence before coming onto the market. These systems and applications will also need to be registered in a public EU database, according to a fact sheet from the International Association of Privacy Professionals (IAPP).

These systems and applications include:

  • Medical devices
  • Vehicles
  • Recruitment, HR, and worker management tools
  • Education and vocational training
  • Systems used to influence elections and voter behavior
  • Access to services, such as banking, insurance, credit, or benefits
  • Critical infrastructure management
  • Emotion recognition systems
  • Biometric identification
  • Law enforcement, border control, migration, and asylum
  • Administration of justice
  • Specific products and/or safety components of specific products

What Has the Reaction Been?

Privacy, security, technology, and legal advocates have spoken out after the deal was announced, all speculating on what the AI Act will mean for their industry and the world.

The Security Industry Association (SIA) said that “avoiding a sweeping, categorical ban on biometric identification systems, particularly facial recognition technology, is a step in the right direction.” In addition, SIA “welcomes the refinement of the restrictions on AI systems used for categorization and other analytics to more specific use cases of concern, and that inherently low-risk applications of biometric technologies for user verification and similar functions are not subjected to high-risk requirements.”

Ashley Casovan, privacy expert and managing director of IAPP’s Artificial Intelligence Governance Center, noted in emailed commentary that, “The past few days’ marathon trilogue negotiations signal the monumental nature of the EU AI Act. It will have a massive impact on all aspects of the global digital economy. The GDPR [General Data Protection Regulation] changed the digital economy—not just in terms of fines and shifts in compliance, but in terms of business models. We expect the impact here will be bigger. The EU AI Act will require greater efforts to operationalize than the GDPR, demanding organizations address the risks of powerful new technologies that they are just beginning to understand and implement. Hundreds of thousands of AI governance professionals are urgently needed to ensure AI systems are developed, integrated, and deployed in line with the EU AI Act and emerging AI laws globally.”

Some groups claim that the regulation could stifle innovation within the EU. According to Alberto Di Felice from DIGITALEUROPE, “Because this is placement-on-the-market legislation, that means a lot of innovative products won’t make it to market. That’s not only chatbots, but also critical stuff like medical devices and industrial machines. I expect this to be the beginning of our process to realize how onerous this will all be, and of hard work in the coming years to solve some of the challenges—and possibly correct some of the mistakes we’ve made,” he told IAPP.

In a press statement, Information Technology and Innovation Foundation Vice President Daniel Castro said that the AI Act is premature. “Given how rapidly AI is developing, EU lawmakers should have hit pause on any legislation until they better understand what exactly it is they are regulating,” he said. “There is likely an equal, if not greater risk of unintended consequences from poorly conceived legislation than there is from poorly conceived technology. And, unfortunately, fixing technology is usually much easier than fixing bad laws.

“The EU should focus on winning the innovation race, not the regulation race,” Castro continued. “AI promises to open a new wave of digital progress in all sectors of the economy. But it is not operating without constraints. Existing laws and regulations apply, and it is still too soon to know exactly what new rules may be necessary. EU policymakers should re-read the tale of the tortoise and the hare. Acting quickly may give the illusion of progress, but it does not guarantee success.”

Some privacy advocates are unhappy that the AI Act does not wholly ban facial recognition applications in video surveillance.

“It’s hard to be excited about a law which has, for the first time in the EU, taken steps to legalize live public facial recognition across the bloc,” Ella Jakubowska, senior policy advisor at privacy rights group European Digital Rights, told Reuters. “Whilst the Parliament fought hard to limit the damage, the overall package on biometric surveillance and profiling is at best lukewarm.”
