EU Commission Unveils Regulations for High-Risk Artificial Intelligence Technologies
The European Union released a proposed set of artificial intelligence (AI) regulations this week that would create rules for the development of high-risk uses of the technology and ban some uses outright.
The draft regulations are the first of their kind in the world and are part of what EU officials call a four-level “risk-based approach” to balance privacy rights against the need for innovation.
“With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” said Margrethe Vestager, the European Commission’s executive vice president for the digital age, in a statement. “By setting the standards, we can pave the way for ethical technology worldwide and ensure that the EU remains competitive along the way. Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”
We want to use more #TrustworthyAI! For health, fighting climate change, convenience in everyday life - if we can trust #AI not to put our fundamental rights at risk. This is today's proposal - and that we become excellent in developing #TrustworthyAI https://t.co/eocNIph1TN (Margrethe Vestager, @vestager, April 21, 2021)
The proposed regulations ban AI systems that are considered a threat to people’s safety, livelihood, or rights, such as AI systems or applications that manipulate human behavior to circumvent free will or systems that allow social scoring by governments.
Other AI that falls into the high-risk category under the EU’s definition includes systems or applications used for critical infrastructure, educational or vocational training, safety components of products, employment, essential private and public services, law enforcement, border control management, and administration of justice and democratic processes.
Systems or applications considered high-risk will need to meet a series of obligations before they can be put on the market: adequate risk assessment and mitigation systems; high-quality datasets to limit discriminatory outcomes; activity logging to ensure traceability of results; detailed documentation of the system; clear and adequate information provided to users; appropriate human oversight to minimize risk; and high levels of robustness, security, and accuracy.
“In particular, all remote biometric identification systems are considered high risk and subject to strict requirements,” according to an EU Commission press release. “Their live use in publicly accessible spaces for law enforcement purposes is prohibited in principle. Narrow exceptions are strictly defined and regulated (such as where strictly necessary to search for a missing child, to prevent a specific and imminent terrorist threat, or to detect, locate, identify, or prosecute a perpetrator or suspect of a serious criminal offense).”
To supervise the proposed rules, the EU Commission has suggested that member states' national competent market surveillance authorities enforce them; the Commission would also create a European Artificial Intelligence Board to facilitate their implementation and drive the development of additional AI standards. Violators that fail to fix their products or remove them from the marketplace could be fined up to 30 million euros (approximately $36 million) or 6 percent of their global annual revenue, whichever is higher.
“AI is a means, not an end. It has been around for decades but has reached new capacities fueled by computing power,” said EU Commissioner for Internal Market Thierry Breton in a statement. “This offers immense potential in areas as diverse as health, transport, energy, agriculture, tourism, or cybersecurity. It also presents a number of risks. Today’s proposals aim to strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”
Friend or foe?— Thierry Breton (@ThierryBreton) April 21, 2021
Here’s how 🇪🇺 will regulate #ArtificialIntelligence:
✔️invest in EU excellence
✔️ensure a predictable legal framework for startups & companies to make the most out of industrial potential
✔️protect consumers & citizens #AI #Trust https://t.co/XO0h71IYKW pic.twitter.com/213KUKegGT
The rules are a major step in regulating the use of AI, and they must be formally adopted by the European Parliament and the European Council before going into effect. But they also contain loopholes that could leave citizens vulnerable, according to Ella Jakubowska, policy and campaigns officer at European Digital Rights, in an interview with WIRED.
“The proposed regulations suggest, for example, prohibiting ‘high risk’ applications of AI, including law enforcement use of AI for facial recognition—but only when the technology is used to spot people in real time in public spaces,” according to WIRED. “This provision also suggests potential exceptions when police are investigating a crime that could carry a sentence of at least three years.
“So Jakubowska notes that the technology could still be used retrospectively in schools, businesses, or shopping malls, and in a range of police inquiries.”
We are living in the middle of a new arms race between global powers as technologists, scientists, and military leaders work to develop artificial intelligence (AI) applications. https://t.co/13NTj97wGT (Security Management, @SecMgmtMag, April 1, 2021)
If approved, the regulations would apply both inside and outside the EU whenever a system or application affects people in the EU. Adoption of the regulations could be seen as a “third way” for Europe, which is competing with China and the United States to set AI policy for the world.
“As Vestager noted at the press conference, the bloc wishes to distinguish itself with fair and ethical applications of AI, and this proposal is still the biggest step toward regulation in line with those values,” The Verge reports.