The EU Steps Up with AI Regulations

The European Union recently passed regulations that guide the use of artificial intelligence while staying hands-off on development.

Rob Spiegel

April 23, 2024


At a Glance

  • EU offers different levels of acceptable AI use.
  • Details of accountability are still in the works.
  • So far, businesses look on the regulations favorably.

In March 2024, the European Parliament gave final approval to the Artificial Intelligence Act, a collection of wide-ranging rules designed to govern artificial intelligence. Senior European Union (EU) officials said the rules, first proposed in 2021, will protect citizens from the possible risks of a technology that’s developing at breakneck speed. While the rules govern the use of AI, the EU also wants to foster innovation in its development.

Europe’s action covers everything from health care to policing. It bans some “unacceptable” uses of the technology and imposes safeguards on “high-risk” applications.

Magnus Tagtstrom, VP of emerging tech at Iterate.ai, sees the Act as a positive way of addressing possible negative issues in AI development. “The EU is trying to reduce the risk of AI without losing its benefits. They don’t want the regulations to hold back the technology,” Tagtstrom told Design News. “It’s more about making sure the use is not harmful and that AI gets tested before it’s deployed. Everyone seems to understand that this is a defining technology and that we need to get a handle on it.”

Here’s a quick breakdown of the regulations:

The Goal for AI Guards

The EU’s stated priority was to “make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly.” The EU also noted that “AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.”

One defining characteristic of the guidelines is that they seek to mitigate potential harmful effects without stifling innovation. “The Act has to do with the use of technology rather than the development of the technology,” said Tagtstrom. “It covers what products you are developing, what their best use is, and what their compliance is. There doesn’t seem to be any contradiction between what’s good in product development and what’s in the regulations.”

Determining the Level of Risk

The EU's regulations offer rules for different risk levels.

Unacceptable Risk

AI systems deemed a threat to people will be banned. They include:

  • Cognitive behavioral manipulation of people or specific vulnerable groups, such as voice-activated toys that encourage dangerous behavior in children

  • Classifying people based on behavior, socio-economic status, or personal characteristics

  • Biometric identification and categorization of people

  • Real-time and remote biometric identification systems, such as facial recognition

Some exceptions may be allowed for law enforcement purposes. Remote biometric identification systems will be allowed in a limited number of serious cases, while “post” remote biometric identification systems, where identification occurs after a significant delay, will be allowed to prosecute serious crimes and only after court approval.

High Risk

AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories:

1. AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices, and lifts.

2. AI systems falling into specific areas that will have to be registered in an EU database:

  • Management and operation of critical infrastructure

  • Education and vocational training

  • Employment, worker management and access to self-employment

  • Access to and enjoyment of essential private services and public services and benefits

  • Law enforcement

  • Migration, asylum, and border control management

  • Assistance in legal interpretation and application of the law

All high-risk AI systems will be assessed before being put on the market and throughout their lifecycle. People will have the right to file complaints about AI systems to designated national authorities.
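As a rough illustration of the tiered structure described above, one could model the Act’s risk levels as a lookup in code. This is only a sketch: the tier names, descriptions, and use-case mapping below are assumptions for illustration, not the Act’s official terminology, which is defined in legal text rather than as a table.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative risk tiers loosely mirroring the AI Act's structure."""
    UNACCEPTABLE = "banned outright"
    HIGH = "assessed before market entry and throughout the lifecycle"
    LIMITED = "subject to transparency requirements"
    MINIMAL = "largely unregulated"


# Hypothetical mapping of example use cases to tiers, drawn from the
# categories discussed in this article; not an official classification.
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "real-time facial recognition": RiskTier.UNACCEPTABLE,
    "medical device control": RiskTier.HIGH,
    "border control management": RiskTier.HIGH,
    "chatbot content generation": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}


def tier_for(use_case: str) -> RiskTier:
    """Return the illustrative tier for a use case, defaulting to MINIMAL."""
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
```

Under the Act, of course, the classification is a legal determination made case by case, not a simple dictionary lookup; the sketch only shows how the four-tier idea fits together.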

Transparency Requirements

Generative AI, like ChatGPT, will not be classified as high-risk, but will have to comply with transparency requirements and EU copyright law:

  • Disclosing that the content was generated by AI

  • Designing a model to prevent it from generating illegal content

  • Publishing summaries of copyrighted data used for training

High-impact general-purpose AI models that might pose systemic risk, such as the more advanced AI model GPT-4, would have to undergo thorough evaluations, and any serious incidents would have to be reported to the European Commission.

Content that is either generated or modified with the help of AI, such as images, audio, or video files (for example, deepfakes), will need to be clearly labelled as AI-generated so that users are aware when they come across such content.
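A minimal sketch of what such machine-readable disclosure might look like is shown below. The field names are assumptions for illustration; the Act does not prescribe a specific labelling format, and real-world schemes (such as content-credential standards) define their own metadata.

```python
import json
from datetime import datetime, timezone


def label_ai_content(payload: bytes, model_name: str) -> dict:
    """Wrap generated content with an illustrative provenance record.

    The field names here are hypothetical; this is not an official or
    standardized disclosure format.
    """
    return {
        "ai_generated": True,
        "model": model_name,
        "labelled_at": datetime.now(timezone.utc).isoformat(),
        "content_bytes": len(payload),
    }


record = label_ai_content(b"<image data>", "example-model")
print(json.dumps(record, indent=2))
```

The point of the sketch is simply that disclosure can travel with the content as structured metadata rather than as a visible watermark alone.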

Supporting Innovation

The law aims to offer start-ups and small and medium-sized enterprises opportunities to develop and train AI models before their release to the general public. That is why it requires that national authorities provide companies with a testing environment that simulates conditions close to the real world. “There has been a lot of skepticism about whether the regulations will hold back our innovation,” said Tagtstrom. “So far, it looks like the regulations will keep up with the pace of innovation without holding back the good that innovation can bring.”

There have also been questions about the Act’s enforcement, the details of which are still being worked out. “It will have real teeth. Everyone is watching the space,” said Tagtstrom.

The Future for Regulations

Following its official adoption in March 2024, the AI Act will be subject to the EU Council’s formal endorsement before becoming law. The AI Act is likely to enter into force at the end of April or early May of 2024. EU officials have commented that going forward, the EU will be looking to develop more targeted AI laws after the EU elections in June 2024. They will be looking at how AI impacts employment and copyright issues.

While AI technology will change going forward, Tagtstrom believes the current regulations are a good start. “Further regulations depend on what happens with the technology. We have to stay up with where the technology is going. Everyone needs to stay on top of it,” said Tagtstrom. “We don’t know where it’s going to be in two to three years, but these regulations put a framework in place. We have something to build on for regulating and enabling.”

About the Author(s)

Rob Spiegel

Rob Spiegel serves as a senior editor for Design News. He started with Design News in 2002 as a freelancer and hired on full-time in 2011. He covers automation, manufacturing, 3D printing, robotics, AI, and more.

Prior to Design News, he worked as a senior editor for Electronic News and Ecommerce Business. He has contributed to a wide range of industrial technology publications, including Automation World, Supply Chain Management Review, and Logistics Management. He is the author of six books.

Before covering technology, Rob spent 10 years as publisher and owner of Chile Pepper Magazine, a national consumer food publication.

As well as writing for Design News, Rob also participates in IME shows, webinars, and ebooks.
