What to Expect from the EU's Artificial Intelligence Act in 2025

Anne-Gabrielle Haie, a partner with Steptoe LLP, shares what companies can expect from the landmark Artificial Intelligence Act going into 2025.

Omar Ford

December 10, 2024

11 Min Read
Image courtesy of Steptoe LLP

Now that the European Parliament and the Council of the EU have formally adopted the AI Act, industries are asking: what's next?

Anne-Gabrielle Haie, a partner with Steptoe LLP, speaks to MD+DI about next steps and what medtech and other industries can expect from the act in 2025.  

Thanks for taking the time to speak with us, Anne-Gabrielle Haie. It's been a few months since the EU AI Act was passed. Before we jump into the meat of our conversation, let's answer this question: how does the Act define AI? What is considered AI under the Act?

Haie: The EU AI Act defines an AI system as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

Several key aspects emerge from this definition. Firstly, autonomy is a defining characteristic, meaning an AI system must have the capability to operate independently of human intervention. This autonomy allows the system to perform tasks and make decisions without continuous human oversight. Secondly, an AI system may exhibit adaptiveness after deployment. Such systems can demonstrate self-learning capabilities, enabling them to adapt and improve their performance based on new data and experiences while in use. This adaptiveness is crucial for the system's ongoing effectiveness and relevance.


Another fundamental aspect is the capacity to infer and generate outputs. AI systems must be able to process inputs and produce outputs that can influence both physical and virtual environments. This inferencing capability involves deriving models or algorithms from inputs, which goes beyond basic data processing. Techniques such as machine learning and logic- and knowledge-based approaches are often used to achieve this capability.

This definition is broad and largely inspired by the OECD’s definition of AI systems. It aims to be technology-neutral and innovation-proof, distinguishing AI systems from simpler traditional software systems. It explicitly excludes systems that operate solely based on rules defined by humans for automatic execution.

Although AI models are essential components of AI systems, they are not considered AI systems on their own. They require additional components, such as user interfaces, to become fully functioning AI systems. General-purpose AI (GPAI) models typically exhibit flexibility in generating content (e.g., text, audio, images, or video) and accommodating a wide array of tasks.


The AI Act does foresee certain exceptions, where some AI systems or GPAI models that meet the aforementioned definitions are not subject to the Act's obligations. For instance, AI systems or models specifically developed and deployed solely for scientific research and development purposes are exempt from this regulation.

Can you set some background on where we are with enforcements and deadlines pertaining to the act as we go into 2025?

Haie: As of now, the AI Act has not yet entered into application, and thus, enforcement has not started. However, this is set to change soon, with several key provisions scheduled to come into effect starting Feb. 2, 2025.

To provide a clear timeline, here are the key deadlines associated with the AI Act:

  • Feb. 2, 2025: This date marks the entry into application of the provisions related to prohibited AI practices. Additionally, the general provisions of the AI Act, which cover the subject matter, scope, definitions, and obligations related to AI literacy, will also come into effect.

  • Aug. 2, 2025: Obligations specific to general-purpose AI (GPAI) models will begin to apply. This date also marks the start of the application of provisions related to governance and penalties for non-compliance with the EU AI Act, with the exception of provisions related to fines for providers of GPAI models.

  • Aug. 2, 2026: Obligations applicable to high-risk AI systems, as referred to in Annex III, will come into effect. Additionally, AI systems that are subject to specific transparency obligations will also need to comply starting from this date. Measures in support of innovation, such as AI regulatory sandboxes, testing in real-world conditions, and measures for SMEs and startups, will also apply. Furthermore, provisions related to fines for providers of GPAI models will begin to apply.

  • Aug. 2, 2027: The obligations applicable to high-risk AI systems intended to be used as a safety component of a product, or which are themselves products, covered by the EU legislation listed under Annex I and subject to a third-party conformity assessment procedure, will enter into application.


It is important to note that AI systems and GPAI models already placed on the market or put into service will be granted extended time for compliance with this regulation.

In summary, while most of the provisions of the EU AI Act will become applicable on Aug. 2, 2026, certain provisions will apply either earlier or later than this date. The timeframe for the application of the different provisions of the AI Act varies depending on factors such as the classification of an AI system (e.g., prohibited, high-risk), the date when an AI system or GPAI model is placed on the market or put into service in the EU, and the purpose for which the AI system is used.

I’m wondering how various industries are reacting to the act. In other words, do we see the life sciences (medical device) industry more prepared than, say, the automobile or aviation industry? Are some still oblivious, or are some not taking the act seriously?

Haie: There is noticeable anxiety across various industries due to the legal uncertainty stemming from the numerous unclear concepts within the AI Act. Organizations are struggling to determine whether they fall within the scope of the Act and to understand their subsequent obligations. The onerous obligations set by the AI Act create a sense of being overwhelmed, as compliance presents several challenges. The broad scope and detailed provisions of the Act can be daunting, particularly for smaller players across all industries, who may find it difficult to allocate the necessary resources to meet the requirements. Additionally, companies without direct regulatory obligations or those operating in non-EU markets might underestimate the Act’s extraterritorial reach and its implications for global supply chains.

The life sciences industry appears to be more prepared for the AI Act compared to other sectors. This preparedness is largely due to the industry’s extensive experience with strict regulatory frameworks and its long-standing use of AI. Companies in this sector are accustomed to navigating complex regulations, which provides them with a significant advantage in adapting to the new requirements. In contrast, industries that are less dependent on AI or have fewer high-risk applications, such as retail or traditional manufacturing, may be slower to act. This is especially true for smaller firms or sectors with limited AI adoption, which may still be oblivious to the Act’s implications.

However, despite this preparedness, the life sciences industry will encounter specific challenges. While many AI systems in the sector are likely to be classified as high-risk under the Act, the relevant concepts related to high-risk AI systems lack sufficient clarity, and it is not fully clear how affected life sciences companies are expected to manage regulatory overlaps between product legislation (e.g., the Medical Device Regulation and the In Vitro Diagnostic Medical Devices Regulation) and the obligations imposed by the AI Act.

We’ve talked about how the act will impact industry and commercialization – but what kind of impact will this have on AI research and academic projects? Will there be any at all?

Haie: The AI Act is poised to significantly influence AI research and academic projects within the EU. On the positive side, it promotes ethical AI development, guiding researchers towards creating responsible and beneficial AI systems. The Act encourages collaboration between industry, academia, and policymakers, fostering knowledge sharing and innovation. Additionally, the EU's announced increased investment in AI research, potentially spurred by the AI Act, can drive innovation and open new research opportunities.

However, the AI Act also presents potential challenges. Compliance requirements may burden researchers, particularly those working on smaller-scale projects. The Act’s focus on risk mitigation could discourage experimentation and risk-taking, which are essential for groundbreaking research. Furthermore, resource constraints at smaller academic institutions or research labs may limit their ability to meet the same standards as industry, affecting their competitiveness.

That being said, it is important to note that the Act explicitly exempts AI systems developed solely for scientific research and development. Academic projects are largely excluded from compliance obligations until those systems are placed on the market or tested in real-world conditions.

In summary, while the AI Act’s direct impact on academic research is limited, its broader effects on the AI ecosystem will likely shape research priorities and methods. Researchers will need to engage with the evolving regulatory landscape to ensure their work remains relevant and compliant as it transitions toward real-world applications. The long-term effects will depend on how the Act is implemented and interpreted.

We can’t talk about AI without discussing AI bias and discrimination. How does the act account for this?

Haie: The EU AI Act places significant emphasis on addressing AI bias and discrimination, recognizing that these issues pose serious risks to fundamental rights. To mitigate these risks, the Act establishes robust requirements for the development, deployment, and monitoring of AI systems, particularly those classified as high-risk.

To prevent harmful AI practices, the Act explicitly prohibits certain AI practices, including those that manipulate individuals through subliminal techniques or exploit vulnerabilities based on age, disability, or a specific social or economic situation. These provisions aim to prevent discriminatory uses of AI that could harm individuals or groups.

The AI Act tackles AI bias and discrimination through stringent requirements for high-risk AI systems. These systems are required to use datasets that are representative, free of errors, and complete, which is critical for minimizing biases in AI models, especially in sensitive areas like recruitment, credit scoring, healthcare, and law enforcement. Developers must assess and mitigate risks of discriminatory outcomes during the design and testing phases, implementing strategies to prevent indirect discrimination arising from imbalanced or incomplete training data. Additionally, the AI Act mandates transparency and accountability, ensuring that users of high-risk AI systems are informed about the system's functioning and decision-making processes, making it easier to identify and address potential biases.

The Act emphasizes human oversight to counteract discriminatory AI decisions. Furthermore, the AI Act also requires certain deployers of high-risk AI systems to conduct a fundamental rights impact assessment. This involves analyzing potential adverse effects, including biases or discriminatory impacts, and documenting measures taken to mitigate these risks. This assessment ensures that AI systems are designed to respect human rights and promote fairness.

Finally, the AI Act imposes ongoing monitoring and post-market obligations on providers and deployers to continually monitor deployed AI systems for discriminatory outcomes and take corrective actions to address them.

The Act also applies to General-Purpose AI models, ensuring that these models do not propagate biases, especially when integrated into downstream applications that significantly impact fundamental rights.

In conclusion, the EU AI Act provides a comprehensive framework to combat AI bias and discrimination, emphasizing high-quality data, transparency, accountability, and ongoing monitoring.

About the Author

Omar Ford

Omar Ford is a veteran reporter in the field of medical technology and healthcare journalism. As Editor-in-Chief of MD+DI (Medical Device and Diagnostics Industry), a leading publication in the industry, Ford has established himself as an authoritative voice and a trusted source of information.

Ford, who has a bachelor's degree in print journalism from the University of South Carolina, has dedicated his career to reporting on the latest advancements and trends in the medical device and diagnostic sector.

During his tenure at MD+DI, Ford has covered a wide range of topics, including emerging medical technologies, regulatory developments, market trends, and the rise of artificial intelligence. He has interviewed influential leaders and key opinion leaders in the field, providing readers with valuable perspectives and expert analysis.

 
