In our recent blog post, we reviewed the overall impact of the new EU AI Act on the life sciences industry. The AI Act (Regulation (EU) 2024/1689), which entered into force on 1 August 2024, is the world’s first comprehensive law regulating artificial intelligence (AI). While some provisions are already applicable, the provisions relating to high-risk AI systems, which are those most likely to impact the life sciences industry, will apply from 2 August 2027.

The AI Act is horizontal legislation that applies across sectors. However, it is particularly important that developers and manufacturers of AI medical devices (AIMD) understand its implications, given that AI systems used in these products will generally be categorised as high-risk and the AI Act imposes requirements and obligations in addition to those in the EU Medical Devices Regulations.

Now that the AI Act is in force, the Commission has started to set up some of the infrastructure the Act requires and to provide briefings for stakeholders on key areas of compliance. We set out a summary of recent activity below.

The AI Office and AI Board

As part of the new AI regime, the AI Act created the European AI Office within the European Commission (the AI Office). This is intended to create a centre of AI expertise in the EU; it will coordinate and monitor the implementation of the AI Act and oversee the creation of a single European AI governance system.

The AI Office’s tasks include: supporting the implementation of the AI Act and enforcing the rules for general-purpose AI; strengthening the development and use of trustworthy AI; and fostering international cooperation. The AI Act also gives the AI Office powers, including the ability to conduct evaluations of general-purpose AI models and to request information and measures from AI model providers.

The AI Board will also be established, comprising one representative from each Member State, to ensure consistency and to coordinate implementation between national competent authorities.

It is hoped that the AI Office will be a useful resource during the implementation of the AI Act and beyond, given its commitment to collaborating with institutions, experts and stakeholders. For example, it will have a Scientific Panel of independent experts to work with the scientific community and an Advisory Forum to represent stakeholders. The Office also intends to create opportunities for cooperation between providers of AI models and systems, and with the open source community, “to share best practices and contribute to the development of codes of conduct and codes of practice”.

Guidance for stakeholders

In May, the AI Office held its first webinar, on the “Risk management logic of the AI Act and related standards”. The AI Office is expected to hold further webinars, and the Commission has launched an AI innovation package to support startups and SMEs in developing trustworthy AI that complies with EU values and rules.

This first webinar focused on the approach to AI systems that are categorised as high risk. High-risk AI systems include those relating to a product that is required to undergo “the conformity assessment procedure with a third-party conformity assessment body”. This therefore includes AIMD that require the involvement of a Notified Body when undergoing conformity assessment.

The webinar emphasised that risk management systems and quality management systems are mandatory for high-risk AI systems:

  • Risk management: the AI Act imposes a requirement to establish, implement, document and maintain a risk management system for high-risk AI systems. This risk management system must run through the entire lifecycle of the high-risk AI system, with regular systematic review and updating. In identifying the most appropriate risk management measures, AI developers should ensure safety by design, protective measures where appropriate and safety information (including training for deployers where appropriate). Testing of the system is key to identifying the appropriate risk management measures and complying with the AI Act. Testing must be performed during development and before placing on the market, and should be carried out against predefined metrics and appropriate probabilistic thresholds.
  • Quality management: the AI Act obliges providers of high-risk AI systems to put in place a quality management system (QMS). The QMS must comply with the AI Act and should be documented in the form of written policies, procedures and instructions. It must cover the lifecycle of the AI system, both pre-market and post-market, and operate continuously for systems that continue to learn. The AI Act provides a list of 13 aspects that the QMS must cover. These include: strategy for regulatory compliance; design control and verification; examination, test and validation of the AI system; technical specifications; quality control; reporting of serious incidents; the post-market monitoring system; data management systems and procedures; the risk management system; communication with authorities; document and record keeping; resource management; and an accountability framework.

It is notable that many of these requirements overlap with those in the EU Medical Devices Regulations (the MDR and IVDR) and an important area of future guidance will be how these rules can be met across both regimes.

Standards

The webinar also addressed the topic of AI standards. It was emphasised that AI is a very active area of standardisation and that the Commission has issued a standardisation request to support the AI Act. International standards that address various aspects of AI are also relevant, but additional standards will be required to fill in gaps. Relevant and recently published AI standards include ISO/IEC 23894:2023 (guidance on risk management) and ISO/IEC 42001:2023 (AI management system).

Harmonised standards need to be tailored to the risks identified and addressed by the AI Act, be sufficiently prescriptive and clear, and be aligned with the state of the art. They should be AI system- and product-oriented, apply across relevant sectors and types of AI system, and cover all trustworthiness requirements. In the life sciences context, it will be important that such standards also take into account the standards applicable to medical devices and set out how the requirements of both the MDR/IVDR and the AI Act can be met.

AI Pact

Before the AI Act becomes fully applicable, the Commission is promoting the AI Pact, overseen by the AI Office, with the aim of enabling businesses to share best practices and join common activities. The Pact is designed to promote voluntary action to start implementing the requirements of the AI Act ahead of the legal deadlines.

The AI Pact is aimed at helping companies impacted by the AI Act to build a common understanding of the Act’s objectives, prepare for the incoming requirements relating to high-risk AI systems and build trust across the industry.

The Pact is structured around two pillars:

  • Pillar I: gathering and exchanging best practices with the AI Pact network, and
  • Pillar II: facilitating and communicating company pledges.

Pledges under Pillar II are currently under discussion with relevant organisations, with a final version of the pledges to be presented at a workshop in September and signed by the end of that month. You can join the AI Pact initiative here.

Additional information

For more information on the impact of the AI Act on the life sciences industry, you can read our expert chapter in The International Comparative Legal Guide, or listen to our Practising Law Institute webinar, The EU AI Act Is Entering Into Force: What Companies Need to Know (pli.edu) (subscription required).