Earlier this week, the EMA published its Reflection paper on the use of AI in the medicinal product life cycle. As set out in our previous blog post, the EMA’s draft reflection paper was published for consultation in July 2023, and following many months of review of the feedback collected from stakeholders, the final Paper has now been adopted. The Paper reflects the views of the EMA on the use of AI from drug discovery and non-clinical development through clinical trials, manufacturing and post-authorisation. The Paper recognises the utility of AI in digital transformation and notes new risks that need to be mitigated to ensure patient safety and data integrity.

As set out in the consultation, the Paper encourages a risk-based approach to the development, deployment and performance monitoring of AI/ML tools, which allows developers to proactively define risks. The degree of risk will depend on the AI technology and data quality, as well as the use to which the technology is put, including the degree of influence the AI technology exerts. The risk may vary throughout the product life cycle, and users are expected to take into account the particular context of use and the impact of the data generated in order to manage those risks.

The Paper is not significantly different from the draft version of July 2023. We set out the key points below and highlight the following changes made in the updated version:

  • In addition to marketing authorisation applicants and holders, the Paper is now also addressed to sponsors and manufacturers planning to deploy AI technology.
  • The OECD definition of AI is used.
  • “High risk” is replaced by “high patient risk” and “high regulatory impact” to avoid confusion with high-risk AI systems under the AI Act.
  • The glossary has been expanded (e.g., with new definitions of frozen models and neural networks).

The EMA has highlighted that the adoption of the AI Act was taken into consideration when finalising the Paper; see our recent blog post for a summary of the impact of the AI Act on the Life Sciences Industry. The EMA has also stated that some comments have not been addressed in this Paper but will be in future EMA scientific guidance.

Scope of the Paper

The use of AI by physicians to support clinical decisions and imaging analysis is increasingly common. EMA’s Paper covers every step in the medicinal product lifecycle and identifies the areas that fall within EMA’s or national authorities’ remit. From drug discovery to pharmacovigilance, marketing authorisation applicants and holders will need to have in place mechanisms to ensure that AI and ML tools are transparent, accessible, validated and monitored. EMA highlights that it is the responsibility of applicants and holders to ensure that “all algorithms, models, datasets, and data processing pipelines used are fit for purpose and are in line with legal, ethical, technical, scientific, and regulatory standards as described in EU legislation, GxP standards and current EMA guidelines”.

The Paper covers the following stages in the medicines lifecycle:

  • Drug discovery: EMA highlights that if AI is used during this stage and the results are used as part of the body of evidence submitted for regulatory review, the principles for non-clinical development should be followed. It is, however, recognised that the risk for marketing authorisation applicants may be low considering their roles and obligations at these stages.
  • Non-clinical development: Uses that affect patient safety (such as efficacy and safety modelling that informs the design of “first-in-human” studies), that are potentially relevant to the assessment of the benefit-risk balance of a medicinal product, or that have high regulatory impact in another manner, should be developed and tested accordingly. EMA recommends that, where applicable, applicants consider OECD guidance on GLP and advisory documents on the application of GLP principles to computerised systems and data integrity. In addition, SOPs should be updated to cover AI/ML use.
  • Clinical trials: AI/ML systems used in clinical trials should comply with ICH GCP guidance. If the use could be of high regulatory impact or high patient risk in a clinical trial, and the method has not been previously qualified by the EMA for the specific context of use, the AI/ML system will likely be subject to comprehensive assessment during authorisation procedures and inspection, and, where necessary, related information should be included in the protocol.
  • Precision medicine: The use of AI/ML in relation to indication or posology is considered as high patient risk and high regulatory impact by the EMA. Treatment individualisation based on AI/ML in these settings must be subject to “special care”, and EMA recommends that companies provide guidance for prescribers and include fall-back treatment strategies where technical failures occur.
  • Product information: Quality review mechanisms should be in place to ensure that AI-generated text (used for drafting, compiling, editing, translating, tailoring or reviewing product information) is factually and syntactically correct.
  • Manufacturing: AI/ML used in the manufacturing of medicinal products should follow ICH quality risk management principles and GMP standards.
  • Post-authorisation phase: This stage includes several activities, such as post-authorisation studies and pharmacovigilance. AI/ML applications used for the classification and gravity assessment of adverse event reports must be closely monitored by the marketing authorisation holder. In addition, where such tools are used for post-authorisation studies that are a condition of the marketing authorisation, they should be agreed with the regulators in advance, during the assessment of the authorisation.

Where AI/ML use is expected to impact, even potentially, the benefit-risk balance of the medicinal product, developers are advised to interact with the regulators as early as possible (e.g., through scientific advice or qualification of innovative development methods). The higher the potential regulatory impact or patient risk, the greater the scrutiny by the relevant competent authorities. This means that marketing authorisation applicants/holders will have to consider all the stages at which AI/ML systems have been used and assess the effect on the benefit-risk balance of the medicinal product.

Technical requirements

The Paper covers the technical parameters that need to be taken into consideration when using AI/ML systems throughout the stages of the medicinal product lifecycle. These include, in particular:

  • Data acquisition and augmentation: The identification of bias, and strong efforts to avoid it, are of high importance to EMA. The sources of data and any processing activity should be documented in detail, allowing traceability in line with GxP requirements. Exploratory data analysis is expected to be performed to ensure any bias has been considered.
  • Training, validation and test data: EMA encourages “the practice of an early train-test split into separate and unrelated datasets, prior to any normalisation or other types of processing where aggregated measures are used” but highlights that risks of direct or indirect data leakage cannot be excluded completely (see the sketch after this list). It advises that AI/ML models intended for high patient risk and/or high regulatory impact settings should be prospectively tested using newly acquired and representative data.
  • Model development: Precise guidance cannot be provided given the variety of modelling approaches and architectures. EMA’s position is that sponsors, applicants or holders of marketing authorisations must ensure that robust models are applied, that traceable documentation is established and maintained, and that secondary assessments of development practices are conducted. If a third-party AI model or service is to be used with high regulatory impact or high patient risk, the manufacturer should provide the details covering the specific context of use through a methodology qualification process.
  • Performance assessment: The Paper highlights the importance of metrics and the related parameters for the AI/ML model assessment.
  • Interpretability and explainability: Black box models (which are less transparent and interpretable) may be allowed when transparent models are unsatisfactory and this can be substantiated. However, detailed information relating to the model’s architecture, training, validation, etc., is expected.
  • Model deployment: AI/ML should be deployed in line with the risk-based approach. Performance should be monitored, including through routine sampling of data or controls from external quality assurance programmes, and compliance should be regularly evaluated.
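To make the data-leakage point above concrete, the following is a minimal sketch in Python (using scikit-learn; the synthetic dataset and logistic regression model are our own illustrative assumptions, not drawn from the Paper) of the early train-test split EMA describes, with normalisation fitted on the training data only:

```python
# Minimal illustrative sketch: split *before* any normalisation so that
# scaling statistics are learned from the training data alone, avoiding
# the direct data leakage the Paper warns about. The synthetic dataset
# and model below are assumptions for illustration only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                        # stand-in feature matrix
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)  # stand-in labels

# 1. Early split into separate, unrelated datasets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# 2. Fit the normalisation on the training set only...
scaler = StandardScaler().fit(X_train)

# 3. ...then merely apply it to the held-out test set.
model = LogisticRegression().fit(scaler.transform(X_train), y_train)
print("held-out accuracy:", model.score(scaler.transform(X_test), y_test))
```

Fitting the scaler after the split is what keeps aggregated measures (here, the mean and standard deviation) from carrying test-set information into training; as the Paper notes, this reduces, but does not completely exclude, the risk of leakage.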

Medical devices

AI/ML systems may qualify as medical devices regulated under the EU Medical Devices or In Vitro Diagnostic Medical Devices Regulations, and may also be high-risk AI systems under the EU AI Act. While the qualification and classification of such systems is defined by the relevant legislation, EMA will assess AI/ML medical devices used in clinical trials that generate data to support marketing authorisation applications for medicinal products. In particular, EMA will consider whether use of the AI/ML can generate robust data to support the marketing authorisation application, while the regulatory status of the AI/ML will be considered by other relevant authorities.

Ethical aspects

Compliance with the ethical principles set out in the guidelines for trustworthy AI and presented in the Assessment List for Trustworthy Artificial Intelligence for self-assessment (ALTAI) is advised. EMA’s Paper also provides that an impact analysis of the AI/ML tools should be systematically conducted at an early stage. In addition, the impact of use should be considered while the relevant medicinal product is still in the development phase.

Integrity and data protection

The Paper recommends that security- and integrity-preserving measures (e.g., anonymisation of personal data) are considered and implemented in all aspects of the use of AI. For example, prior to transferring large language models that use actual personal data (as opposed to training data) to a less secure environment, such personal data must be anonymised and/or other measures must be taken to address the data protection and security risks posed by that less secure environment (see also our previous blog post on EMA’s guidelines on the use of Large Language Models).
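As a purely illustrative sketch of the kind of measure contemplated here (the record fields, salted-hash scheme and helper function below are our own assumptions, not EMA requirements), direct identifiers could be stripped and record keys pseudonymised before any data leaves the secure environment:

```python
# Illustrative sketch only: remove direct identifiers and pseudonymise the
# record key with a salted hash before export to a less secure environment.
# Note that under the GDPR pseudonymised data remain personal data; true
# anonymisation requires a documented, case-specific assessment.
import hashlib

SALT = b"replace-with-a-secret-salt"   # kept inside the secure environment
DIRECT_IDENTIFIERS = {"name", "date_of_birth", "address"}

def pseudonymise(record: dict) -> dict:
    """Return a copy with direct identifiers dropped and the key hashed."""
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    out["subject_id"] = hashlib.sha256(
        SALT + str(record["subject_id"]).encode()
    ).hexdigest()
    return out

print(pseudonymise({
    "subject_id": 1001,
    "name": "Jane Doe",
    "date_of_birth": "1980-01-01",
    "address": "Example Street 1",
    "ae_term": "headache",
}))
```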

The Paper also highlights that, as an overarching general rule, the processing of personal data must, at all times, comply with the data protection legislation. In this context, the Paper recommends the use of data protection impact assessments focused on the use of AI in processing personal data.