On 19 July 2023, the European Medicines Agency (EMA) published a draft Reflection paper on the use of artificial intelligence (AI) in the lifecycle of medicines (the Paper). The Paper recognises the value of this technology as part of the digital transformation within healthcare, and acknowledges its increasing use and potential to “support the acquisition, transformation, analysis, and interpretation of data within the medicinal product lifecycle”, provided of course it is “used correctly”.
The Paper reflects EMA’s early experience with and considerations on the use of AI, and gives a sense of how EMA expects applicants and holders of marketing authorisations to use AI and machine learning (ML) tools. The EMA has made clear that the use of AI should comply with existing rules on data requirements as applicable to the particular function that the AI is undertaking. It is clear that any data generated by AI/ML will be closely scrutinised by the EMA, and a risk-based approach should be taken depending on the AI functionality and the use for which the data is generated.
The Paper is open for consultation until 31 December 2023. EMA also plans to hold a workshop on 20-21 November 2023 to further discuss the draft Paper. EMA’s plan is to use the feedback from the public consultation to finalise the Paper and produce future detailed guidance. Our summary below sets out the key takeaways and the key issues that arise in the Paper.
Use of AI by physicians to support clinical decisions and imaging analysis is increasingly common. EMA’s Paper, however, goes further: it covers every step in the medicines lifecycle and identifies the areas that fall within EMA’s or national authorities’ remit. From drug discovery to pharmacovigilance, marketing authorisation applicants and holders will need mechanisms in place to ensure that AI and ML tools are transparent, accessible, validated and monitored. EMA highlights that it is the applicants’ and holders’ responsibility to ensure that “all algorithms, models, datasets, and data processing pipelines used are fit for purpose and are in line with ethical, technical, scientific, and regulatory standards as described in GxP standards and current EMA scientific guidelines”.
The Paper covers the following stages in the medicines lifecycle:
- Drug discovery: EMA highlights that if AI is used during this stage and the results are used as part of the body of evidence submitted for regulatory review, the principles for non-clinical development should be followed. It is, however, recognised that the risk for marketing authorisation applicants may be low considering their roles and obligations at these stages.
- Non-clinical development: Pre-clinical data potentially relevant to the benefit-risk balance should be analysed on the basis of a pre-specified analysis plan. EMA recommends that, where applicable, applicants consider OECD guidance on GLP and advisory documents on the application of GLP principles to computerised systems and data integrity. In addition, SOPs should be updated to cover AI/ML use.
- Clinical trials: ICH GCP is expected to apply to AI/ML systems used in clinical trials. Models generated for clinical purposes will be subject to comprehensive assessment during authorisation procedures and where necessary, related information should be included in the protocol.
- Precision medicine: EMA considers the use of AI/ML in this area high-risk due to the level of regulatory impact and the risks for patients. Treatment individualisation based on AI/ML, in particular changes in posology, must be subject to “special care”. EMA recommends that, for changes in dosing, companies provide guidance for prescribers and include fall-back treatment strategies in case of technical failures.
- Product information: Quality review mechanisms should be in place to ensure that AI-generated text (used for drafting, compiling, translating, or reviewing product information) is factually and syntactically correct.
- Manufacturing: AI/ML used in the manufacturing of medicinal products should follow ICH quality risk management principles.
- Post-authorisation phase: This stage includes several activities, such as post-authorisation studies and pharmacovigilance. AI/ML applications used to classify and grade the severity of adverse event reports must be closely monitored by the marketing authorisation holder. In addition, where such tools are used for post-authorisation studies that are a condition of the marketing authorisation, their use should be agreed with the regulators in advance, during the assessment of the authorisation.
Where the use of AI/ML is expected to impact, even potentially, the benefit-risk balance of the medicinal product, developers are advised to interact with the regulators as early as possible (i.e., through scientific advice or qualification of innovative development methods). The greater the potential regulatory impact or risk, the greater the scrutiny by the relevant competent authorities. Marketing authorisation applicants/holders will therefore have to consider every stage at which AI/ML systems have been used and assess the effect on the benefit-risk balance of the medicinal product.
Take, as an example, the use of AI/ML tools to select participants for clinical trials or to analyse the data obtained during the trials: while it is the responsibility of the sponsor of the clinical trial to ensure, for example, that the use of AI/ML systems complies with statistical principles for clinical trials, ethical rules and technical requirements, it is for the marketing authorisation applicant to “convince” EMA or the national authorities that data obtained through the use of AI/ML are trustworthy, reliable and robust. If the applicant does not have a good understanding of the underlying processes or of the functioning of the AI/ML model, it risks being unable to appropriately support its application for a marketing authorisation.
The Paper covers the technical parameters that need to be taken into consideration when using AI/ML systems throughout the stages of the medicinal product lifecycle. These involve in particular:
- Data acquisition and augmentation: The identification of human bias, and strong efforts to avoid it, are of high importance to EMA. The sources of data and any processing activity should be documented in detail, allowing traceability in line with GxP requirements. Exploratory data analyses are expected to be performed to ensure that any bias has been considered.
- Training, validation and test data: EMA encourages “the practice of an early train-test split, prior to any normalisation or other types of processing where aggregated measures are used” but highlights that risks of data leakage cannot be excluded completely. It advises that AI/ML models should be prospectively tested using newly acquired data.
- Model development: Precise guidance cannot be provided given the variety of modelling and architecture approaches. EMA’s position is that marketing authorisation applicants must ensure that robust models are applied, that traceable documentation is established and maintained, and that secondary assessments of development practices are conducted.
- Performance assessment: the Paper highlights the importance of metrics and the related parameters for the AI/ML model assessment.
- Interpretability and explainability: Black box models (which are less transparent and interpretable) may be allowed when transparent models are unsatisfactory and this can be substantiated. However, detailed information relating to the model’s architecture, training, validation, etc., is expected.
- Model deployment: AI/ML should be deployed in line with a risk-based approach. Performance should be monitored, including through routine sampling of data or controls drawn from external quality assurance programmes.
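The “early train-test split” that EMA encourages (under training, validation and test data above) guards against a specific failure mode: if normalisation statistics such as a mean and standard deviation are computed on the full dataset before splitting, information from the test set leaks into training. A minimal sketch of the point, using only the Python standard library and synthetic data (the numbers and split ratio are illustrative, not from the Paper):

```python
import random
import statistics

random.seed(0)
data = [random.gauss(0, 1) for _ in range(100)]  # synthetic measurements

# Leaky order: normalising on the full dataset first means the scaler has
# "seen" the test set, so its mean/std leak into the training features.
mean_all = statistics.mean(data)

# EMA-encouraged order: split first, then fit the normalisation on the
# training portion only.
train, test = data[:80], data[80:]
train_mean = statistics.mean(train)
train_sd = statistics.pstdev(train)

# Apply the training-set statistics to both splits.
train_scaled = [(x - train_mean) / train_sd for x in train]
test_scaled = [(x - train_mean) / train_sd for x in test]

# mean_all differs from train_mean: scaling with mean_all would have let
# test-set data influence the training features.
```

The same ordering applies to any fitted preprocessing step (imputation, scaling, feature selection), which is why the split must come before, not after, such processing.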
AI/ML systems may qualify as medical devices regulated under the EU Medical Devices Regulations. While the qualification and classification of such systems is defined by the medical devices legislation, EMA will assess AI/ML medical devices that are used in clinical trials and generate data to support marketing authorisation applications for medicinal products. In particular, EMA will consider whether the use of the AI/ML can generate robust data to support the marketing authorisation application; the regulatory status of the AI/ML itself will be considered by other relevant authorities.
In parallel, it is worth highlighting that the AI/ML systems will also be subject to the EU AI Act, once adopted. See our recent advisory for a summary of the current status of the Act.
Ethical aspects and data protection
Marketing authorisation applicants and holders are advised to comply with the ethical principles set out in the guidelines for trustworthy AI and presented in the Assessment List for Trustworthy Artificial Intelligence for self-assessment (ALTAI). EMA also advises that an impact analysis be systematically conducted at an early stage.
In addition, the collection and processing of personal data must take place in accordance with applicable data protection legislation. The Paper recommends the use of AI-focused risk assessments.
Related guidance documents
EMA highlights that many recommendations and best practices also apply to the field of AI/ML and provides a useful list of relevant guidance at the end of the Paper.