On 20 March, the Commission proposed several measures intended to boost biotechnology and biomanufacturing in the EU, including the Commission Communication Building the future with nature: Boosting Biotechnology and Biomanufacturing in the EU; and Questions & Answers on the Commission Communication.

The Communication, while directed towards the broader biotech sector, refers to the aim of creating a resilient EU biotech ecosystem to safeguard the supply of innovative and generic medicines. It recognizes the significance of vaccine and mRNA technology research for cancer, cardiovascular, infectious, and rare diseases, as well as the role of AI in personalized healthcare and diagnostics, including generative AI for medicines discovery and complex genetic diseases.

At the same time, it acknowledges the challenges facing the biotech sector in the EU, including complex regulatory obstacles at both national and EU level and uncertainty over the return on investments. Continue Reading Commission Communication on biotechnology and biomanufacturing

This digest covers key virtual and digital health regulatory and public policy developments during February 2024.

Of note, the UK continues to pursue a “pro-innovation” flexible approach to the regulation of AI. As outlined in the UK government’s response to the public consultation, the government will develop a set of core principles for regulating AI, while leaving regulatory authorities, like the Medicines and Healthcare products Regulatory Agency (MHRA), discretion over how the principles apply in their respective sectors. A central governmental function will coordinate regulation across sectors and encourage collaboration. The government’s aim with this approach is to enable the UK to remain flexible enough to address the changing AI landscape, while being robust enough to address key concerns. This is in sharp contrast to the position in the EU, where the EU AI Act is reaching the conclusion of the legislative process. Continue Reading Virtual and Digital Health Digest, March 2024

A version of this article was first published in Life Sciences IP Review

There is currently no specific legislation in the UK that governs AI, or its use in healthcare. Instead, a number of general-purpose laws apply that have to be adapted to specific AI technologies. As a step towards a more coherent approach, the government recently published its response to its consultation on regulating AI in the UK. This maintains the government’s “pro-innovation” framework of principles, to be set out in guidance rather than legislation, which will then be implemented by regulatory authorities in their respective sectors, such as by the MHRA for medicines. The MHRA has already started this process and signalled itself as an early adopter of the UK government’s approach. The hope is that this will lead to investment in the UK by life science companies, with the UK seen as a first-launch country for innovative technologies. Continue Reading The UK’s pro-innovation approach to AI: What does this mean for life science companies?

The end of 2023 featured two significant judgments concerning AI inventions: (i) a highly anticipated decision from the Supreme Court in Thaler on the ability of AI systems to be named inventors of patents; and (ii) a decision from the High Court in Emotional Perception considering the application of the computer program exclusion in the UK, leading to prompt changes in patent examination practices by the UKIPO. The Thaler decision was unsurprising and consistent with decisions in other jurisdictions. Consequently, this article focuses on the second of these judgments, especially as Emotional Perception could have ramifications for life sciences companies utilising artificial neural networks (ANNs); inventions using ANNs will no longer be excluded from patentability in the UK on the basis that they engage the computer program exclusion. Continue Reading Landmark UK High Court decision makes it easier to patent AI-related inventions that utilise ANNs

On 19 July 2023, the European Medicines Agency (EMA) published a draft Reflection paper on the use of artificial intelligence (AI) in the lifecycle of medicines (the Paper). The Paper recognises the value of this technology as part of the digital transformation within healthcare, and acknowledges its increasing use and potential to “support the acquisition, transformation, analysis, and interpretation of data within the medicinal product lifecycle”, provided of course it is “used correctly”.

The Paper reflects EMA’s early experience with and considerations on the use of AI, and gives a sense of how EMA expects applicants and holders of marketing authorisations to use AI and machine learning (ML) tools. The EMA has made clear that the use of AI should comply with existing rules on data requirements as applicable to the particular function that the AI is undertaking. It is clear that any data generated by AI/ML will be closely scrutinised by the EMA, and a risk-based approach should be taken depending on the AI functionality and the use for which the data is generated.

The Paper is open for consultation until 31 December 2023. EMA also plans to hold a workshop on 20-21 November 2023 to further discuss the draft Paper. EMA’s plan is to use the feedback from the public consultation to finalise the Paper and produce future detailed guidance. Our summary below sets out the key takeaways and the key issues that arise in the Paper. Continue Reading EMA publishes first draft of reflection paper on the use of AI in the medicinal product lifecycle

On June 14, 2023, an overwhelming majority of the European Parliament (Parliament) voted to pass the Artificial Intelligence Act (AI Act), marking another major step toward the legislation becoming law. As we previously reported, the AI Act regulates artificial intelligence (AI) systems according to risk level and imposes highly prescriptive requirements on systems considered to be high-risk. The AI Act has a broad extraterritorial scope, sweeping into its purview providers and deployers of AI systems regardless of whether they are established in the EU. Businesses serving the EU market and selling AI-derived products or deploying AI systems in their operations should continue preparing for compliance.

Now, the Parliament, Council, and Commission have embarked on the trilogue, a negotiation among the three bodies to arrive at a final version for ratification by the Parliament and Council. They aim for ratification before the end of 2023 with the AI Act to come into force two (or possibly three) years later.

In our recent advisory, we summarize the major changes introduced by the Parliament and guide businesses on preparing for compliance with the substantial new mandates the legislation will impose. Continue Reading European Parliament Adopts Its Version of AI Act

The MHRA is continuing to publish details on how software and AI medical devices will be regulated in the UK post-Brexit, with the aim of making the UK an attractive place to launch such products. The MHRA’s recent updates to its ‘Software and AI as a Medical Device Change Programme’ (the Change Programme) intend to “deliver bold steps to provide a regulatory framework that provides a high degree of protection for patients and public, but also makes sure that the UK is recognised globally as a home of responsible innovation for medical device software looking towards a global market”.

The MHRA has also recently announced it will extend the period during which EU CE marks on medical devices (including for software) will be accepted on the UK market, until July 2024.

We set out an overview of these updates below. Continue Reading Latest on software and AI devices from the MHRA

There is currently no specific legislation in the UK that governs AI, or its use in healthcare. Instead, a number of general-purpose laws apply. These laws, such as the rules on data protection and medical devices, have to be adapted to specific AI technologies and uses. They sometimes overlap, which can cause confusion for businesses trying to identify the relevant requirements that have to be met, or to reconcile potentially conflicting provisions.

As a step towards a clearer, more coherent approach, on 18 July, the UK government published a policy paper on regulating AI in the UK. The government proposes to establish a pro-innovation framework of principles for regulating AI, while leaving regulatory authorities discretion over how the principles apply in their respective sectors. The government intends the framework to be “proportionate, light-touch and forward-looking” to ensure that it can keep pace with developments in these technologies, and so that it can “support responsible innovation in AI – unleashing the full potential of new technologies, while keeping people safe and secure”. This balance is aimed at ensuring that the UK is at the forefront of such developments.

The government’s proposal is broadly in line with the MHRA’s current approach to the regulation of AI. In the MHRA’s response to the consultation on the medical devices regime in the UK post-Brexit, it announced similarly broad-brush plans for regulating AI-enabled medical devices. In particular, no definition of AI as a medical device (AIaMD) will be included in the new UK legislation, and the regime is unlikely to set out specific legal requirements beyond those being considered for software as a medical device. Instead, the MHRA intends to publish guidance that clinical performance evaluation methods should be used for assessing safety and meeting essential requirements of AIaMD, and has also published the Software and AI as a medical device change programme to provide a regulatory framework with a high degree of protection for patients and public. Continue Reading UK Policy Paper on regulation of AI

The UK’s Medicines and Healthcare products Regulatory Agency (MHRA), the US Food and Drug Administration (FDA) and Health Canada have recently published a joint statement identifying ten guiding principles to help inform the development of Good Machine Learning Practice (GMLP). The purpose of these principles is to “help promote safe, effective, and high quality medical devices that use artificial intelligence and machine learning (AI/ML)”.

The development and use of medical devices that use AI and ML have grown considerably over the last few years and will continue to do so. It has been recognised that such technologies have the potential to transform the way in which healthcare is delivered globally, through the analysis of vast amounts of real-world data from which software algorithms can learn and improve. However, as these technologies become more complex and nuanced in their application, this brings into question how they should be overseen and regulated. Crucially, it must be ensured that such devices are safe and beneficial to those who use them, whilst recognising associated risks and limitations. Continue Reading Ten International Guiding Principles on Good Machine Learning in Medical Devices