The end of 2023 featured two significant judgments concerning AI inventions: (i) a highly anticipated decision from the Supreme Court in Thaler on the ability of AI systems to be named as inventors on patents; and (ii) a decision from the High Court in Emotional Perception considering the application of the computer program exclusion in the UK, which prompted swift changes to patent examination practice by the UKIPO. The Thaler decision was unsurprising and consistent with decisions in other jurisdictions. This article therefore focuses on the second of these judgments, particularly as Emotional Perception could have ramifications for life sciences companies utilising artificial neural networks (ANNs): inventions using ANNs will no longer be excluded from patentability in the UK on the basis that they engage the computer program exclusion.
Continue Reading Landmark UK High Court decision makes it easier to patent AI-related inventions that utilise ANNs

Spurred, in part, by the COVID-19 pandemic and the need for new ways to reach patients at home, 2023 saw a boom in digital technologies and healthcare solutions: one-stop-shop telemedicine platforms, app-based remote patient monitoring, direct-to-consumer online pharmacies, software-based medical devices, and artificial intelligence/machine learning to bolster the delivery of telehealth services. Then came a robust government response. In the EU and UK, regulatory bodies grappled with the introduction of machine learning, AI, and other software into healthcare services through, for example, new guidance from the EU Medical Device Coordination Group and the UK Medicines and Healthcare products Regulatory Agency on software medical devices, the EU's AI Act and the UK government's AI White Paper, the European Medicines Agency's reflection paper on the use of AI in the product lifecycle, the EU-U.S. Data Privacy Framework and the equivalent UK-U.S. data bridge, and the European Health Data Space.

We call this the “Race to Regulate.” This push-pull dynamic between digital health innovation and government regulation is key to evaluating regulatory risks in today’s shifting legal landscape. This digest seeks to keep up with these changes and provide you with an overview of the key guidelines and developments as the landscape develops. As we come to the end of 2023 and publish our latest Digest, join us on December 13 as we unpack pivotal moments in the 2023 Race to Regulate and discuss what’s next for virtual and digital health.
Continue Reading Virtual and Digital Health Digest and webinar

On 19 July 2023, the European Medicines Agency (EMA) published a draft Reflection paper on the use of artificial intelligence (AI) in the lifecycle of medicines (the Paper). The Paper recognises the value of this technology as part of the digital transformation within healthcare, and acknowledges its increasing use and potential to “support the acquisition, transformation, analysis, and interpretation of data within the medicinal product lifecycle”, provided of course it is “used correctly”.

The Paper reflects EMA’s early experience with and considerations on the use of AI, and gives a sense of how EMA expects applicants and holders of marketing authorisations to use AI and machine learning (ML) tools. The EMA has made clear that the use of AI should comply with existing rules on data requirements as applicable to the particular function that the AI is undertaking. It is clear that any data generated by AI/ML will be closely scrutinised by the EMA, and a risk-based approach should be taken depending on the AI functionality and the use for which the data is generated.

The Paper is open for consultation until 31 December 2023. EMA also plans to hold a workshop on 20-21 November 2023 to further discuss the draft Paper. EMA intends to use the feedback from the public consultation to finalise the Paper and produce future detailed guidance. Our summary below sets out the key takeaways and the key issues arising from the Paper.
Continue Reading EMA publishes first draft of reflection paper on the use of AI in the medicinal product lifecycle

On June 14, 2023, an overwhelming majority of the European Parliament (Parliament) voted to pass the Artificial Intelligence Act (AI Act), marking another major step toward the legislation becoming law. As we previously reported, the AI Act regulates artificial intelligence (AI) systems according to risk level and imposes highly prescriptive requirements on systems considered to be high-risk. The AI Act has a broad extraterritorial scope, sweeping into its purview providers and deployers of AI systems regardless of whether they are established in the EU. Businesses serving the EU market and selling AI-derived products or deploying AI systems in their operations should continue preparing for compliance.

Now, the Parliament, Council, and Commission have embarked on the trilogue, a negotiation among the three bodies to arrive at a final version for ratification by the Parliament and Council. They aim for ratification before the end of 2023 with the AI Act to come into force two (or possibly three) years later.

In our recent advisory, we summarize the major changes introduced by the Parliament and guide businesses on preparing for compliance with the substantial new mandates the legislation will impose.
Continue Reading European Parliament Adopts Its Version of AI Act

The MHRA is continuing to publish details on how software and AI medical devices will be regulated in the UK post-Brexit, with the aim of making the UK an attractive place to launch such products. The MHRA’s recent updates to its ‘Software and AI as a Medical Device Change Programme’ (the Change Programme) are intended to “deliver bold steps to provide a regulatory framework that provides a high degree of protection for patients and public, but also makes sure that the UK is recognised globally as a home of responsible innovation for medical device software looking towards a global market”.

The MHRA has also recently announced it will extend the period during which EU CE marks on medical devices (including for software) will be accepted on the UK market, until July 2024.

We set out an overview of these updates below.
Continue Reading Latest on software and AI devices from the MHRA

There is currently no specific legislation in the UK that governs AI, or its use in healthcare. Instead, a number of general-purpose laws apply. These laws, such as the rules on data protection and medical devices, have to be adapted to specific AI technologies and uses. They sometimes overlap, which can cause confusion for businesses trying to identify the relevant requirements that have to be met, or to reconcile potentially conflicting provisions.

As a step towards a clearer, more coherent approach, on 18 July, the UK government published a policy paper on regulating AI in the UK. The government proposes to establish a pro-innovation framework of principles for regulating AI, while leaving regulatory authorities discretion over how the principles apply in their respective sectors. The government intends the framework to be “proportionate, light-touch and forward-looking” to ensure that it can keep pace with developments in these technologies, and so that it can “support responsible innovation in AI – unleashing the full potential of new technologies, while keeping people safe and secure”. This balance is aimed at ensuring that the UK is at the forefront of such developments.

The government’s proposal is broadly in line with the MHRA’s current approach to the regulation of AI. In the MHRA’s response to the consultation on the medical devices regime in the UK post-Brexit, it announced similarly broad-brush plans for regulating AI-enabled medical devices. In particular, no definition of AI as a medical device (AIaMD) will be included in the new UK legislation, and the regime is unlikely to set out specific legal requirements beyond those being considered for software as a medical device. Instead, the MHRA intends to publish guidance that clinical performance evaluation methods should be used for assessing safety and meeting essential requirements of AIaMD, and has also published the Software and AI as a Medical Device Change Programme to provide a regulatory framework with a high degree of protection for patients and the public.
Continue Reading UK Policy Paper on regulation of AI

The UK’s Medicines and Healthcare products Regulatory Agency (MHRA), the US Food and Drug Administration (FDA) and Health Canada have recently published a joint statement identifying ten guiding principles to help inform the development of Good Machine Learning Practice (GMLP). The purpose of these principles is to “help promote safe, effective, and high quality medical devices that use artificial intelligence and machine learning (AI/ML)”.

The development and use of medical devices that use AI and ML have grown considerably over the last few years and will continue to do so. It has been recognised that such technologies have the potential to transform the way in which healthcare is delivered globally, through the analysis of vast amounts of real-world data from which software algorithms can learn and improve. However, as these technologies become more complex and nuanced in their application, this brings into question how they should be overseen and regulated. Crucially, it must be ensured that such devices are safe and beneficial to those who use them, whilst recognising associated risks and limitations.
Continue Reading Ten International Guiding Principles on Good Machine Learning in Medical Devices

The use of artificial intelligence (AI) and machine learning is growing at a significant pace and spreading across many industry sectors, including healthcare. With the rapid development of AI technology, which has the potential to revolutionise many aspects of our lives, including how healthcare services are provided and received, the concept of “creations of the mind” is no longer limited to creations by a human being. These technological developments mean that the legal framework governing intellectual property (IP) rights such as patents and copyright, which protect “creations of the mind”, may need to be adjusted to address the changes and impacts brought about by the use of AI.

In line with the UK government’s ambition for the UK to be a leader in AI, and to better understand both the implications AI might have for IP policy and the impact IP might have on AI in the short to medium term, the UK IPO conducted a public consultation at the end of 2020. The aim of the consultation was to seek responses on a range of questions relating to AI and IP rights. The UK IPO received 92 responses from a wide range of stakeholders, including IP rights holders, producers of AI technology and academia. The government’s response to the call for views on AI and IP was published in March 2021, in which reforms to patent and copyright law and policy were discussed.

In this blog, we summarise the UK government’s conclusions from the consultation before considering the potential impact on digital health applications and companies.
Continue Reading AI and IP: Implications for digital health from possible reforms to UK IP law