There is currently no specific legislation in the UK that governs AI, or its use in healthcare. Instead, a number of general-purpose laws apply. These laws, such as the rules on data protection and medical devices, have to be adapted to specific AI technologies and uses. They sometimes overlap, which can cause confusion for businesses trying to identify the relevant requirements that have to be met, or to reconcile potentially conflicting provisions.

As a step towards a clearer, more coherent approach, on 18 July, the UK government published a policy paper on regulating AI in the UK. The government proposes to establish a pro-innovation framework of principles for regulating AI, while leaving regulatory authorities discretion over how the principles apply in their respective sectors. The government intends the framework to be “proportionate, light-touch and forward-looking” to ensure that it can keep pace with developments in these technologies, and so that it can “support responsible innovation in AI – unleashing the full potential of new technologies, while keeping people safe and secure”. This balance is aimed at ensuring that the UK is at the forefront of such developments.

The government’s proposal is broadly in line with the MHRA’s current approach to regulating AI. In its response to the consultation on the post-Brexit UK medical devices regime, the MHRA announced similarly broad-brush plans for regulating AI-enabled medical devices. In particular, no definition of AI as a medical device (AIaMD) will be included in the new UK legislation, and the regime is unlikely to set out specific legal requirements beyond those being considered for software as a medical device. Instead, the MHRA intends to publish guidance on the clinical performance evaluation methods to be used for assessing the safety of AIaMD and meeting its essential requirements, and has also published the Software and AI as a Medical Device Change Programme to provide a regulatory framework with a high degree of protection for patients and the public.


The UK’s Medicines and Healthcare products Regulatory Agency (MHRA), the US Food and Drug Administration (FDA) and Health Canada have recently published a joint statement identifying ten guiding principles to help inform the development of Good Machine Learning Practice (GMLP). The purpose of these principles is to “help promote safe, effective, and high quality medical devices that use artificial intelligence and machine learning (AI/ML)”.

The development and use of medical devices that use AI and ML have grown considerably over the last few years and will continue to do so. It has been recognised that such technologies have the potential to transform the way in which healthcare is delivered globally, through the analysis of vast amounts of real-world data from which software algorithms can learn and improve. However, as these technologies become more complex and nuanced in their application, questions arise as to how they should be overseen and regulated. Crucially, it must be ensured that such devices are safe and beneficial to those who use them, whilst their associated risks and limitations are recognised.


The use of artificial intelligence (AI) and machine learning is growing at a significant pace and spreading across many industry sectors, including healthcare. With the rapid development of AI technology, which has the potential to revolutionise many aspects of our lives, including how healthcare services are provided and received, the concept of “creations of the mind” is no longer limited to creations by a human being. These technological developments mean that the legal framework governing intellectual property (IP) rights such as patents and copyright, which protect “creations of the mind”, may need to be adjusted to address the changes and impacts brought about by the use of AI.

In line with the UK government’s ambition for the UK to be a leader in AI, and to better understand the implications AI might have for IP policy, as well as the impact IP might have on AI in the short to medium term, the UK IPO conducted a public consultation at the end of 2020. The aim of the consultation was to seek responses on a range of questions relating to AI and IP rights. The UK IPO received 92 responses from a wide range of stakeholders, including IP rights holders, producers of AI technology and academia. The government’s response to the call for views on AI and IP was published in March 2021, in which reforms to patent and copyright law and policy were discussed.

In this blog, we summarise the UK government’s conclusions from the consultation before considering the potential impact on digital health applications and companies.
