This digest covers key virtual and digital health regulatory and public policy developments during February 2024.

Of note, the UK continues to pursue a “pro-innovation,” flexible approach to the regulation of AI. As outlined in the UK government’s response to the public consultation, the government will develop a set of core principles for regulating AI, while leaving regulatory authorities, such as the Medicines and Healthcare products Regulatory Agency (MHRA), discretion over how the principles apply in their respective sectors. A central governmental function will coordinate regulation across sectors and encourage collaboration. The government’s aim is for this approach to keep the UK flexible enough to address the changing AI landscape, while remaining robust enough to address key concerns. This is in sharp contrast to the position in the EU, where the EU AI Act is reaching the conclusion of the legislative process.

Regulatory Updates

UK Government Publishes Further Details on the Regulation of AI in the UK. On February 6, 2024, the UK government published its response to its consultation on regulating AI in the UK. Consistent with its initial consultation, the government proposes to establish a “pro-innovation” framework of principles for regulating AI, while leaving regulatory authorities, such as the MHRA for medicines and devices, discretion over how the principles apply in their respective sectors. According to the government, this approach will enable the UK to remain flexible enough to keep pace with the speed at which AI is developing, while being robust enough to address key concerns.

To assist in this process, the government has committed to providing regulators with funding to train and upskill their workforce to deal with AI and also to develop tools to monitor and address risks and opportunities. In addition, the government has proposed, and has already started to establish, a new central function to coordinate regulatory activity and help address regulatory gaps. The initial key roles of the central function will be to increase coherence between regulators, promote information sharing, and publish cross-sectoral guidance.

The consultation response concludes with the government setting out a roadmap of next steps on AI regulation in 2024. In addition to the measures discussed above, these include collaborating with the AI Safety Institute to address the risks of AI and sharing knowledge with international partners. Although development of a regulatory sandbox is not included in this list of next steps, the response notes that the majority of respondents stated that health care and medical devices would benefit most from an AI sandbox, and it will likely be left to individual regulatory authorities to develop sector-specific sandboxes. The MHRA has already announced its intention to launch a regulatory sandbox, called the “AI-Airlock,” in April 2024 for software and AI medical devices. Further information can be found in our February 22, 2024 blog post.

IMDRF SaMD Working Group Opens Public Consultation on Considerations for Device and Risk Characterization for Medical Device Software. On February 2, 2024, the Software as a Medical Device (SaMD) Working Group of the International Medical Device Regulators Forum (IMDRF) published a Proposed Document titled “Medical Device Software: Considerations for Device and Risk Characterization.” The guidance will apply to the subset of software that meets the definition of a medical device, as defined by the IMDRF. Its purpose is to promote and inform clear, accurate characterizations of medical device software and to introduce a general strategy for characterizing software-specific risks that leverages the key features of a comprehensive medical device software characterization. The IMDRF guidance is referred to in the EU Medical Device Coordination Group guidance and in the ongoing consultation on medical devices in the UK, so the guidance, once finalized, is likely to have implications for the EU and UK approach. The working group is inviting comments and feedback from the public until May 2, 2024.

European Parliament Informally Adopts Provisional Agreement on the AI Act. On February 13, 2024, the Members of the European Parliament voted in favor of the provisional agreement reached with the Council of the European Union (Council) on December 9, 2023, on the Artificial Intelligence Act (AI Act), discussed in our January 2024 digest. The text must now be formally adopted by the European Parliament and the Council, which is expected shortly. The AI Act is then expected to become law later in 2024 and to apply two years after that, except for certain provisions that will apply earlier.

European Artificial Intelligence Office Established. On February 14, 2024, the European Commission’s decision of January 24, 2024, establishing the European Artificial Intelligence Office (AIO), was published in the Official Journal of the European Union. The decision forms part of the European Commission’s package of measures to deliver on the twin objectives of promoting the uptake of AI and of addressing the risks associated with certain uses of such technology. The decision entered into force on February 21, 2024, after which the AIO began its operations. The AIO will support the development and use of trustworthy AI, while protecting against AI risks. The AIO was established within the European Commission as the center of AI expertise and forms the foundation for a single European AI governance system.

EU Council Endorses Extension to IVDR Transition Periods and Accelerated Launch of Eudamed. On February 21, 2024, the Council of the European Union endorsed the European Commission proposal to amend the Medical Device Regulation (EU) 2017/745 and the In Vitro Diagnostic Medical Device Regulation (EU) 2017/746 (IVDR), as applicable, to extend the transition provisions for certain in vitro diagnostic medical devices under the IVDR; allow for a gradual roll-out of Eudamed so that certain modules will be mandatory from late 2025; and include a notification obligation in case of interruption of supply of a critical device. The details are discussed in our previous February 2024 digest and in our February 7, 2024 blog post. The text will now need to be formally adopted by the EU Parliament and Council.

UK DHSC Announces £10 Million in Funding for Innovative Medical Devices. On February 14, 2024, the UK Department of Health and Social Care announced it will provide a £10 million funding package to support eight health tech companies bringing their innovative medical devices to market. The funding forms part of the UK’s Innovative Devices Access Pathway (IDAP) pilot scheme. As discussed in our October 2023 digest, the IDAP initiative was launched to accelerate the development of innovative medical devices and to help bring those technologies to the NHS. The companies include Avegen Ltd., which is being supported in its development of a smartphone app for multiple sclerosis fatigue that delivers exercises, cognitive behavior therapy, and targeted physical activity in a personally customizable format, and Presymptom Health Ltd., which has developed a new test and algorithm with the potential to predict infection status up to three days before conventional diagnosis is possible. These eight companies will also receive ongoing support from UK government bodies, including the MHRA, to help accelerate the process of obtaining regulatory approval.

UK Advertising Regulator Upholds Complaints Against Advertising of Two Digital Health Apps. On February 21, 2024, the UK Advertising Standards Authority (ASA) issued rulings against two digital health app developers, finding that each app was marketed as a medical device without the requisite conformity marking. The ASA is responsible for enforcing the UK Code of Non-Broadcast Advertising and Direct & Promotional Marketing (CAP Code), a self-regulatory code governing consumer advertising in the UK.

The ASA held that the advertising for the Impulse Brain Training app implied that the app could diagnose Attention Deficit Hyperactivity Disorder (ADHD). Similarly, the ASA found that the advertising for the Happyo app amounted to claims that it could diagnose and treat ADHD, as well as alleviate the symptoms of ADHD. As such, the respective claims were medical claims that presented each app as a medical device despite the apps not having the appropriate conformity marking. The ASA also held that the advertising for each app breached the CAP Code provision that advertisers must not discourage consumers from seeking essential treatment for a condition for which medical supervision should be sought.

These two rulings serve as a reminder to digital health app providers that the ASA can also take enforcement action over medical claims made with respect to such products; enforcement is not the exclusive jurisdiction of the UK medical devices regulator, the MHRA. Although the ASA’s powers are limited compared to those of the MHRA, the ASA is generally a more active regulator, and these rulings may indicate greater scrutiny of claims for health apps. The ASA may also refer a matter to the MHRA for enforcement if a company continues to make unlawful medical claims regarding an app despite a negative ASA ruling.

UK and France Announce New Funding to Further Global AI Safety. On February 29, 2024, the UK and France announced a new partnership to boost research collaboration and further global AI safety. Alongside £800,000 of new funding for cutting-edge research, UK and French ministers unveiled a landmark partnership between the UK AI Safety Institute and France’s Inria (the National Institute for Research in Digital Science and Technology) to jointly support the safe and responsible development of AI technology. On the same day, the French-British joint committee on Science, Technology and Innovation met for the first time. It will continue to meet every two years to discuss a variety of opportunities for shared research and collaboration, from low-carbon hydrogen and space observation to AI and research security.

Privacy Updates

UK ICO Reminds App Developers to Comply With Data Privacy Laws. On February 8, 2024, the UK Information Commissioner’s Office (ICO) issued a reminder to app developers to comply with data protection laws and protect the privacy of their users. The reminder follows a review conducted by the ICO in 2023 into how various period and fertility apps processed personal data and the impact of such processing on users. Although the ICO states that “no serious compliance issues or evidence of harms were identified” in the review, the review made clear that there was room for improvement in how app developers meet their privacy obligations, especially for health apps, where the data are particularly sensitive. The ICO provided four tips to app developers to ensure compliance:

  • Be transparent. Developers should clearly and concisely explain the purposes for processing a user’s data, the retention periods, and who the data will be shared with. This information should be easily accessible to the user.
  • Obtain valid consent. Users must provide explicit and unambiguous consent to processing of their data, with a clear action to opt in. Default methods (e.g., a pre-ticked box) are not appropriate. Users must also be able to easily withdraw their consent.
  • Establish the correct lawful basis. Developers should carefully consider the legal basis (consent, contract, legitimate interests) under which they will process the data. The legal basis should be specific to each purpose of processing and not adopted on a blanket basis.
  • Be accountable. Developers must be accountable for complying with their obligations under relevant data protection laws.

The Department for Science, Innovation and Technology has also published a code of practice for app store operators and app developers, which builds upon some of these core principles.

UK Government Publishes Guidance on AI Assurance. On February 12, 2024, the UK government published new guidance on AI assurance to help industry and regulators build and monitor trustworthy and responsible AI systems. The guidance sets out a range of techniques for businesses to measure, evaluate, and communicate that their technologies are trustworthy and comply with the core principles proposed by the UK government in its white paper in March 2023 (and endorsed in the government response to the consultation discussed above). Businesses are encouraged to routinely assess the risks and impact of bias and data protection by employing a range of assurance tools and using global technical standards. They should also put in place various internal policies and processes, such as those addressing data collection, processing, and sharing; risk mitigation; key staffing responsibilities; and avenues for staff to escalate concerns. The guidance concludes with five key actions for organizations:

  • Consider existing regulations applicable to AI systems (e.g., UK GDPR).
  • Train the organization’s workforce.
  • Review internal governance and risk management.
  • Monitor publication of new regulatory guidance.
  • Participate in the development of AI standards.

EFPIA Raises Concerns Over the Negotiation of the EHDS Text. On February 26, 2024, the European Federation of Pharmaceutical Industries and Associations (EFPIA) expressed concerns about the ongoing negotiations regarding the text of the regulation establishing a European Health Data Space (EHDS). EFPIA had already raised concerns regarding the regulation in the preceding months (see our January 2024 digest).

EFPIA highlighted the haste with which the European legislators (European Parliament and Council of the European Union) are seeking to finalize the regulation before the European elections taking place in June 2024. It urged legislators to take the time necessary to finalize the regulation so as to ensure the quality and robustness of the legal instrument that will form the basis of the EHDS.

Among its concerns, EFPIA echoed the worry shared within the European health care ecosystem that the EHDS lacks the required level of legal certainty and consistency with existing regulatory frameworks.

EFPIA also pointed out key issues that have not been adequately addressed, including:

  • Unclear and incoherent definitions regarding the type of data and actors involved in the EHDS
  • Lack of clarification on the interaction between the EHDS and other legal frameworks
  • Failure to reduce legal fragmentation or ensure harmonization and incentivize consistent implementation
  • Absence of specifications regarding the scope of electronic health data for secondary use
  • The need, with respect to opt-in/opt-out mechanisms, for the regulation to allow only an opt-out mechanism, and only where there is no risk of inconsistent implementation or of health data disparities
  • Lack of incentives for health research and innovation
  • Failure to leverage existing health data infrastructures
  • Absence of measures to avoid excessive data localization and barriers to international health data transfers
  • Failure to involve all relevant health stakeholders