This digest covers key virtual and digital health regulatory and public policy developments during March 2024.

You will note that the EU institutions have been busy during March. On March 12, 2024, the European Parliament (EP) formally adopted the revised Product Liability Directive, which makes several important changes to the existing European Union (EU) product liability regime, including that software and artificial intelligence (AI) technologies will now fall within the scope of a product. On March 13, 2024, the EP formally adopted the Artificial Intelligence Act, meaning the legislative process for the world’s first binding law on AI is nearing its conclusion. Finally, on March 15, 2024, the Council of the European Union and the EP reached a provisional agreement on the European Health Data Space (EHDS), which aims to improve electronic access to health data across the EU. Each of these important legislative measures should shortly be finalized and will then become law in the EU.

Regulatory Updates

European Parliament Adopts the AI Act. On March 13, 2024, the Members of the European Parliament formally adopted the Artificial Intelligence Act (AI Act). Following a lengthy negotiation period since the initial proposal by the European Commission in April 2021, the legislative process for the world’s first binding law on AI is nearing its conclusion. For further details on the negotiations surrounding the text of the AI Act, see our January 2023 Advisory.

There are several provisions of the AI Act that are worth mentioning; in particular:

  • Harmonized rules for placing on the market, putting into service, and using AI systems in the EU, such as producing codes of practice for low-risk AI systems; making the placing on the market of high-risk AI systems (which will include medical devices) subject to the presentation of technical documentation proving compliance with requirements; and mandatory registration in the EU database prior to the use of certain AI systems
  • Prohibitions of certain AI practices considered to threaten citizens’ rights, such as biometric categorization systems based on sensitive characteristics, with exceptions for AI systems intended strictly for medical purposes
  • Specific requirements for high-risk AI systems and their operators, such as classifying AI systems according to their potential risks and level of impact and imposing strict requirements on high-risk AI systems, including medical devices (although there are provisions that acknowledge parallel requirements under the EU Medical Devices Regulation)
  • Harmonized transparency rules for certain AI systems, such as specifying the minimum information that the instructions of use accompanying high-risk AI systems must contain; requiring disclosure when content has been artificially generated or manipulated; marking the outputs of AI systems in a machine-readable format and detectable as artificially generated or manipulated; and complying with EU copyright law
  • Rules on market monitoring, market surveillance governance, and enforcement, for instance, obligations to report serious incidents to the authorities
  • Measures to foster innovation, such as establishing regulatory sandboxes and real-world testing to facilitate the development and testing of innovative AI before it is placed on the market

The legislation extends its jurisdiction broadly, applying to AI systems operating within the EU and to AI systems outside the EU whose output is introduced into the EU market. Therefore, companies at every stage of the AI development process must carefully monitor the AI Act’s provisions, irrespective of where they are located geographically. The AI Act now awaits formal adoption by the Council of the EU, expected to take place in April, before it can become law.

DHSC and MHRA Accept Recommendations to Tackle Biases in Medical Devices. On March 11, 2024, the UK government’s Department of Health and Social Care (DHSC) and Medicines and Healthcare products Regulatory Agency (MHRA) set out their planned measures to address racial, ethnic, and other biases in the design and use of medical devices. The measures are in response to an independent report on “equity in medical devices,” which was commissioned by DHSC over concerns that pulse oximeters were not as accurate for patients with darker skin tones. The report made 18 recommendations in order to tackle potential bias, which the DHSC and MHRA have fully accepted. The MHRA will now request that applicants describe how bias will be addressed in applications for approvals of medical devices and will publish strengthened guidance for developers on how to improve diversity in the development and testing stages. The DHSC will also support work to remove racial bias in data used in clinical studies and improve the transparency of data used in the development of medical devices using AI.

Second Reading of Private Members’ Bill in the UK’s House of Lords on the Topic of AI Regulation. On March 22, 2024, a Private Members’ bill, called the Artificial Intelligence (Regulation) Bill, had its second reading in the House of Lords. The bill was first introduced in the House of Lords on November 23, 2023. The main purpose of the bill is to establish a central AI authority to coordinate and monitor the regulatory approach to AI, while promoting transparency, reducing bias, and balancing regulatory burden against risk. Although the bill largely tracks the white paper setting out the government’s pro-innovation approach to the regulation of AI, it seeks to introduce those provisions into law. In contrast, and as set out in previous digests, it is currently not the government’s intention to introduce AI-specific legislation; it instead intends to develop a set of core principles for regulating AI while leaving regulatory authorities, like the MHRA, discretion over how the principles apply in their respective sectors. A briefing report, published on March 18, 2024, describes the bill in more detail, and a full debate was held at the second reading. The bill will now move to the committee stage in the House of Lords, where it will be scrutinized line by line. It will then proceed through a number of additional stages within the House of Lords, prior to repeating the process within the House of Commons. Only a minority of Private Members’ bills become legislation, and it is unlikely that this bill will become law given the government’s position. Even so, it is clear there is a growing debate in the UK about whether the government’s proposed approach to AI is correct.

Privacy Updates

Council and European Parliament Reach Provisional Agreement on the EHDS. On March 15, 2024, the Council of the EU and the EP reached a provisional agreement on the regulation creating an EHDS. The regulation aims to improve access to health data electronically across the EU. It is important to note that the text of the agreement has not yet been published and that this summary is solely based on press releases from March 15, 2024 and March 22, 2024. More details on the provisional agreement are included in our March 2024 blog. The agreement now needs to be formally adopted by the Council of the EU and the EP before it can become law.

Key elements of the agreement that are worth mentioning include:

  • Broad definition of health data, including health records; clinical trial data; health claims and reimbursement information; pathogen genetic and other human molecular data; and aggregated data on health care resources, expenditure, and financing
  • Limits to access to health data:
    • Permission is necessary prior to accessing data and is granted by a health data access body.
    • Patients have the right to object to secondary use of their data, subject to certain conditions (i.e., an opt-out mechanism), except when requested by a public body for public interest purposes; Member States may introduce further measures for certain data (e.g., genomic data).
    • Measures are in place in case of non-compliance, such as revoking data permits, excluding access to the EHDS for up to five years, or imposing periodic penalty payments.
    • Patients will be informed every time their data is accessed, and information about the data applicant, the purposes for accessing the data, the expected benefit, safeguards, and the justified estimated processing period will be made public.
  • Limits to sharing of health data:
    • Data can only be shared in an anonymized or pseudonymized format to third parties mentioned in the data permit and only for public interest purposes (such as research and innovation).
    • Data is not permitted to be shared for advertising or assessing insurance requests.
    • Member States may introduce stricter measures regarding access to specific types of sensitive data (such as genetic, epigenomic and genomic data, and human molecular data).
  • The secondary use of electronic health data covered by IP and regulatory data protection rights, as well as trade secrets, is possible if it follows principles outlined in the regulation (for instance, informing the health data access body and justifying what exactly needs protection).
  • Health data transfers to third countries must comply with General Data Protection Regulation requirements and additional measures will be specified in a Delegated Act; data must be stored in the EU or in a country subject to a data protection adequacy decision by the European Commission.
  • A stakeholder forum will provide input on the EHDS and facilitate cooperation to ensure implementation.

DARWIN EU Calls for New Data Partners To Add to Its Network. On March 6, 2024, the European Medicines Agency (EMA) announced that the Data Analysis and Real World Interrogation Network (DARWIN EU) will expand and is looking for 10 new data partners to add in 2024. DARWIN EU is a coordination center created by the EMA and the European Medicines Regulatory Network to provide timely and reliable real-world evidence (RWE) on the use, safety, and effectiveness of medicines for human use from real-world health care databases across the EU. The results of the studies carried out are made public in the new Heads of Medicines Agencies-European Medicines Agency Catalogue of RWD studies. DARWIN EU obtains anonymized patient data from data partners, who generate RWE from sources such as hospitals, primary care, health insurance, registries, and biobanks to support regulatory activities of EMA’s scientific committees and national regulators in the EU. At the moment, DARWIN EU’s data partners include 20 public or private institutions from 13 European countries. The call to become a data partner is open to any data custodian in Europe and will remain open continuously until the end of 2024. However, applications will be reviewed and selected twice a year; applications received on or before April 30 and October 31, 2024, will be considered.

Reimbursement Updates

Flash Glucose Monitoring Systems Can Now Be Prescribed and Reimbursed in an Italian Region. The Italian Lombardy Region has adopted Resolution No. XII/1827, introducing new regional eligibility criteria allowing Flash Glucose Monitoring Systems to be prescribed and reimbursed. The Flash Glucose Monitoring Systems are recommended in the resolution for patients with decompensated type 1 diabetes mellitus and subjects with non-decompensated type 1 diabetes mellitus; patients with type 2 diabetes mellitus on basal insulin therapy; and, for a limited period of three months, patients with type 2 diabetes receiving oral hypoglycemic therapy. It is the first time that glucose monitoring devices have been reimbursed in Europe for type 2 diabetic patients on oral hypoglycemic therapy. Flash Glucose Monitoring Systems allow diabetic patients to self-monitor their health status.

Product Liability Updates

Revised EU Product Liability Directive One Step Closer To Becoming Law. On March 12, 2024, the EP formally adopted the revised Product Liability Directive (PLD). The Council of the EU is now expected to do the same without further amendments, after which the PLD will be published and will enter into force. This follows the provisional trialogue agreement reached on December 14, 2023 between the European Commission, EP, and Council (discussed in our January 2024 digest). The revised PLD makes several important changes to the existing EU product liability regime. For example, software and AI technologies will now fall within the scope of a product, there will be disclosure obligations on manufacturers, and the burden of proof for claimants has been alleviated through the introduction of several rebuttable presumptions. The EU Member States will have 24 months to transpose the measures into national law from the date the revised PLD enters into force.

IP Updates

Getty Images v. Stability AI: Getty Files Its Reply to Stability AI’s Defense. As we reported in our January 2023 digest, there is a significant ongoing case between Getty Images and Stability AI in the UK High Court testing the boundaries of copyright infringement. In particular, the case will consider whether the use of a dataset obtained by the scraping of millions of images from websites owned by Getty Images (which also included images from a number of other well-known sources, including Pinterest, Flickr, Wikimedia, and Tumblr) to train Stability AI’s text-to-image generator, Stable Diffusion, amounts to copyright infringement in the UK. Getty Images also claims that the output produced by Stable Diffusion reproduces a substantial part of Getty Images’ copyrighted works, amounting to a separate act of copyright infringement. There are also claims of database right infringement and trademark infringement, which we do not discuss further in this update.

In the latest pleading filed by Getty Images on March 28, 2024, Getty Images disputes Stability AI’s allegation that there is a separate database and, instead, it asserts that the calculations described in Stability AI’s defense are part of the AI model itself. Getty Images maintains that, to generate synthetic images, an AI model must learn from images contained within the datasets on which it is trained. In relation to the output generated by Stability AI, Getty Images argues that Stability AI has a high degree of control over the features of Stable Diffusion and that it has failed to design the model so as to prevent it from generating synthetic image outputs which comprise a reproduction of a substantial part of the input image and which therefore infringe Getty Images’ copyright. Getty Images also intends to rely on indirect and/or “subconscious” copying arising from the fact that Stable Diffusion was trained on the copyrighted images. We will continue to monitor this high-profile case.