Thank you to all who joined us for our December 13 panel, "The Race to Regulate." In case you missed it, you can unpack this year's pivotal legal challenges impacting the 2023 — and 2024 — digital legal landscape in our Year in Review Pocket Book.
MedTech Europe Proposes Changes to the IVDR and MDR. On November 7, the European trade association for the medical technology industry, MedTech Europe, published a position paper proposing changes to the In-Vitro Diagnostic Medical Devices Regulation (IVDR) and Medical Devices Regulation (MDR). In the position paper, MedTech Europe outlines what it believes are the structural issues with the regulations, stressing that they hamper innovation. The structural issues identified are:
- The unpredictability and inefficiency of the certification processes, in terms of the information expected from companies, the requirements, and the timelines
- The inefficiencies caused by the existing decentralized system of notified bodies
According to MedTech Europe, these issues risk widening the gap in access to medical technology, and it proposes certain measures, in particular:
- Introducing an efficient CE marking system that guarantees access to devices and innovations, including solutions such as cutting down on bureaucracy or fully digitizing the EU system to allow digital labelling
- Incorporating an innovation principle, including solutions such as creating accelerated assessment pathways for medical technology innovations addressing unmet medical needs, or pre-certification access models
- Introducing an accountable governance structure that is able to coordinate and manage the decentralized network of notified bodies, take system-level decisions, issue guidance, and represent the bodies within Europe and globally
The MDCG Issues Revised Position Paper on Compliance With the MDR and IVDR. On November 29, the Medical Device Coordination Group (MDCG) published a revised version of its June 2022 notice to manufacturers and notified bodies on ensuring timely compliance with MDR and IVDR requirements. In the position paper, the MDCG calls on manufacturers to transition to the regulations and submit their certification applications as soon as possible, as delaying submissions could create a backlog of requests at notified bodies, resulting in delays and, ultimately, product shortages. The call particularly urges manufacturers of class D IVD devices, which must transition to the IVDR by May 2025.
In addition, and in line with some of the industry recommendations above, the MDCG calls on notified bodies to make the certification process more efficient, transparent, and predictable, and highlights the importance of properly guiding and assisting manufacturers through the conformity assessment application. The MDCG also calls on notified bodies to regularly provide data on the state of certifications and to increase transparency about their capacity and timelines, ideally on a common website compiling this information for all notified bodies in Europe.
Updates on the Regulation of AI in the UK. On November 16, the UK government published its response to the interim report from the Science, Innovation and Technology Committee dated August 31, 2023 (discussed in our September digest). The interim report highlighted 12 key challenges in relation to the governance of AI, and the government's response sets out its progress in addressing these challenges, as well as the actions set out in its white paper published in March 2023 (see our April digest). The most notable updates are:
- The intent not to introduce new AI-specific legislation at this stage, and to continue an evidence-based and iterative approach to regulation
- The establishment of a “Central AI Risk Function” within the Department for Science, Innovation and Technology to identify and monitor developing risks from AI and coordinate their mitigation using broad expertise
- The plan to pilot a multi-agency advice service known as the “DRCF AI and Digital Hub” for innovators of AI technologies to access tailored support from multiple regulators simultaneously
- The establishment of the “AI Safety Institute” (previously called the Frontier AI Taskforce) to provide insights into the capabilities and risks of frontier AI and foundation models
The government’s response to the AI white paper consultation, with updates on its regulatory approach to AI, is expected before the end of 2023.
On the topic of AI regulation, on November 23, a Private Members' Bill was introduced in the House of Lords. The main purpose of the bill is to establish a central AI authority to coordinate and monitor the regulatory approach to AI, while promoting transparency, reducing bias, and balancing regulatory burden against risk. This largely tracks the government's white paper, but seeks to enshrine its terms in law. While only a minority of Private Members' Bills become legislation, it is clear there is a growing debate in the UK about whether the proposed approach to regulation is the correct one.
European Parliament Agrees on Text of the EHDS Regulation. On November 28, the members of the European Parliament working on the European Health Data Space regulation reached an agreement on the text of the regulation. The agreed text aims to promote the use of aggregated health data for public interest reasons, but introduces limits on the use of these data, including bans on certain uses (e.g., in advertising or sharing with third parties), and makes access subject to a request to national bodies.
The agreed text includes the need to obtain explicit permission from patients to use aggregated sensitive health data, provides patients with an opt-out mechanism for other health data, and gives them the option to challenge a decision of a health data access body, either personally or through a non-profit organization acting on their behalf. In addition, the agreed text underlines the importance of providing for sanctions in case of misuse of personal health data and includes an obligation to store health data in the EU. The text will have to be formally adopted by the European Parliament in a plenary vote in December and, if approved, will then need to be adopted by the Council.
Council of the European Union Adopts Data Act. On November 27, the Council of the European Union formally adopted the regulation on harmonized rules on fair access to and use of data (Data Act), following the formal adoption by the European Parliament on November 9. The Data Act aims to make data more accessible and ensure fair access and use, and establishes harmonized rules on sharing data generated through the use of connected products and services. The adopted text includes measures related to:
- Trade secrets, including a definition and adequate safeguards
- Data sharing, including measures to prevent abuse of contractual imbalances in data sharing contracts, safeguards against unlawful data transfers, and the possibility for the European Commission, the European Central Bank, and EU bodies to access and use data held by the private sector in cases of public emergency or public interest
- Governance, including an option for member states to have a data coordinator authority, which would act as a single point of contact
The Data Act will now be published in the EU Official Journal in the coming weeks and will enter into force 20 days after its publication. Note that the new rules will apply 20 months after the Data Act's entry into force.
Global Guidelines for AI Security Published. On November 27, the UK's National Cyber Security Centre published its Guidelines for Secure AI System Development, which were developed in collaboration with the U.S. Cybersecurity and Infrastructure Security Agency. The guidelines have been endorsed by cybersecurity agencies from 16 additional countries, including France, Germany, and Japan, and are intended to help developers make informed cybersecurity decisions at all stages of the development process and beyond. The guidelines are split into four key areas (secure design, secure development, secure deployment, and secure operation and maintenance), with suggested considerations and mitigations to help improve security at each stage of the AI system life cycle. The guidelines are voluntary, but all stakeholders are urged to read and take account of them. It is possible that the guidelines will inform the minimum cybersecurity requirements expected to be imposed through the proposed EU AI Act and AI Liability Directive.