Welcome to the latest installment of Arnold & Porter’s Virtual and Digital Health Digest. This digest covers key virtual and digital health regulatory and public policy developments during February and early March 2026 from the United Kingdom and European Union.

February 2026 saw a period of substantial regulatory activity across both the UK and EU, particularly in relation to AI governance, medical technologies, and data protection. In the UK, the policy landscape continued to evolve with initiatives affecting the regulation of medical devices, clinical research, and AI deployment. Key developments included the Medicines and Healthcare products Regulatory Agency’s (MHRA) consultation on the indefinite recognition of CE-marked medical devices, record levels of medical device testing, and the Prescription Medicines Code of Practice Authority’s (PMCPA) revised guidance on the use of social media. AI remained a major focus in the UK, with the UK government’s response to the consultation on the AI Management Essentials tool, increased industry involvement in the UK AI Security Institute’s alignment program, and feedback relating to governmental research on AI adoption across UK businesses. Additional international collaboration efforts included UK engagement at the India AI Impact Summit and an expanded science and technology partnership with Japan, as well as the launch of the first-ever AI Strategy for UK Research and Innovation.

At the EU level, regulatory activity centered predominantly on data protection, with the adoption of several important outputs from the European Data Protection Board (EDPB) and European Data Protection Supervisor (EDPS). These included a joint opinion on the European Commission’s proposed Digital Omnibus amendments, a report following public consultation on anonymization and pseudonymization, and the publication of the EDPB’s 2026-2027 work program. These developments indicate a renewed emphasis on maintaining high standards of data protection while ensuring clarity for organizations navigating complex digital and AI-driven ecosystems.

In parallel, the UK implemented major reforms to its domestic data protection framework through the Data (Use and Access) Act 2025, which entered into force this month. Together, these UK and EU developments highlight a regulatory environment increasingly focused on the safe deployment of advanced technologies, the strengthening of data protection safeguards, and the continued modernization of medical device oversight.

Regulatory Updates

PMCPA Publishes Revised Guidance for the Use of Social Media. The PMCPA has issued revised guidance on the use of social media, reflecting the rising number of code breaches linked to online activity and the growing complexity of digital engagement. The update replaces the 2023 version with a redesigned, web-based format that includes Q&As, practical examples, and links to PMCPA cases, making it easier for companies to navigate the rules in an evolving social media environment. Notably, the guidance introduces expanded sections on clinical trial recruitment, responding to misinformation, news for an investor audience/the media, pharmacovigilance responsibilities, and engaging with influencers. For more details on the guidance, read our February 2026 BioSlice Blog.

MHRA Consultation on Indefinite Recognition of CE-Marked Medical Devices. The MHRA has launched a consultation seeking views on proposed changes to the recognition of EU CE-marked medical devices in Great Britain, as part of wider efforts to protect patient access to safe and effective medical technologies and refine the UK’s post-Brexit regulatory landscape. The proposals include: (1) extending the existing transitional arrangements for devices certified under the former Medical Devices Directive to align with the EU’s transition timelines under the Medical Devices Regulation 2017/745 (EU MDR), (2) the indefinite recognition of devices compliant with the EU MDR and the In Vitro Diagnostic Medical Devices Regulation 2017/746 (EU IVDR), and (3) a proposed international reliance route for devices that comply with the EU MDR or EU IVDR but are classified at a higher risk level under UK MDR 2002. For more details, read our February 2026 BioSlice Blog. In the meantime, the MHRA has published an infographic of the current timelines in place for placing CE-marked medical devices on the Great Britain market.

UK Medical Device Testing Hits Record High. The MHRA has announced that UK medical device testing reached a record high in 2025, with a 17% rise in approved clinical investigations. This growth has been driven by investments in neurotechnology and a surge in AI-powered medical devices. These developments form part of the MHRA’s broader work to promote innovation and remove barriers for smaller companies, including initiatives such as a fee waiver pilot, early market access to promising devices, and enhanced support for high-impact technologies.

MHRA Sponsors a New Standard on Clinical Studies for Digital Mental Health Technologies. The MHRA has sponsored the British Standards Institution to develop a standard providing recommendations for performing clinical studies to generate clinical evidence for digital mental health technologies. The MHRA intends for the standard to apply to the pre-market phase and to real-world data collection in the early implementation, post-market phase. The standard is likely to address factors such as controls, sample characteristics, safety, effectiveness, and engagement endpoints, as well as follow-up periods. A public consultation on a draft will take place in mid-2026.

The International AI Safety Report 2026 Published. Released on February 3, 2026, the report, which was led by Turing Award winner Yoshua Bengio and authored by over 100 international experts, provides a scientific assessment of general-purpose AI capabilities, focusing on three key questions: (1) what can general-purpose AI do today, (2) what emerging risks does it pose, and (3) how can those risks be mitigated. The report seeks to support policymakers in addressing the difficulties of gathering and evaluating evidence on the risks associated with rapidly developing and increasingly capable AI systems, a challenge described as the “evidence dilemma.” It highlights that performance remains uneven and “jagged,” with capabilities varying widely across tasks and contexts, as AI systems that deliver in controlled settings such as pre-deployment evaluations often perform less effectively in real-world conditions. In order for general-purpose AI to reach its full potential, the report emphasizes the need to prioritize the effective management of risks such as malicious use, malfunctions, and systemic disruption.

UK Government Publishes Response to Consultation on AI Management Essentials (AIME) Tool. In November 2024, the UK government sought feedback on AIME, a self-assessment tool that distils key principles from existing AI governance frameworks to help businesses establish robust governance and management practices for AI development and use. The consultation outcome was published on February 6, 2026. An analysis of 65 responses indicated that organizations view AIME as a valuable foundation for AI governance, although concerns were raised regarding its complexity for non-expert users, particularly small and medium enterprises (SMEs) that struggled to operate under the tool’s size- and occupation-agnostic approach. This feedback will inform both the refinement of the tool and the development of further guidance focused on the foundational governance measures necessary to support responsible AI deployment, with a specific emphasis on improving accessibility for SMEs.

OpenAI and Microsoft Join AI Security Institute’s Flagship Alignment Project. Contributions from OpenAI and Microsoft have increased the total funding available through the UK AI Security Institute’s initiative to more than £27 million, supporting international research aimed at enhancing the reliability and safety of AI systems. The project combines research funding, access to compute infrastructure, and ongoing academic mentorship to drive progress on alignment. The first Alignment Project grants have been awarded to 60 projects across eight countries, with a second round expected to open later this year.

UK Government Publishes Analysis of Research on AI Adoption. Consistent with the ambitions set out in the January 2025 AI Opportunities Action Plan to embed AI across the UK economy, the government conducted research to assess the use of AI among UK businesses. The study, published on February 13, 2026, and based on 3,500 interviews (weighted to reflect business size and sector), indicates that AI adoption remains modest, with only 16% of businesses using at least one AI technology and many citing a lack of identified need and limited AI skills as key barriers. Businesses reported the greatest difficulties when implementing agentic AI, while natural language processing and text generation presented comparatively fewer barriers. Among organizations that raised ethical concerns, these were regarded as the most significant obstacle to adoption, followed by high costs and regulatory uncertainty. While the research demonstrates varying levels of trust in AI systems, most organizations remain willing to explore new technologies, with 75% of businesses reporting that AI has increased workforce productivity.

UK and International Partners Support Commitment To AI at India AI Impact Summit. The UK government, together with international partners, has engaged in discussions on the potential for AI to drive growth, create new jobs, improve public services, and deliver benefits globally. These discussions form part of the UK’s broader collaboration with India to advance shared priorities in science, technology, and innovation. The New Delhi Declaration on AI, presented at the India AI Impact Summit, seeks to build an inclusive, accessible, and efficient global AI framework. The declaration has been endorsed by 92 countries, including the UK, and is expected to be signed at an international summit later this year.

UK and Japan Strengthen Science and Technology Partnership. On February 3, 2026, the UK and Japan announced a package of life sciences and technology collaborations, placing a strong emphasis on developing treatments for rare genetic diseases. The projects include an £11 million investment into drug manufacturing in the UK, undertaking joint quantum technologies research to address challenges in drug discovery, and a multi-year strategic partnership to establish a national pilot focused on transforming screening for rare diseases.

First-Ever AI Strategy for UK Research and Innovation. On February 19, 2026, the UK government announced the first-ever AI Strategy for the UK’s largest public research funder: UK Research and Innovation (UKRI). The investment is intended to ensure AI delivers “cutting-edge science and research efforts” in the UK. Under the new strategy, UKRI will provide up to £137 million as part of the government’s AI for Science Strategy to back AI-enabled scientific discovery, starting with drug discovery and new treatments. It will also help to deliver £36 million to upgrade the University of Cambridge’s “DAWN” supercomputer, supporting breakthroughs in areas such as healthcare and environmental modelling.

Privacy Updates

Implementation of UK Data (Use and Access) Act. The Data (Use and Access) Act 2025 (DUAA) represents the UK’s first major reform of data protection law since leaving the EU. On February 5, 2026, most of the data protection provisions of the DUAA came into force. The reforms expand the permitted use of automated decision-making, although this expansion does not extend to special categories of data such as health information. The standard for international transfers has also changed, from ensuring that UK General Data Protection Regulation (GDPR) protections are “not undermined” to requiring protection that is “not materially lower” than UK standards. For more details, see our February 2026 BioSlice Blog and May 2025 Advisory.

EDPB and EDPS Issue Joint Opinion on the European Commission’s Proposal To Amend the Digital Legislation (Digital Omnibus). The joint opinion, adopted on February 10, 2026, follows a formal consultation by the Commission on its proposal for a Digital Omnibus. (See our December 2025 Digest.) While supporting the efforts to reduce compliance burdens, the EDPB and EDPS stress that simplification must not weaken key safeguards of the EU GDPR. In particular, the EDPB and EDPS urge the European Parliament and Council of the European Union not to adopt: (1) the amended definition of personal data, which would assess identifiability based on the means reasonably available to the specific company and which, according to the joint opinion, could narrow the GDPR’s scope and create legal uncertainty, and (2) the proposal to include an exhaustive list of permitted cases for automated decision-making, whereas currently fully automated decision-making is prohibited. At the same time, the EDPB and EDPS support: (1) raising the threshold for personal data breach notifications to cases “likely to result in a high risk” to individuals’ rights, and (2) the development of EU-level Data Protection Impact Assessment (DPIA) tools, provided supervisory authorities retain primary responsibility. Further details on the joint opinion and Commission proposal can be read in our February 2026 Advisory.

EDPB Publishes Report on Results of Public Consultation on Anonymization and Pseudonymization. The report summarizes the feedback received during an event held in December 2025 to support the preparation of EDPB guidelines on anonymization and pseudonymization, following the Court of Justice of the European Union (CJEU) judgment in Case C-413/23 P. In that judgment, the CJEU clarified how identifiability must be assessed when determining whether pseudonymized data qualify as personal data. (See our October 2025 Digest and September 2025 BioSlice Blog.) Participants, who were mainly companies, highlighted the need for further guidance on joint controllership scenarios, controller-to-controller/third-party data sharing, and specific contexts such as clinical trials. Participants also requested clarification on when data processing agreements are required, the concept of “means reasonably likely to be used” to identify individuals, and the safeguards that can limit re-identification risks. Debate also arose on topics such as whether online identifiers should always be treated as personal data and whether a separate legal basis under Article 6 GDPR is required when transmitting pseudonymized data.

EDPB Publishes Its Work Program for 2026-2027. The work program aims to facilitate compliance with the EU GDPR and sets out the actions that the EDPB plans to undertake over the next two years. Key actions of the EDPB include developing guidance on AI, telemetry, and diagnostic data; further guidance on data anonymization; and developing guidelines on the interplay between the AI Act and the GDPR, as previously announced by the EDPB. The EDPB also expects to adopt guidance on data pseudonymization and on data processing for research purposes. In addition, the EDPB plans to publish practical templates to support SMEs, including templates for DPIAs, legitimate interest assessments, records of processing activities, and privacy notices and policies. The EDPB also intends to issue opinions on standard and ad-hoc contractual clauses.

IP Updates

UK Supreme Court Decision in Emotional Perception AI Limited v. Comptroller General of Patents, Designs and Trade Marks. On February 11, 2026, the UK Supreme Court handed down its much-anticipated judgment in Emotional Perception AI Limited v. Comptroller General of Patents, Designs and Trade Marks [2026] UKSC 3. Following the approach endorsed by the Enlarged Board of Appeal of the European Patent Office (EPO) in its G1/19 decision, the UK Supreme Court firmly rejected the long-standing four-step Aerotel test for assessing patentability in the UK, holding that it was not a good-faith implementation of the European Patent Convention (EPC). In doing so, the UK Supreme Court has now, at least in part, aligned the UK’s approach to computer-implemented inventions with that of the EPO.

The UK Supreme Court has also confirmed that Artificial Neural Networks constitute a “program for a computer” and thereby fall within the exclusion from patentability under Article 52(2)(c) EPC. Whether the claimed subject matter falls within that exclusion depends on the application of the “any hardware” approach endorsed in G1/19, according to which an application will not be excluded from patentability if it embodies or involves physical hardware within the subject matter of the claims. Applying the G1/19 decision has also introduced an “intermediate step” in the UK, whereby elements not contributing to (or interacting with) the invention’s technical character are excluded when subsequently assessing novelty and inventive step.

This decision represents a major shift in the UK approach to patentability of AI-related and computer-implemented inventions.