On 7 May 2026, the European Parliament and the Council announced a provisional political agreement on the AI Act portion of the Digital Omnibus package. According to press releases from both institutions, the agreement aims to make compliance more workable while maintaining the Act's main provisions and risk-based approach. The agreed text has not yet been published, and the agreement remains provisional pending formal adoption by both institutions, which co-legislators intend to complete before 2 August 2026, the date on which the AI Act's original high-risk system obligations were due to become applicable.

This blog post sets out what the press releases describe as having been agreed, and flags the points most relevant to pharmaceutical and MedTech companies. As we noted in our earlier analysis, the Digital Omnibus on AI Proposal carries significant implications for life sciences companies. We will provide a fuller assessment once the agreed text is available and adopted.

Agreed Changes

New deadlines for high-risk AI systems and watermarking

The press release states that obligations on high-risk AI systems will apply:

  • From 2 December 2027: for AI systems with a high-risk use case under Annex III (including those involving biometrics and systems used in critical infrastructure, education, employment, law enforcement and border management).
  • From 2 August 2028: for AI systems used as safety components and covered by EU sectoral legislation (Annex I) on safety and market surveillance.

The fixed deadlines bring welcome legal certainty for companies planning their AI compliance programmes. Watermarking obligations on AI-generated content are also delayed to 2 December 2026.

Reducing overlap with sectoral legislation

According to the press releases, a compromise was agreed to address overlaps between the AI Act and legislation for industrial AI sectors, including medical devices, toys, lifts, machinery and watercraft. For machinery specifically, a full exemption from direct AI Act applicability was agreed, with the European Commission empowered to adopt delegated acts to add AI-specific requirements under the Machinery Regulation. For medical devices and IVDs, a more limited mechanism was agreed to limit AI Act requirements where the MDR and IVDR already cover similar ground, to be implemented through implementing acts accompanied by European Commission guidance. The details of this mechanism have not yet been announced.

Other changes

  • Narrowed definition of ‘safety component’: products with AI functions that only assist users or optimise performance will not automatically face high-risk obligations, provided their failure or malfunction does not create health or safety risks.
  • Bias detection and correction: Personal data may be processed where strictly necessary to detect and correct biases, with appropriate safeguards, in both high-risk and non-high-risk AI systems.
  • SME and SMC support: Certain exemptions previously available only to SMEs are extended to small mid-cap enterprises.
  • GPAI enforcement: Enforcement of obligations for certain general-purpose AI systems is streamlined within the EU AI Office.
  • Ban on nudifier applications: New prohibition on AI systems creating child sexual abuse material or non-consensual intimate imagery of identifiable persons. Covers both placing such systems on the EU market and their deployment. Companies have until 2 December 2026 to comply.

Initial Observations for Life Sciences Companies

While awaiting the full text, the following observations are based on the press release summaries and should be read in that light:

Deadlines for AI in medical devices and IVDs. The 2 August 2028 deadline for Annex I systems is a welcome development for companies developing or using AI-enabled medical devices and IVDs, providing substantially more time to meet high-risk AI obligations, including conformity assessment requirements.

Interaction with sectoral legislation: partial compromise for medical devices and IVDs. Unlike machinery, which received a full exemption from direct AI Act applicability, medical devices and IVDs did not receive a comparable carve-out and are therefore likely to remain subject to obligations under both the AI Act and the MDR/IVDR in parallel. The practical scope of the agreed mechanism, and what it means for the conformity assessment obligations of companies developing or using AI-enabled medical devices and IVDs, will only become clear once the agreed text and implementing acts are published. The longer-term answer to the dual-compliance question is likely to depend on the outcome of the separate MDR/IVDR revision proposal published by the European Commission in December 2025, which proposes to embed AI-specific requirements within the sectoral framework and establish the MDR/IVDR as the primary governing regime for AI-enabled medical devices.

Narrowed definition of ‘safety component’. The clarification that AI functions serving only to assist users or optimise performance will not automatically attract high-risk obligations could be relevant to a range of digital health tools and software-based diagnostics. This may include, for example, AI features in medical devices that support clinical workflows or optimise device performance, where their failure or malfunction would not create health or safety risks. The practical scope of this change will, however, depend on the exact wording of the final text.

Bias detection and personal data. The possibility to process personal data for bias detection and correction purposes could be significant for companies developing or deploying AI models trained on clinical or patient datasets, where the data involved is likely to constitute special category personal data under the GDPR. The safeguards attached to this provision will be a critical detail to examine once the text is available.

What Happens Next

The press releases do not address all issues that were under negotiation, and a fuller picture will only emerge once the agreed text is published. Critically, the extended deadlines for high-risk obligations will only take effect if the agreement is formally adopted and published before 2 August 2026, the date on which the AI Act's original high-risk system obligations were due to become applicable. Until then, those original deadlines remain legally in force, and companies should continue their compliance preparations accordingly.