Healthcare · Industry Guide

EU AI Act for Healthcare: What AI Diagnostics, Clinical Tools, and MedTech Must Do Before August 2026

Healthcare AI faces dual regulation. AI in medical devices is high-risk under both the EU AI Act and the Medical Devices Regulation. AI outside medical devices still faces Annex III classification. This guide maps both paths.

Published: 18 March 2026 · Last updated: 18 March 2026 · Verified against: eu-ai-rules-engine v2.4 · Author: Abhishek G Sharma
[Figure: EU AI Act healthcare compliance, showing dual regulation paths via Annex I (medical devices) and Annex III (health services)]

Two Paths to High-Risk: Annex I (Medical Devices) and Annex III (Health Services)

Healthcare AI can be classified as high-risk through two completely independent routes under the EU AI Act. Getting this wrong means you're either building compliance against the wrong deadline or missing obligations entirely. Can your regulatory affairs team explain right now which route each of your AI systems falls under?

Route 1: Annex I — AI as a Safety Component of a Medical Device

If your AI system is a medical device (or a safety component of one) regulated under the Medical Devices Regulation (MDR 2017/745) or In Vitro Diagnostics Regulation (IVDR 2017/746), it falls under Annex I of the EU AI Act. The AI Act requires a conformity assessment that integrates with the existing MDR/IVDR process — you don't run two separate assessments; you extend the MDR process to cover AI-specific requirements. This route becomes enforceable August 2, 2027, one year later than Annex III. Examples include AI-powered diagnostic imaging (radiology AI), AI clinical decision support classified as SaMD (Software as a Medical Device), and AI-driven patient monitoring devices. If your AI is classified as a Class IIa, IIb, or III medical device under MDR, it almost certainly falls under this route.

Route 2: Annex III — AI Affecting Access to Healthcare Services

Annex III Area 5 covers AI systems used to evaluate access to essential services, including healthcare. This captures AI that isn't a medical device but still affects patient access: AI triage systems, patient risk scoring for resource allocation, AI-powered appointment prioritisation, and AI determining insurance coverage for health services. Annex III Area 5(c) also covers AI for evaluating and classifying emergency calls and dispatch prioritisation. This route becomes enforceable August 2, 2026.

The Boundary Matters

AI classified as a medical device (Annex I): MDR conformity assessment plus AI Act requirements, enforceable August 2027. AI not classified as a medical device but affecting health service access (Annex III): AI Act conformity assessment only, enforceable August 2026. AI that's neither — hospital administrative AI, scheduling without patient impact — is likely minimal or limited risk with transparency obligations only.

Healthcare AI Type | Regulatory Route | Enforcement Date | Assessment Type
AI diagnostic imaging (SaMD) | Annex I (medical device) | 2 Aug 2027 | Extended MDR conformity assessment
AI clinical decision support (SaMD) | Annex I (medical device) | 2 Aug 2027 | Extended MDR conformity assessment
Patient triage / resource allocation AI | Annex III (health service access) | 2 Aug 2026 | AI Act conformity assessment
Emergency call classification / dispatch AI | Annex III Area 5(c) | 2 Aug 2026 | AI Act conformity assessment
Health insurance coverage AI | Annex III Area 5(b) | 2 Aug 2026 | AI Act conformity assessment + FRIA
Hospital admin AI (staff scheduling, supply chain) | Minimal risk | n/a | No conformity assessment; transparency obligations only
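The two-route classification logic above can be sketched as a simple decision function. This is an illustrative simplification for demonstration only, not a compliance tool: the `AISystem` fields, route labels, and return values are assumptions, and real classification requires legal review of the system's MDR class, intended purpose, and the exact Annex III Area 5 wording.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    is_medical_device: bool       # MDR/IVDR device or safety component (Class IIa+)
    affects_service_access: bool  # triage, prioritisation, coverage decisions

def classify(system: AISystem) -> tuple[str, str]:
    """Return (route, enforcement date) following the two-route logic above.

    Illustrative only -- boundary cases (e.g. borderline SaMD) need
    qualified legal and regulatory review.
    """
    if system.is_medical_device:
        return ("Annex I (extended MDR conformity assessment)", "2027-08-02")
    if system.affects_service_access:
        return ("Annex III (AI Act conformity assessment)", "2026-08-02")
    return ("Minimal/limited risk (transparency only)", "n/a")

route, deadline = classify(AISystem("radiology AI", True, False))
print(route, deadline)
```

Note that the medical-device check comes first: a system that is both a device and affects access follows the Annex I route with its later deadline, which is why each system must be classified independently.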

Regulatory signal: France's CNIL and HAS (Haute Autorité de Santé) published a joint draft guide on AI in healthcare for consultation on March 5, 2026. It provides sectoral interpretation of AI Act and GDPR obligations in the medical context. Currently non-binding, but signals regulatory direction. Referenced in eu-ai-rules-engine v2.4.

Who's the Provider and Who's the Deployer in Healthcare AI?

The provider/deployer distinction is critical in healthcare because the obligations are dramatically different, and getting it wrong can flip your entire compliance programme upside down. I've seen MedTech startups assume they're deployers when they're actually providers — that's not a minor error; it's a foundational misclassification that invalidates the rest of the compliance plan.

MedTech / Digital Health Company Building the AI

You're the provider. You bear the heaviest obligations: risk management system (Article 9), data governance (Article 10), technical documentation per Annex IV, human oversight design (Article 14), conformity assessment, CE marking, EU database registration, and post-market monitoring. For medical device AI, your existing MDR Quality Management System under Article 10 MDR provides a foundation, but the AI Act adds specific requirements around bias testing, data governance, and cybersecurity under Article 15.

Hospital / Health System / Clinic Deploying the AI

You're the deployer. Article 26 obligations apply: use per instructions, human oversight in the clinical context, monitoring, logging, AI literacy for clinical staff, and incident reporting. Clinical staff using AI diagnostic tools must understand the tool's limitations and be competent to override its recommendations. That's both an AI Act requirement and a patient safety imperative.

Trap: If a hospital modifies the AI — fine-tunes on local patient data, changes its intended use, retrains on local datasets — it may become a provider under Article 25. Use the Accidental Provider Classifier to check if your modifications cross the line.

Role | Who | Key Obligations | Risk of Misclassification
Provider | MedTech company, SaMD developer, digital health startup | Art. 9–15, Annex IV documentation, conformity assessment, CE marking, post-market monitoring | Assuming you're a deployer when you're building the system
Deployer | Hospital, clinic, health system | Art. 26: use per instructions, human oversight, FRIA, monitoring, AI literacy, incident reporting | Modifying AI and inadvertently becoming a provider
Health insurer | Insurer using vendor AI for coverage/pricing | Art. 26 deployer duties plus mandatory FRIA for health service access | Not recognising health AI as high-risk
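The Article 25 "accidental provider" trap above can be sketched as a checklist function. The three flags loosely paraphrase the Article 25(1) triggers; whether a particular local fine-tune counts as a "substantial modification" is a legal judgement call, so treat this as a rough screening aid, not a determination.

```python
def deployer_becomes_provider(rebranded_under_own_name: bool,
                              substantial_modification: bool,
                              changed_intended_purpose: bool) -> bool:
    """Rough sketch of the Article 25 'accidental provider' triggers.

    Any one trigger is enough; the flag names are paraphrases, not
    the regulation's exact wording.
    """
    return (rebranded_under_own_name
            or substantial_modification
            or changed_intended_purpose)

# A hospital that retrains a vendor diagnostic model on local patient
# data and changes what it is used for likely crosses the line:
print(deployer_becomes_provider(False, True, True))  # True
```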

Decision map: how healthcare AI systems split between Annex I (medical device) and Annex III (health service access) classification routes.

Healthcare-Specific Deployer Obligations

If you're a hospital or health system deploying high-risk AI, here's what Article 26 actually means in a clinical setting. These aren't abstract requirements — they translate into specific operational changes that affect clinical workflows.

Human Oversight in Clinical Context

Clinical AI must support, not replace, clinical judgement. The clinician must be the final decision-maker. Document which clinician reviews AI output, what training they've received on the specific tool, how they override it, and what happens when AI and clinician disagree. Automation complacency is acute in healthcare — clinicians may over-trust AI recommendations under time pressure. That's the real risk, and regulators know it.

Data Governance for Health Data

Health data is "special category data" under GDPR Article 9. Processing requires explicit consent or another Article 9(2) legal basis. The AI Act's data governance requirements under Article 10 compound on GDPR: training, validation, and testing data must be relevant, representative, and as error-free as possible. Patient data used as AI input must satisfy both GDPR and AI Act data quality requirements simultaneously.

Triple Incident Reporting Burden

Serious incidents must be reported to the national market surveillance authority (AI Act Article 73), potentially the national competent authority for medical devices (MDR), and the data protection authority (GDPR, if the failure involves a personal data breach). A single AI failure involving patient data can trigger three separate notification obligations, each with its own deadline. Does your incident response playbook cover all three channels?
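The triple reporting burden can be captured in a small routing function for an incident playbook. The channel names below are generic placeholders, not the legal names of any authority; actual recipients, thresholds, and deadlines vary by member state and incident type.

```python
def notification_channels(is_medical_device: bool,
                          is_personal_data_breach: bool) -> list[str]:
    """List which notification channels a serious AI incident may trigger.

    Sketch only: channel labels are placeholders and real playbooks must
    map them to named national authorities and statutory deadlines.
    """
    channels = ["market surveillance authority (AI Act)"]
    if is_medical_device:
        channels.append("competent authority for medical devices (MDR)")
    if is_personal_data_breach:
        channels.append("data protection authority (GDPR)")
    return channels

# A failed AI diagnostic device that exposed patient data hits all three:
print(notification_channels(is_medical_device=True, is_personal_data_breach=True))
```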

AI Literacy for Clinical Staff

Already enforceable since February 2, 2025. Clinical staff using AI tools need training specific to what the tool does, what data it uses, its accuracy and limitations, how to interpret and override outputs, and how to report issues. Generic "AI awareness" training doesn't satisfy Article 4. The training must be tool-specific.

Related guides: For the full deployer framework, see the High-Risk AI Deployer Guide. For health insurance AI specifically, see EU AI Act for Insurance.

FAQ: EU AI Act for Healthcare Organisations

Is AI diagnostics high-risk under the EU AI Act?
If the AI is classified as a medical device under MDR (Class IIa or above), it's high-risk via Annex I, enforceable August 2, 2027. If it's not a medical device but affects access to health services, it may be high-risk via Annex III, enforceable August 2, 2026. Each system must be classified independently. Use the Compliance Checker to assess your systems.
Do I need separate conformity assessments under MDR and AI Act?
No. For AI medical devices, the AI Act conformity assessment integrates with the existing MDR process. You extend your MDR assessment to include AI-specific requirements — bias testing, data governance under Article 10, cybersecurity under Article 15 — rather than running two parallel assessments.
What about clinical decision support tools that aren't medical devices?
Software intended to provide information to support clinical decisions, without itself providing a diagnosis or treatment recommendation, may not qualify as a medical device under MDR. But it may still be classified as high-risk under Annex III if it affects access to healthcare services. Classify each system on its own merits.
Is hospital scheduling or administrative AI covered?
Only if it affects patient access to care. AI prioritising patients for surgery or determining emergency response priority falls under Annex III Area 5(c). Pure administrative AI — staff scheduling, supply chain — is likely minimal risk with transparency obligations only.
What is the CNIL-HAS healthcare AI guide?
A joint draft guide from France's data protection authority (CNIL) and health authority (HAS), published for consultation on March 5, 2026. It interprets AI Act and GDPR obligations in the medical context. Currently non-binding but useful as a reference. Tracked in eu-ai-rules-engine v2.4.
When do healthcare AI obligations take effect?
Annex III healthcare (access to services): August 2, 2026. Annex I medical devices: August 2, 2027. AI literacy under Article 4: already enforceable since February 2, 2025. The Digital Omnibus may shift Annex III to December 2027 but isn't law yet.

All Healthcare AI Compliance Tools


Abhishek G Sharma

Founder & CEO, Move78 International Limited. 20+ years in cybersecurity and risk management. ISO 42001 LA, ISO 27001 LA, CISA, CISM, CRISC, CEH, CCSK, CAIGO, CAIRO.

Healthcare AI Compliance Is Complex. We Can Help.

E2 Workshop ($999): builds your compliance framework covering AI Act + MDR intersection. Advisory ($4,999): full MDR + AI Act navigation for MedTech and digital health companies.

View Workshops & Advisory →
Disclaimer & Limitations

This guide is for educational and informational purposes only. It does not constitute legal, medical, or regulatory advice. EU AI Compass tools are educational aids, not certified compliance instruments. Consult qualified legal counsel, your notified body, and national competent authority before making compliance decisions. Move78 International Limited is not a law firm, medical device manufacturer, or authorised compliance service provider. All regulatory references are accurate as of the publication date based on eu-ai-rules-engine v2.4. The Digital Omnibus is a proposal, not enacted law. The CNIL-HAS guide is draft and non-binding.

Sources & Legal Basis