
Blog · March 2026 · 9 min read

Confirmed law: Annex III classification logic is current law; only the application timeline is under proposal watch.

High-Risk AI Systems: Annex III Checklist and Common Misclassification Traps

[Image: compliance analyst reviewing AI use cases against the Annex III high-risk categories under the EU AI Act]
Annex III classification is about actual use cases and actual harm pathways.

Scope reminder

Confirmed law: Annex III defines the main operational use cases that are treated as high-risk AI systems under the current AI Act framework.

Proposal watch: A live Omnibus proposal could change the application timeline if adopted. It has not changed the classification logic of Annex III today.

Removed: Weak percentage claims about “how much AI is high-risk” were stripped out because they were not carrying their evidentiary weight.

This page should do one job well: help readers decide whether a concrete use case falls into an Annex III area and what that classification means. It should not pad itself with unsupported market-share statistics.

[Image: Annex III infographic showing the main high-risk AI areas under the EU AI Act, including biometrics, education, employment, credit, border control, and justice]
Annex III is about concrete use cases, not vague “risky AI” branding.

The eight Annex III areas

The AI Act identifies eight core operational areas where AI systems may be classified as high-risk under Annex III. These areas are:

- Biometrics, including remote biometric identification, biometric categorisation, and emotion recognition
- Critical infrastructure, where AI acts as a safety component in digital infrastructure, road traffic, or the supply of water, gas, heating, and electricity
- Education and vocational training, including admission, evaluation, and exam monitoring
- Employment and workers' management, including recruitment, promotion, termination, task allocation, and monitoring
- Access to essential private and public services, including creditworthiness assessment, life and health insurance risk pricing, emergency dispatch, and benefits eligibility
- Law enforcement, including individual risk assessments and evaluation of evidence
- Migration, asylum, and border control management
- Administration of justice and democratic processes

The point is not memorisation. The point is operational mapping. If your system affects access, ranking, eligibility, risk, or monitoring in one of these areas, you likely need deeper classification analysis rather than casual reassurance.
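That mapping exercise can be made concrete as a simple portfolio triage filter. This is an illustrative sketch, not a legal classification tool: the area labels, the trigger-function vocabulary, and the `needs_deeper_analysis` helper are all assumptions made for this example.

```python
# Hypothetical triage sketch: flag systems that sit in an Annex III
# area AND affect access, ranking, eligibility, risk, or monitoring.
# Illustration only; not a legal determination.

ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education",
    "employment", "essential_services", "law_enforcement",
    "migration_border", "justice_democracy",
}

TRIGGER_FUNCTIONS = {"access", "ranking", "eligibility", "risk", "monitoring"}

def needs_deeper_analysis(area: str, functions: set[str]) -> bool:
    """True if the use case is in an Annex III area and touches a trigger function."""
    return area in ANNEX_III_AREAS and bool(functions & TRIGGER_FUNCTIONS)

# A CV-ranking tool used in recruitment clearly warrants deeper analysis;
# a ranking tool in marketing does not map to an Annex III area.
print(needs_deeper_analysis("employment", {"ranking"}))   # True
print(needs_deeper_analysis("marketing", {"ranking"}))    # False
```

The value of even a toy filter like this is that it forces an inventory: every system gets an explicit area and function assignment, and "not Annex III" becomes a recorded decision rather than an unexamined default.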

Where mid-market firms usually get caught

Do not forget the Article 6(3) narrow carve-out

Some Annex III systems may rely on the narrow Article 6(3) logic where the system does not pose a significant risk of harm and fits the conditions set out in the law. That is not a casual escape hatch. It requires a documented justification and should be handled as an exception analysis, not as a default assumption.
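Treating Article 6(3) as an exception analysis means recording the claimed condition and the justification together, so the decision can be reviewed. The sketch below is a hypothetical record structure; the field names and the plausibility check are assumptions for this example, and its output is a starting point for review, not a determination.

```python
from dataclasses import dataclass

# Hypothetical Article 6(3) exception-analysis record.
# The four condition flags mirror the derogation conditions in the Act;
# profiling of natural persons always defeats the carve-out.

@dataclass
class Article63Assessment:
    system_name: str
    narrow_procedural_task: bool = False
    improves_prior_human_activity: bool = False
    detects_patterns_without_replacing_review: bool = False
    preparatory_task_only: bool = False
    involves_profiling: bool = False
    justification: str = ""

    def carve_out_plausible(self) -> bool:
        """The exception analysis only proceeds with a written justification
        and at least one qualifying condition, and never with profiling."""
        if self.involves_profiling or not self.justification.strip():
            return False
        return any([
            self.narrow_procedural_task,
            self.improves_prior_human_activity,
            self.detects_patterns_without_replacing_review,
            self.preparatory_task_only,
        ])

record = Article63Assessment(
    "doc-sorter",
    preparatory_task_only=True,
    justification="Pre-sorts filings before full human review.",
)
print(record.carve_out_plausible())  # True
```

Note that an empty justification makes the check fail by design: the point is that the documented reasoning, not the flag, is the compliance artefact.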

What high-risk classification triggers

Once a system is classified as high-risk, the organisation must implement the control architecture under Articles 8 through 15. In practice that means risk management, data governance, technical documentation, logging, information for deployers, human oversight, and robustness and cybersecurity controls that are defensible and evidenced.
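For audit preparation it helps to track that control architecture as an explicit evidence map. In the sketch below, the article-to-requirement pairing follows the Act's structure (Article 8 is the general compliance obligation, so the substantive controls sit in Articles 9 through 15), while the status workflow and the `audit_gaps` helper are assumptions for this example.

```python
# Evidence map for the high-risk control architecture.
# Article numbers and requirement names follow the EU AI Act;
# the status values ("draft"/"evidenced") are illustrative.

HIGH_RISK_CONTROLS = {
    9:  "Risk management system",
    10: "Data and data governance",
    11: "Technical documentation",
    12: "Record-keeping (logs)",
    13: "Transparency and information for deployers",
    14: "Human oversight",
    15: "Accuracy, robustness and cybersecurity",
}

def audit_gaps(status: dict[int, str]) -> list[str]:
    """List controls that are not yet backed by evidence."""
    return [
        f"Art. {art}: {name}"
        for art, name in HIGH_RISK_CONTROLS.items()
        if status.get(art) != "evidenced"
    ]

# A system midway through remediation still shows five open gaps.
status = {9: "evidenced", 10: "draft", 11: "evidenced"}
for gap in audit_gaps(status):
    print(gap)
```

The useful property of this framing is that "compliant" is never a single flag: each article either has evidence behind it or shows up as a named gap on the next run.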

[Image: digital compliance dashboard showing audit readiness for a high-risk AI system under Articles 8 through 15 of the EU AI Act]
High-risk classification is not just a label. It triggers a control and evidence burden.

Use this page correctly

Use this checklist to identify likely high-risk candidates. Then move those systems into a documented classification and control workflow. Do not use a blog post as your final legal determination.

For a portfolio-level first pass, use the 12-question Compliance Checker. For Article 6(3) scenarios, the local exemption framework is the better next step.

About the author: Abhishek G Sharma is the founder of Move78 International Limited. He holds ISO 42001 Lead Auditor, CISA, CISM, CRISC, and CEH certifications. He brings over 20 years of practitioner experience in cybersecurity, AI governance, and enterprise risk management.

Disclaimer: This analysis is for educational purposes only and does not constitute legal advice. Consult qualified counsel for binding compliance decisions. Last updated: March 2026.

Assess Your AI Systems Now

Determine your specific operational obligations under the EU AI Act with our free diagnostic tools.