Blog · March 2026 · 9 min read

High-Risk AI Systems: The Complete Annex III Checklist

Annex III of the EU AI Act lists the specific use cases where AI systems are classified as high-risk, triggering mandatory compliance with Articles 8 through 15. These requirements cover risk management, data governance, technical documentation, transparency, human oversight, accuracy, robustness, and cybersecurity. Full compliance is required by 2 August 2026 for Annex III systems.

Not sure if your system qualifies? Take the 12-question Compliance Checker for a preliminary classification.

[Image] The challenge: identifying which AI systems in your portfolio are high-risk under Annex III.
[Infographic] All eight Annex III areas — from biometrics and critical infrastructure to employment — are classified as high-risk under Article 6(2). Each triggers mandatory compliance with Articles 8-15.

Area 1: Biometrics

AI systems used for remote biometric identification (real-time use in publicly accessible spaces for law enforcement is separately prohibited under Article 5, subject to narrow exceptions), biometric categorisation of natural persons according to sensitive or protected attributes, and emotion recognition outside the prohibited contexts. A corporate security system that identifies individuals by matching faces against a database falls here. Note the carve-out, however: systems used solely for biometric verification — confirming that a specific person is who they claim to be, as in a one-to-one badge-plus-face check at a building entrance — are expressly excluded from this area. The key distinction from the Article 5 prohibitions: these are permitted but heavily regulated use cases rather than outright bans.

Area 2: Critical Infrastructure

AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity. This covers AI optimising power grid distribution, traffic management systems making routing decisions, and water treatment monitoring. If the AI system's failure or malfunction could pose a risk to safety or the reliable functioning of essential services, it is likely high-risk under this area.

Area 3: Education and Vocational Training

AI systems that determine access to or admission into educational institutions, evaluate learning outcomes (including AI that influences the progression of a student's education), assess the appropriate level of education for an individual, and monitor prohibited behaviour during tests. University admissions algorithms, automated essay grading systems, and AI-powered proctoring tools all fall within this area. The common thread is AI making decisions that affect an individual's educational trajectory.

Area 4: Employment, Workers Management, and Access to Self-Employment

This is one of the most commercially significant areas. It covers AI used for recruitment and candidate selection (CV screening, interview analysis), decisions affecting terms of employment (promotions, terminations, task allocation), and monitoring or evaluating worker performance and behaviour. If your organisation uses any AI-powered HR technology — from applicant tracking systems to performance analytics — it almost certainly falls here. Recruitment AI is one of the most common triggers for Annex III classification among SMEs.

Area 5: Access to Essential Private and Public Services

AI systems used to evaluate eligibility for essential public assistance benefits and services, to evaluate creditworthiness or establish credit scores (with a carve-out for systems used to detect financial fraud), to evaluate and classify emergency calls (including for dispatching police, fire, medical, and emergency services), and for risk assessment and pricing in life and health insurance. This area directly affects financial services, insurance, public administration, and emergency response. Credit scoring algorithms, insurance underwriting models, benefits eligibility systems, and triage systems in emergency call centres all qualify. The practical implication: any financial institution using AI for lending decisions needs to treat these as high-risk systems.

Area 6: Law Enforcement

AI systems used for individual risk assessments (recidivism prediction), polygraphs and similar tools, evaluation of reliability of evidence, profiling in criminal investigations, and crime analytics. Law enforcement agencies deploying AI for any investigative or assessment purpose must comply with the full high-risk requirements. This area interfaces closely with the Article 5 prohibition on predictive policing based solely on profiling.

Area 7: Migration, Asylum, and Border Control

AI systems used for polygraphs and similar tools during immigration processing, assessment of security risks posed by individuals, examination of applications for asylum, visa, and residence permits, and identification of individuals in the context of border management. Border control agencies and immigration services deploying AI for any decision-making or risk assessment purpose fall within this area.

Area 8: Administration of Justice and Democratic Processes

AI systems used to assist judicial authorities in researching and interpreting facts and law and in applying the law to concrete cases, and AI systems used to influence the outcome of elections or referendums. Court analytics tools that recommend sentences or assess case merits, and AI systems used in political campaigning that could influence voting behaviour, both qualify.

What High-Risk Classification Means in Practice

Once a system is classified as high-risk under Annex III, the provider must implement all requirements under Articles 8-15: establish a risk management system (Article 9), meet data governance standards (Article 10), maintain technical documentation per Annex IV (Article 11), ensure automatic logging of events (Article 12), provide clear information to deployers (Article 13), design for effective human oversight (Article 14), and achieve appropriate levels of accuracy, robustness, and cybersecurity (Article 15).
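As a rough illustration (not legal advice), the Articles 9-15 obligations listed above can be tracked as a simple compliance checklist. The article numbers and titles come from the Act; the `Requirement` type, field names, and `open_items` helper are our own invention for this sketch:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    """One Articles 9-15 obligation for a high-risk AI system (illustrative)."""
    article: int
    title: str
    done: bool = False  # flipped once evidence of compliance exists

# Hypothetical tracker mirroring the obligations named in the text above.
HIGH_RISK_REQUIREMENTS = [
    Requirement(9,  "Risk management system"),
    Requirement(10, "Data and data governance"),
    Requirement(11, "Technical documentation (Annex IV)"),
    Requirement(12, "Automatic logging of events"),
    Requirement(13, "Transparency and information to deployers"),
    Requirement(14, "Human oversight"),
    Requirement(15, "Accuracy, robustness, and cybersecurity"),
]

def open_items(reqs: list[Requirement]) -> list[int]:
    """Return the article numbers still lacking evidence of compliance."""
    return [r.article for r in reqs if not r.done]
```

In practice each `Requirement` would link to evidence (policies, test reports, Annex IV documents) rather than a bare boolean, but the structure shows how the seven obligations can be audited as a unit.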

For a structured approach to these requirements, our EU AI Act Compliance Toolkit provides a 62-question risk assessment aligned to each article, plus documentation templates. For an initial screening, use the 2-minute Quick Quiz to check if your system falls within any Annex III area.

Note that Article 6(3) provides a derogation: a provider can document that an AI system listed in Annex III does not pose a significant risk of harm because it performs only a narrow procedural task, improves the result of a previously completed human activity, detects decision-making patterns or deviations without replacing or influencing human assessment, or performs a purely preparatory task. Two caveats: the derogation never applies where the system performs profiling of natural persons, and it requires documented justification and registration rather than acting as a blanket opt-out.
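To make the shape of the Article 6(3) test concrete, here is a minimal sketch of the derogation logic as we read it: the conditions are disjunctive (any one can suffice), but profiling of natural persons always keeps the system high-risk. The function and flag names are illustrative, not terms from the Act, and this is not legal advice:

```python
def article_6_3_derogation_applies(
    narrow_procedural_task: bool,
    improves_prior_human_activity: bool,
    detects_patterns_without_replacing_review: bool,
    preparatory_task_only: bool,
    performs_profiling: bool,
) -> bool:
    """Sketch of the Article 6(3) derogation test (illustrative only).

    Any one of the four conditions can support the derogation, but a
    system that performs profiling of natural persons remains high-risk
    regardless of the other flags.
    """
    if performs_profiling:
        return False
    return any([
        narrow_procedural_task,
        improves_prior_human_activity,
        detects_patterns_without_replacing_review,
        preparatory_task_only,
    ])
```

For example, a system that only performs a preparatory task and does no profiling could be argued out of high-risk status, while the same system with profiling enabled could not — the real assessment, of course, turns on documented facts, not booleans.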

[Image] High-risk classification triggers Articles 8-15. Structured compliance toolkits prevent gaps.

About the author: Abhishek G Sharma is the founder of Move78 International Limited and holds ISO 42001 Lead Auditor, CISA, CISM, CRISC, and CEH certifications.

Disclaimer: This article is for educational purposes only. Consult qualified legal counsel for binding compliance decisions. Last updated: March 2026.