Explore by topic
Start Here — Classify Your AI Systems
Determine whether the EU AI Act applies to you, identify your role under the Act, and establish your risk classification.
EU AI Act Quick Checker
Fast triage for one AI use case. Identify likely role, risk path, key obligations, and deadline direction in about 5 minutes. Runs locally in your browser.
Run Quick Check →
EU AI Act Detailed Applicability Scorer
Deeper applicability analysis for one system or an AI portfolio. Screen EU nexus, role, likely risk category, and next-step priority with more structured outputs.
Open Detailed Scorer →
Quick Risk Quiz
Rapid triage. Get a preliminary risk classification in under 2 minutes. Ideal for initial screening before the full compliance check.
Take the Quiz →
Deep Dive — Specific Obligations
For organizations that already know their classification. Analyze exemptions and role changes, and generate operational compliance records.
Article 6(3) Exemption Generator
Classified as high-risk under Annex III? This tool tests whether your system qualifies for an exemption under Article 6(3) and generates a defensible rationale document.
Check Exemptions →
Accidental Provider Classifier
Modified a high-risk AI system? Under Article 25, deployers who substantially alter an AI system become providers with full provider obligations. Audit your exposure.
Audit Modifications →
Human Oversight Log
Article 14 requires documented human oversight. This tool generates an immutable intervention record when an operator concurs, overrides, or escalates an AI recommendation.
Log a Decision →
Article 26 Operations Scorer
Grade your deployer operational readiness under Article 26. Diagnose execution gaps in human oversight, incident reporting, data governance, and FRIA completion.
Score Operations →
Local FRIA Generator
Article 27 requires deployers of high-risk AI to complete a Fundamental Rights Impact Assessment before deployment. Generate a structured FRIA document locally.
Generate FRIA →
Input Data Validator
Article 26(4) requires deployers to ensure input data is relevant and sufficiently representative. Validate your data governance practices against deployer obligations.
Validate Inputs →
Automation Complacency Assessor
Are your operators rubber-stamping AI outputs? Assess automation complacency risk against Article 14 human oversight requirements and generate a remediation plan.
Assess Complacency →
Art 6(3) Exemption Self-Assessment
Generate a defensible self-assessment record for Article 6(3) exemption claims. Documents the material influence analysis required to justify non-high-risk classification.
Evaluate Influence →
Article 50 Transparency Validator
Validate your AI system against Article 50 multi-layered transparency obligations. Covers chatbot disclosure, deepfake labelling, emotion recognition notice, and synthetic content marking.
Validate Transparency →
Deployer Obligation Self-Assessment
Your vendor's compliance does not make you compliant. Map your specific deployer duties under Articles 26, 29, 50, and FRIA requirements. Five diagnostic questions with scored gap analysis.
Assess Deployer Duties →
AI Content Marking Compliance Checker
Check your content pipeline against Article 50 Code of Practice Draft 2. Assess metadata layer, watermarking layer, labelling UI, and detection capabilities. Deadline: August 2026 (not delayed by Omnibus).
Check Content Marking →
Governance & Risk — Build Your Program
For CISOs, DPOs, and compliance officers building an AI governance program. Assess framework gaps, discover shadow AI, and vet vendors.
ISO/NIST Gap Analyzer
Already certified in ISO 42001, ISO 27001, or using NIST AI RMF? Identify the specific gaps between your existing framework and EU AI Act requirements.
Analyze Gaps →
Shadow AI Discovery Protocol
Discover unauthorized AI tools being used across departments. Assess data exposure risk (public, internal IP, PII) and generate a department-level AI asset declaration.
Start Discovery →
AI Vendor Risk Screener
Vet AI vendor Data Processing Agreements before procurement. Screens for training data opt-out, data residency, retention periods, and sub-processor transparency.
Screen a Vendor →
Agentic AI Bounds Definer
Define autonomy boundaries for agentic AI systems. Map decision authority levels, escalation triggers, and human override requirements against EU AI Act obligations.
Define Bounds →
RAG Data Hygiene Screener
Screen your Retrieval-Augmented Generation pipeline for data governance risks. Assess source provenance, PII exposure, staleness, and Article 10 compliance gaps.
Screen RAG Pipeline →
Bias Testing Safe Harbor Protocol
Structure your bias testing under Article 4a safe harbor provisions. Document methodology, protected attributes, retention controls, and DPA coordination for compliant bias audits.
Design Protocol →
AI Literacy Training Planner
Generate a role-based AI literacy training plan for Article 4 compliance. Map who needs what training, at what depth, by when, and build the evidence checklist auditors expect.
Plan Training →
Sector-Specific — Annex III High-Risk
Industry-specific validators for Annex III high-risk use-case areas and Annex I product safety systems. Assess whether your sector AI systems trigger specific compliance obligations.
B2B Biometric Identity Validator
Evaluate biometric authentication and identity verification systems for EU AI Act compliance. Covers facial recognition, voice ID, and behavioural biometrics.
Validate Biometrics →
EdTech AI Assessment Validator
Evaluate algorithmic grading, AI proctoring, and adaptive learning systems against Annex III Area 3 obligations for education and vocational training.
Validate EdTech AI →
Promotion & Termination Validator
Evaluate AI-driven performance management, promotion decisions, and termination recommendations under Annex III Area 4 employment obligations.
Validate HR AI →
Fraud vs. Credit Scoring Delimiter
Separate exempt fraud detection AI from high-risk credit scoring algorithms. Determine which of your financial AI systems trigger full Annex III Area 5 compliance.
Delimit Scope →
Insurance Underwriting Assessor
Evaluate life and health insurance AI pricing models for Annex III Area 5c compliance. Assess risk scoring, premium calculation, and claims triage algorithms.
Assess Underwriting AI →
IIoT Safety Component Validator
Validate AI safety components embedded in industrial machinery, medical devices, and regulated products under Annex I. Assess CE marking, conformity assessment, and product safety obligations.
Validate Safety AI →
Learning & Training
Longer-form interactive training and decision simulations. Practice EU AI Act governance choices and build team competence with exportable training records.
EU AI Act Training Platform
Zero-cloud training with interactive modules covering roles, risk tiers, high-risk controls, deployer obligations, and transparency requirements. Includes quizzes and exportable training records for audit evidence.
Start Training →
Scenario Simulations
Branching decision simulations for deployers across hiring, credit, insurance, critical infrastructure, biometrics, law enforcement, and more. Practice procurement, oversight, monitoring, and incident response choices with real-time scoring.
Run Simulations →
Need More Than Tools?
These tools identify your obligations. Our compliance toolkits give you the implementation framework — structured assessments, NIST AI RMF crosswalks, vendor tracking templates, and audit-ready documentation.
All tools are for educational and informational purposes only. Not legal advice. Results should be reviewed by qualified legal counsel.
Use these tools with the right references
Pair the browser-based workflows with the FAQ, glossary, and key operational guides so teams can classify systems and document their next steps faster.
Quick references
Featured workflows