EU AI Act high-risk obligations begin in X days · Start your readiness check
25+ Interactive Assessments

EU AI Act Compliance Tools

Classify your AI systems, identify compliance gaps, and generate audit-ready documentation. Every tool runs entirely in your browser with zero data collection.

2–5 min each · No login required · Browser-only processing

Explore by topic

🔍

Start Here — Classify Your AI Systems

Determine whether the EU AI Act applies to you, identify your role under the Act, and classify your systems by risk level.

📋

Deep Dive — Specific Obligations

For organizations that already know their classification. Analyze exemptions, role changes, and generate operational compliance records.

🛡️
2 min

Article 6(3) Exemption Generator

Classified as high-risk under Annex III? This tool tests whether your system qualifies for an exemption under Article 6(3) and generates a defensible rationale document.

Check Exemptions →
🔄
2 min

Accidental Provider Classifier

Modified a high-risk AI system? Under Article 25, deployers who substantially alter an AI system become providers with full provider obligations. Audit your exposure.

Audit Modifications →
📝
2 min

Human Oversight Log

Article 14 requires documented human oversight. This tool generates an immutable intervention record when an operator concurs, overrides, or escalates an AI recommendation.

Log a Decision →
📋
3 min

Article 26 Operations Scorer

Grade your operational readiness as a deployer under Article 26. Diagnose execution gaps in human oversight, incident reporting, data governance, and FRIA completion.

Score Operations →
⚖️
3 min

Local FRIA Generator

Article 27 requires certain deployers of high-risk AI (public bodies and some private-sector deployers) to complete a Fundamental Rights Impact Assessment before deployment. Generate a structured FRIA document locally.

Generate FRIA →
📥
2 min

Input Data Validator

Article 26(4) requires deployers to ensure input data is relevant and sufficiently representative. Validate your data governance practices against deployer obligations.

Validate Inputs →
🚨
3 min

Automation Complacency Assessor

Are your operators rubber-stamping AI outputs? Assess automation complacency risk against Article 14 human oversight requirements and generate a remediation plan.

Assess Complacency →
⚖️
4 min

Article 6(3) Exemption Self-Assessment

Generate a defensible self-assessment record for Article 6(3) exemption claims. Documents the material influence analysis required to justify non-high-risk classification.

Evaluate Influence →
💬
3 min

Article 50 Transparency Validator

Validate your AI system against Article 50's multi-layered transparency obligations. Covers chatbot disclosure, deepfake labelling, emotion recognition notice, and synthetic content marking.

Validate Transparency →
NEW DEPLOYER

Deployer Obligation Self-Assessment

Your vendor's compliance does not make you compliant. Map your specific deployer duties under Articles 26 and 50, plus the Article 27 FRIA requirement. Five diagnostic questions with scored gap analysis.

Assess Deployer Duties →
NEW ARTICLE 50

AI Content Marking Compliance Checker

Check your content pipeline against Article 50 Code of Practice Draft 2. Assess metadata layer, watermarking layer, labelling UI, and detection capabilities. Deadline: 2 August 2026 (not delayed by the Omnibus).

Check Content Marking →
🛠️

Governance & Risk — Build Your Program

For CISOs, DPOs, and compliance officers building an AI governance program. Assess framework gaps, discover shadow AI, and vet vendors.

📊
3 min

ISO/NIST Gap Analyzer

Already certified in ISO 42001, ISO 27001, or using NIST AI RMF? Identify the specific gaps between your existing framework and EU AI Act requirements.

Analyze Gaps →
👁️
3 min

Shadow AI Discovery Protocol

Discover unauthorized AI tools being used across departments. Assess data exposure risk (public, internal IP, PII) and generate a department-level AI asset declaration.

Start Discovery →
🔎
3 min

AI Vendor Risk Screener

Vet AI vendor Data Processing Agreements before procurement. Screens for training data opt-out, data residency, retention periods, and sub-processor transparency.

Screen a Vendor →
🤖
3 min

Agentic AI Bounds Definer

Define autonomy boundaries for agentic AI systems. Map decision authority levels, escalation triggers, and human override requirements against EU AI Act obligations.

Define Bounds →
🧹
3 min

RAG Data Hygiene Screener

Screen your Retrieval-Augmented Generation pipeline for data governance risks. Assess source provenance, PII exposure, staleness, and Article 10 compliance gaps.

Screen RAG Pipeline →
🧪
4 min

Bias Testing Safe Harbor Protocol

Structure your bias testing under the Article 10(5) safe harbor for processing special categories of data. Document methodology, protected attributes, retention controls, and DPA coordination for compliant bias audits.

Design Protocol →
🎓
3 min

AI Literacy Training Planner

Generate a role-based AI literacy training plan for Article 4 compliance. Map who needs what training, at what depth, by when, and build the evidence checklist auditors expect.

Plan Training →
🏭

Sector-Specific — Annex III High-Risk

Industry-specific validators for Annex III high-risk use-case areas and Annex I product safety systems. Assess whether your sector AI systems trigger specific compliance obligations.

🎓

Learning & Training

Longer-form interactive training and decision simulations. Practice EU AI Act governance choices and build team competence with exportable training records.

Need More Than Tools?

These tools identify your obligations. Our compliance toolkits give you the implementation framework — structured assessments, NIST AI RMF crosswalks, vendor tracking templates, and audit-ready documentation.

All tools are for educational and informational purposes only. Not legal advice. Results should be reviewed by qualified legal counsel.