On 2 August 2026, the majority of the EU Artificial Intelligence Act (Regulation (EU) 2024/1689) becomes fully enforceable. High-risk AI systems listed in Annex III must comply with all requirements under Articles 8 through 15. Market surveillance authorities across EU Member States will begin active enforcement. For the most serious violations, penalties reach up to €35 million or 7% of worldwide annual turnover, whichever is higher.
If your organisation deploys or provides AI systems that affect EU citizens, this deadline applies to you — regardless of where your company is incorporated. Article 2 establishes extraterritorial scope: if the output of your AI system is used in the EU, you are in scope.
Not sure if you're affected? Use our free 12-question Compliance Checker to determine your classification in 5 minutes.
What Has Already Started
The August 2026 deadline is not the beginning of enforcement — it is the final major phase. Two provisions are already live. Since 2 February 2025, all eight prohibited AI practices under Article 5 have been banned. This includes social scoring, subliminal manipulation, untargeted facial recognition scraping, and emotion recognition in workplaces and educational institutions. Violations carry the maximum penalty tier. Additionally, AI literacy obligations under Article 4 have applied since the same date, requiring organisations to ensure staff involved in AI deployment have sufficient understanding of the technology and its risks.
Since 2 August 2025, the rules governing General-Purpose AI (GPAI) models have applied. If you provide or fine-tune a GPAI model, you are already subject to the transparency and documentation obligations under Chapter V.
The 5-Step Action Plan for SMEs

Step 1: Build your AI inventory. You cannot comply with regulations you do not understand, and you cannot govern systems you have not catalogued. Document every AI system your organisation develops, deploys, or procures. For each system, record its purpose, the data it processes, where its outputs are used, and who is affected by its decisions. Shadow AI — systems adopted by individual teams without central oversight — is the most common gap. If your organisation uses AI-powered recruitment screening, customer service chatbots, fraud detection, or credit scoring tools, these must appear in your inventory.
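An inventory record can be as simple as a structured entry per system. The sketch below is one way to capture the fields listed above; the field names and the example entry are illustrative assumptions, not terminology mandated by the Act.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in the AI inventory. Field names are illustrative,
    chosen to cover the points above; they are not prescribed by the Act."""
    name: str
    purpose: str                   # what the system is used for
    data_processed: list[str]      # categories of input data
    output_used_in: str            # where its outputs or decisions land
    affected_parties: list[str]    # who is affected by its decisions
    owning_team: str               # central oversight contact (surfaces shadow AI)

# Hypothetical example entry covering one of the tools named above
inventory = [
    AISystemRecord(
        name="CV screening tool",
        purpose="rank job applicants",
        data_processed=["CVs", "application forms"],
        output_used_in="recruitment shortlisting",
        affected_parties=["job applicants"],
        owning_team="HR",
    ),
]
```

Requiring an `owning_team` on every record is a simple way to flag shadow AI: any system nobody will claim ownership of is exactly the gap described above.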
Step 2: Classify each system. The EU AI Act uses a risk-based framework with four tiers: prohibited (banned outright under Article 5), high-risk (subject to mandatory requirements under Articles 8-15), limited risk (transparency obligations only), and minimal risk (no specific obligations). The most consequential classification is high-risk, defined in Annex III across eight areas including employment, education, critical infrastructure, law enforcement, border control, and access to essential services. Use our 2-minute Quick Quiz for an initial screening of each system.
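The four-tier framework can be sketched as a first-pass screen. The area list below is a partial, illustrative subset of Annex III, and the logic is deliberately crude; real classification turns on the full annex text and its exceptions and requires legal review, not keyword matching.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # Article 5 - banned outright
    HIGH = "high-risk"          # Annex III - Articles 8-15 apply
    LIMITED = "limited"         # transparency obligations only
    MINIMAL = "minimal"         # no specific obligations

# Partial, illustrative subset of the eight Annex III areas
ANNEX_III_AREAS = {
    "employment", "education", "critical infrastructure",
    "law enforcement", "border control", "essential services",
}

def initial_screen(use_area: str, interacts_with_people: bool) -> RiskTier:
    """First-pass screening only - not a substitute for legal classification."""
    if use_area in ANNEX_III_AREAS:
        return RiskTier.HIGH
    if interacts_with_people:   # e.g. chatbots attract transparency duties
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

A screen like this only sorts systems into a review queue; anything landing in the high-risk tier still needs the full Annex III analysis.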
Step 3: Address Article 5 compliance immediately. If any system in your inventory might engage in a prohibited practice, stop deploying it. This is not a future requirement — it is already law. Common trip hazards include AI tools that infer emotional states of employees, systems that score individuals for access to social benefits based on unrelated behaviour, and recruitment tools that scrape biometric data without explicit consent.
Step 4: For high-risk systems, work through Articles 8-15. These articles define specific requirements covering risk management systems (Article 9), data governance (Article 10), technical documentation (Article 11), record-keeping (Article 12), transparency and information to deployers (Article 13), human oversight measures (Article 14), and accuracy, robustness, and cybersecurity (Article 15). This is the bulk of the compliance work and typically requires 3-6 months for an SME with a handful of AI systems.
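The seven requirement areas above map naturally onto a per-system gap checklist. A minimal sketch, assuming a simple set of completed items as the tracking mechanism:

```python
# Requirement areas for high-risk systems, as listed above
HIGH_RISK_REQUIREMENTS = {
    "Article 9": "risk management system",
    "Article 10": "data governance",
    "Article 11": "technical documentation",
    "Article 12": "record-keeping",
    "Article 13": "transparency and information to deployers",
    "Article 14": "human oversight measures",
    "Article 15": "accuracy, robustness, and cybersecurity",
}

def gap_report(completed: set[str]) -> list[str]:
    """Requirements with no evidence yet, in article order."""
    return [
        f"{article}: {requirement}"
        for article, requirement in HIGH_RISK_REQUIREMENTS.items()
        if article not in completed
    ]
```

For example, `gap_report({"Article 9"})` would list the six remaining areas, giving each high-risk system a running to-do list over the 3-6 month compliance effort.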
Step 5: Document everything. Regulators assess compliance through documentation first. If you have done the work but have not documented it, from an enforcement perspective, you have not done the work. Technical documentation requirements under Article 11 and Annex IV are detailed and specific. Consider using structured compliance toolkits to ensure nothing is missed — our EU AI Act Compliance Toolkit provides portfolio-level screening, 62-question risk assessments, and documentation templates aligned to these requirements.
The Digital Omnibus Question
In November 2025, the European Commission published the Digital Omnibus proposal, which includes a provision that could conditionally push back certain high-risk system deadlines from August 2026 to as late as December 2027. This extension is contingent on the availability of harmonised technical standards — not guaranteed. The proposal is under legislative review with no confirmed adoption timeline. Our detailed analysis: Will high-risk deadlines move to 2027?
The prudent approach: plan for August 2026 as your compliance target. If the Omnibus extension materialises, you gain additional time to refine. If it does not, you are prepared. Organisations that treat the Omnibus as permission to delay will face a compressed compliance timeline with no margin for error.
What Regulators Will Check First
Early enforcement is likely to focus on three areas: prohibited practices, which are already enforceable, carry the highest penalty tier, and are the easiest to identify; high-risk systems in regulated sectors such as healthcare, financial services, and employment, where existing sector regulators are already active; and documentation gaps, since the fastest way for a market surveillance authority to issue a finding is to request documentation and receive incomplete or nonexistent records.
SMEs with limited compliance budgets should prioritise in that order. Eliminate prohibited practice exposure first, classify and document high-risk systems second, build out the full compliance apparatus third.
Start Here
Take the 12-question Compliance Checker to determine your specific obligations. Review the regulatory timeline to understand which provisions apply at which dates. Browse the 60 EU AI Act comics for quick visual explanations of each major concept.

About the author: Abhishek G Sharma is the founder of Move78 International Limited and holds ISO 42001 Lead Auditor, CISA, CISM, CRISC, and CEH certifications. He has 20+ years of experience in cybersecurity, AI governance, and risk management.
Disclaimer: This article is for educational purposes only and does not constitute legal advice. Consult qualified legal counsel before making binding compliance decisions. Last updated: March 2026.
