Scope & Applicability
Does the EU AI Act apply to your organisation and your AI use cases?
The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. It was adopted in May 2024 and entered into force on August 1, 2024.
The legislation takes a risk-based approach, classifying AI systems into four tiers:
- Unacceptable Risk (Prohibited): Specific applications banned outright due to their harm potential.
- High-Risk: Systems facing heavy compliance, documentation, and oversight obligations before market entry.
- Limited Risk: Systems subject to specific user transparency and labelling requirements.
- Minimal Risk: The vast majority of applications facing no specific regulatory burdens.
This regulation applies across the entire AI value chain. It enforces binding obligations on providers, deployers, importers, distributors, and product manufacturers. Use our 12-question Compliance Checker to determine your obligations, or read the complete compliance guide.
Yes. Article 2 establishes a strict extraterritorial scope, similar to the GDPR.
The regulation applies to you regardless of where your company is incorporated. It captures providers placing an AI system on the EU market, deployers located in the EU, and any provider or deployer outside the EU whose AI system's output is used within the EU.
Geography is not a firewall. For example, a US company using AI to score EU citizens for credit decisions falls entirely in scope. A Singapore startup whose recruitment AI filters candidates for EU-based employers also falls in scope.
Check if the AI Act applies to you with our applicability assessment tool.
Article 3(1) defines an AI system as a machine-based system designed to operate with varying levels of autonomy. These systems may exhibit adaptiveness after deployment. They infer how to generate outputs like predictions, content, recommendations, or decisions from the input they receive.
This definition is deliberately broad to future-proof the legislation. Machine learning models, neural networks, LLMs, computer vision systems, and recommendation engines clearly fall within its scope.
Traditional rule-based software with fixed logic generally falls outside the definition. The hardest borderline cases involve systems that use statistical methods but have minimal autonomy. The European Commission published official classification guidelines in February 2025 to address exactly these cases.
Not sure where your system falls? Take our quick classification quiz.
Chatbots must disclose that users are interacting with AI under Article 50 transparency obligations, enforceable from August 2, 2026, unless this is already obvious from the context. If the chatbot generates synthetic content, the output must be machine-detectable.
If the chatbot makes decisions affecting individuals — claims handling, financial advice, candidate screening — it may be classified as high-risk under Annex III. A general customer support chatbot is limited-risk. A chatbot that screens job applicants is high-risk.
Check your chatbot's transparency duties with our Article 50 Transparency Validator or the AI Content Marking Checker.
Yes. The EU AI Act applies based on what your AI does, not your company size. A 10-person startup deploying AI for credit scoring has the same high-risk obligations as a multinational bank.
Article 62 provides SME-specific support: reduced conformity assessment fees, priority sandbox access, and dedicated guidance channels. But the obligations themselves do not shrink. Fines are proportionate: SMEs pay the lower of the absolute cap or the turnover percentage.
Start with our Compliance Checker to understand which obligations apply to your specific AI use case.
Yes. The Act is not limited to commercial products. If you deploy an AI system internally under your authority and it affects people in the EU — especially in Annex III areas like HR, credit assessment, or access to services — deployer obligations apply.
The AI Act Service Desk confirmed that internal use of a GPAI model can count as placing on the market where it is essential to services offered to third parties.
Use the Applicability Checker to assess whether your internal tools trigger obligations, or run the full Compliance Checker.
Roles & Supply Chain
Provider, deployer, importer, accidental provider — which role is yours?
Yes, if you are a non-EU provider of a high-risk AI system. Article 22 requires you to designate a legal or natural person established within the EU before placing the system on the market or putting it into service.
The authorised representative acts as your official liaison for all regulatory inquiries, market surveillance authority audits, and enforcement proceedings. They must have a written mandate granting sufficient authority to cooperate with national authorities on your behalf.
This requirement mirrors the GDPR's Article 27 representative obligation. If your organisation already maintains a GDPR representative in the EU, that same entity may serve as your AI Act representative, provided their mandate is formally extended to cover AI Act obligations.
Failure to appoint an authorised representative when required can block market access entirely and trigger enforcement action independently of any other violation. Use our Compliance Checker to determine your obligations.
A provider develops or commissions an AI system and places it on the market under their own name or trademark. A deployer uses an AI system under their professional authority.
This distinction dictates your legal liability. Providers carry the heaviest regulatory burdens, including conformity assessments, technical documentation, CE marking, and post-market monitoring. Deployers face separate operational obligations, such as executing fundamental rights impact assessments, enforcing human oversight, and managing incident reporting.
There is a critical classification trap. If you take a third-party AI system and substantially modify it, or place your brand name on it, you immediately become the provider under Articles 3 and 25. You automatically inherit all provider obligations and liabilities.
Use the Accidental Provider Classifier to check whether you've inadvertently become a provider.
Partially. Free and open-source AI components are generally exempt from most provider obligations. This applies provided they are not used as part of a high-risk AI system, a prohibited practice, or a system subject to transparency obligations.
The exemption does not apply to open-source GPAI models. These must still comply with Chapter V requirements unless released under a permissive licence meeting the specific criteria outlined in Article 53.
Deployers of open-source AI retain all their obligations regardless of the source code licence. Using an open-source model does not remove deployer duties under Article 26.
Assess your open-source system with the Compliance Checker.
Under Article 25, if you substantially modify a high-risk AI system, rebrand it, or change its intended purpose, you legally become the provider — inheriting all provider obligations including conformity assessment, CE marking, and full technical documentation.
This applies even if you started as a deployer or distributor. The threshold is any change not foreseen in the original conformity assessment that affects compliance or changes the intended purpose.
This is one of the most underestimated risks in the EU AI Act. Many organisations fine-tune vendor models, adapt outputs, or wrap third-party APIs in ways that cross the modification threshold without realising it.
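For teams that maintain an internal AI register, the Article 25 triggers above can be captured as a simple checklist. The Python sketch below is purely illustrative; the trigger keys and wording are simplifications we chose for this example, not statutory language, and the output is a prompt for legal review, not a determination.

```python
# Illustrative checklist of the Article 25 "accidental provider" triggers described above.
# Keys and descriptions are simplified assumptions, not the wording of the Act.
PROVIDER_TRIGGERS = {
    "own_name_or_trademark": "You placed the system on the market under your own name or brand.",
    "substantial_modification": "You changed the system in a way not foreseen in the original conformity assessment.",
    "changed_intended_purpose": "You repurposed the system for a high-risk use the original provider did not intend.",
}

def provider_role_check(facts: dict[str, bool]) -> list[str]:
    """Return the triggers that would shift provider obligations onto you."""
    return [reason for key, reason in PROVIDER_TRIGGERS.items() if facts.get(key)]

hits = provider_role_check({"substantial_modification": True})
if hits:
    print("Likely provider under Article 25:", *hits, sep="\n- ")
```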
Check your exposure with the Accidental Provider Classifier.
No. Your vendor's compliance covers their provider obligations. It does not satisfy your deployer obligations under Articles 26-27.
You still must: implement human oversight as specified by the provider (Article 14), monitor system performance in real-world conditions, report serious incidents, ensure input data quality (Article 26(4)), conduct a Fundamental Rights Impact Assessment if required (Article 27), and maintain AI literacy (Article 4).
Vendor CE marking is necessary but not sufficient for your own compliance. Think of it this way: the manufacturer certifying a car doesn't make the driver compliant with traffic laws.
Run the Deployer Self-Assessment to identify your specific obligations.
Usually yes. Article 3(4) defines a deployer as a person using an AI system under its authority, except for personal non-professional use. If employees use ChatGPT, Copilot, or similar tools in business workflows, the organisation is typically a deployer.
At minimum, the organisation must address Article 4 AI literacy obligations. Additional deployer obligations arise if the use case is high-risk (e.g., using AI to make hiring decisions) or triggers Article 50 transparency duties (e.g., customer-facing chatbots).
The larger risk is often invisible: employees adopting AI tools without organisational approval. This is shadow AI, and it creates regulatory exposure the organisation may not even know about.
Start with the Deployer Self-Assessment or run the Shadow AI Discovery Protocol to identify unauthorised AI usage.
Risk Classification
How AI systems are classified by risk level and what triggers high-risk.
Article 5 strictly prohibits eight specific AI practices as of February 2, 2025. There is no compliance pathway or mitigation strategy for these applications. The only legal option is immediate cessation:
- Subliminal or manipulative techniques causing significant harm.
- Exploitation of vulnerable groups based on age, disability, or socioeconomic circumstances.
- Social scoring, by public or private actors, that leads to unjustified or disproportionate detrimental treatment.
- Predictive policing based solely on profiling.
- Untargeted facial recognition database scraping.
- Emotion recognition in workplaces and educational institutions.
- Biometric categorisation that infers sensitive attributes like race or religion.
- Real-time remote biometric identification in public spaces (barring narrow law enforcement exceptions).
Read our detailed breakdown of all eight prohibited practices, or use the Biometric Identity Validator to check a specific system.
High-risk classification is triggered through two distinct pathways under Article 6.
First, it applies to AI systems used as safety components in products already governed by EU harmonised legislation. These categories are listed in Annex I and include medical devices, machinery and toys, and vehicles and aviation systems.
Second, it captures AI systems deployed in specific use cases listed in Annex III. These cover eight critical areas: biometrics; critical infrastructure; education and vocational training; employment and worker management; essential private and public services (credit scoring, emergency response); law enforcement; migration, asylum, and border control; and administration of justice and democratic processes.
Review our complete Annex III checklist or use the Compliance Checker to determine your specific risk exposure.
Article 6(3) provides a narrow, tightly drawn exception. A provider may argue that an Annex III system does not pose a significant risk of harm to health, safety, or fundamental rights. To qualify, the system's role must be limited to one of the following functions:
- Execute a narrow procedural task.
- Improve the result of a previously completed human activity.
- Detect patterns in data without replacing human assessment.
- Perform a preparatory task to an assessment relevant for the intended purpose.
This exception is not a blanket opt-out, and it is never available where the system performs profiling of natural persons. It requires rigorous, documented justification: organisations must document the assessment before placing the system on the market, register the system, and provide the assessment to national authorities on request. If the authority rejects the assessment, the organisation must comply with all high-risk mandates immediately. The sketch below illustrates the overall Article 6 decision flow.
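Put together, Article 6 works as a decision flow: Annex I safety components and Annex III use cases are high-risk by default, with a narrow documented exemption that falls away where profiling of natural persons is involved. The following Python sketch is an illustrative triage helper only; the field names are our simplifications and the result is a first pass, not a legal classification.

```python
# Illustrative triage of the Article 6 decision flow described above.
# Field names are simplified assumptions, not legal definitions.
from dataclasses import dataclass, field

@dataclass
class SystemProfile:
    annex_i_safety_component: bool = False   # safety component in an Annex I product
    annex_iii_use_case: bool = False         # listed Annex III use case (e.g. employment, credit)
    profiles_natural_persons: bool = False   # profiling keeps an Annex III system high-risk
    exemption_grounds: list[str] = field(default_factory=list)  # Article 6(3) grounds claimed

def triage(p: SystemProfile) -> str:
    """First-pass classification only; confirm with legal review."""
    if p.annex_i_safety_component:
        return "high-risk (Annex I pathway)"
    if p.annex_iii_use_case:
        if p.exemption_grounds and not p.profiles_natural_persons:
            return "possible Article 6(3) exemption: document and register the assessment"
        return "high-risk (Annex III pathway)"
    return "not high-risk under Article 6: check Article 5 and Article 50 separately"

print(triage(SystemProfile(annex_iii_use_case=True)))  # high-risk (Annex III pathway)
```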
Test your exemption eligibility with the Article 6 Exemption Framework or the Material Influence Evaluator.
Limited-risk systems carry transparency obligations exclusively, governed by Article 50. Organisations must disclose when users interact directly with AI chatbots. Any system generating synthetic content or deepfakes must clearly label the outputs as AI-generated. Emotion recognition or biometric categorisation systems that avoid the prohibited list must still actively inform users they are being analysed.
Minimal-risk systems face no specific regulatory burdens under the AI Act. The European Commission estimates that approximately 85% of active AI systems fall into this tier. Common examples include spam filters, recommendation engines, video game AI, and basic search algorithms.
However, deployers of all AI systems must still fulfil the corporate AI literacy obligations mandated by Article 4 and maintain compliance with existing frameworks like the GDPR. Take the 2-minute risk quiz to classify your system.
Almost certainly yes. Annex III, Area 4 explicitly covers AI systems utilised for employment processes. This captures the entire employee lifecycle, including:
- Recruitment and candidate selection (CV screening, interview analysis).
- Decisions affecting employment terms (promotions, terminations, task allocation).
- Monitoring and evaluating worker performance or behaviour.
If your organisation deploys any AI-powered HR technology, it is subject to this regulation. This represents the single most common trigger for Annex III classification among mid-market companies.
The resulting operational obligations are severe. They mandate formal risk management (Article 9), strict data governance (Article 10), technical documentation (Article 11), and mandatory human oversight (Article 14).
Validate your specific HR AI tool with the HR AI Validator.
Yes. Annex III Area 5(b) explicitly classifies AI systems used to evaluate the creditworthiness of natural persons as high-risk. This includes automated credit scoring, AI loan approval, credit limit assignment, and affordability checks.
Fraud detection is classified separately — it is not automatically high-risk unless it functionally denies access to financial services. The distinction matters because high-risk deployers must conduct a Fundamental Rights Impact Assessment under Article 27.
Use the Fraud vs. Credit Scoring Delimiter to determine where your system falls on this boundary.
Yes, with limited exceptions. Article 5(1)(f) prohibits AI systems that infer emotions in workplace and educational settings — enforceable since February 2, 2025.
AI proctoring tools that analyse facial expressions, detect "stress indicators," or evaluate emotional states during exams may violate this prohibition. The boundary between "detecting prohibited behaviour" (permitted under Annex III Area 3) and "emotion recognition" (prohibited under Article 5) is a live regulatory question that national authorities will need to clarify through enforcement practice.
Validate your system with the EdTech Assessment Validator or the Biometric Identity Validator.
Yes, when the AI serves as a safety component. Under Annex I, AI systems embedded in products regulated by existing EU harmonised legislation — including medical devices under MDR 2017/745 and in vitro diagnostics under IVDR 2017/746 — are classified as high-risk if the product requires third-party conformity assessment under its sectoral legislation.
Annex I obligations become enforceable August 2, 2027. This later deadline reflects the additional complexity of aligning AI Act requirements with existing sectoral product safety frameworks.
Assess your safety component with the Safety Component Validator.
Provider Obligations
What providers of high-risk AI systems must do.
Providers of high-risk AI systems must systematically implement the technical requirements outlined in Articles 8 through 15:
- Risk Management System (Article 9): Continuous identification and mitigation of risks throughout the system lifecycle.
- Data Governance (Article 10): Quality criteria for training, validation, and testing datasets.
- Technical Documentation (Article 11 + Annex IV): Comprehensive records of design, development, and testing.
- Record-Keeping (Article 12): Automatic logging of system events for traceability.
- Transparency (Article 13): Clear instructions for use provided to deployers.
- Human Oversight (Article 14): Design features enabling effective human supervision.
- Accuracy, Robustness, Cybersecurity (Article 15): Performance standards throughout the lifecycle.
A quality management system under Article 17 ties these requirements together. Grade your operational readiness with the Operations Scorer.
A conformity assessment is the mandatory verification process proving a high-risk AI system meets all regulatory requirements before market entry.
Most Annex III systems follow an internal self-assessment pathway under Annex VI. However, biometric systems under Annex III Area 1 require third-party assessment by an EU-designated notified body under Annex VII where harmonised standards have not been applied in full, and AI covered by Annex I follows the conformity assessment procedure of its sectoral product legislation.
Successful completion results in CE marking and an EU declaration of conformity. The assessment must be repeated whenever the system undergoes substantial modification or changes to its intended purpose.
Check whether your product needs third-party assessment with the Safety Component Validator.
Yes. Article 49 requires providers and, in certain cases, deployers of high-risk AI systems to register the system in the official EU database before placing it on the market or putting it into service.
The registration must include system identification, intended purpose, risk classification rationale, and contact details. Public-sector deployers of high-risk AI must also register their specific use cases.
This is a public-facing database. Incomplete or missing registration is a standalone compliance violation that can be identified by any market surveillance authority. Use the Compliance Checker to determine your registration requirement.
Annex IV outlines the strict minimum technical documentation requirements for high-risk AI systems. The documentation must contain:
- A general system description including intended purpose and foreseeable misuse.
- Detailed technical design and development methodology.
- Data governance and training data specifications — sources, size, labels, known gaps.
- Risk management documentation covering identified risks and mitigation measures.
- Accuracy, robustness, and cybersecurity test results with metrics.
- Human oversight provisions and instructions for use.
This documentation must be maintained and updated throughout the system's lifecycle. Gap-check your current documentation against ISO 42001 and NIST requirements with the Framework Gap Analyzer.
Deployer Obligations
Operational duties for organisations using high-risk AI.
Deployers of high-risk AI systems carry a distinct set of operational obligations that exist independently of what the provider has done. Even if your vendor claims full EU AI Act compliance, these responsibilities remain yours under Articles 26-27:
- Implement human oversight measures as specified by the provider (Article 14).
- Monitor system performance in real-world conditions.
- Ensure input data is relevant and sufficiently representative (Article 26(4)).
- Report serious incidents to both the provider and relevant authorities.
- Conduct a Fundamental Rights Impact Assessment where required (Article 27).
- Suspend use upon discovering non-compliance and notify the provider.
- Maintain AI literacy across the workforce (Article 4).
Run the Deployer Self-Assessment to identify which obligations apply to your specific use case.
A FRIA is a mandatory pre-deployment assessment under Article 27 for certain deployers of high-risk AI systems. It is required for:
- Public authorities and EU institutions.
- Private entities providing public services.
- Deployers of credit scoring or insurance pricing AI affecting natural persons.
The FRIA evaluates the impact on the categories of persons affected in the concrete deployment context; it is not a generic risk assessment. It must assess risks to non-discrimination, access to services, and human dignity. Results must be notified to the relevant market surveillance authority.
Build your FRIA with the FRIA Generator.
At least six months under Article 26(6), to the extent the deployer controls the logs.
Sector-specific EU or national law may require longer. Financial services regulations (PSD2, MiFID II), healthcare regulations, and employment law in certain Member States can impose three-to-seven-year retention periods. Apply the strictest applicable requirement and document your retention rationale as part of your evidence pack.
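The retention rule reduces to a simple maximum: take the AI Act's six-month floor plus every sector-specific period that applies to you, and keep logs for the longest of them. A minimal sketch, assuming illustrative sector periods:

```python
# Sketch of the "apply the strictest applicable requirement" rule described above.
# The sector periods passed in are examples of the kind of rules to check,
# not a complete or authoritative list.
AI_ACT_MINIMUM_MONTHS = 6

def required_retention_months(sector_rules_months: list[int]) -> int:
    """Longest applicable retention period wins; never below the AI Act floor."""
    return max([AI_ACT_MINIMUM_MONTHS, *sector_rules_months])

# e.g. a deployer also subject to a hypothetical 5-year employment-law retention rule:
print(required_retention_months([60]))  # 60 months
```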
Track your logging requirements with the Human Oversight Log.
Article 3(49) defines a serious incident as one directly or indirectly leading to: death or serious harm to a person's health, serious and irreversible disruption of critical infrastructure management, infringement of obligations under Union law intended to protect fundamental rights, or serious harm to property or the environment.
For high-risk AI systems, both providers and deployers have reporting duties. Providers must report to the relevant market surveillance authority. Deployers must inform the provider and, in some cases, the authority directly. Timeliness matters — delays in reporting can themselves constitute a violation.
Document incidents with the Human Oversight Log.
Transparency, GPAI & AI-Generated Content
Article 50 transparency, general-purpose AI models, and content labelling.
A GPAI model is an AI architecture trained on massive datasets using self-supervision at scale. These models are inherently capable of executing a wide variety of distinct tasks. Major LLMs like GPT, Claude, and Gemini fall squarely into this category.
Chapter V dictates the regulatory framework for GPAI providers, which became enforceable on August 2, 2025. Providers must maintain granular technical documentation, provide system details to downstream deployers, respect EU copyright law, and publish detailed summaries of their training data.
Models classified as presenting systemic risk face escalated obligations. This classification is triggered by computational training thresholds or direct Commission designation. Systemic models require aggressive adversarial testing, continuous incident monitoring, and detailed energy consumption reporting.
If you use GPAI models in your pipeline, check data hygiene with the RAG Pipeline Screener.
Article 50 applies from August 2, 2026. From that date:
- Deployers must disclose AI interaction to users (chatbot disclosure).
- Providers must ensure AI-generated or manipulated content (deepfakes) is machine-detectable.
- Deployers must inform people when emotion recognition or biometric categorisation systems are used on them.
The Article 50 Code of Practice is a soft-law workstream intended to support implementation, but it does not itself create new binding legal requirements. Article 99 sets maximum fine tiers for certain breaches, including up to 7.5 million euros or 1% of turnover in some cases.
Check your transparency duties with the Article 50 Validator or the Content Marking Checker.
Yes, but not as a separate legal category. The AI Act Service Desk confirmed that AI agents are covered through existing definitions of AI systems and, often, general-purpose AI models.
If agents interact with people or generate content, Article 50 transparency rules apply. If they operate in high-risk areas — recruitment, credit assessment, critical infrastructure — Chapter III obligations apply from August 2, 2026. The key regulatory question is what the agent does, not what it is called.
Map your agentic AI boundaries with the Agentic AI Bounds Definer.
There is no mandatory AI literacy certificate in the Act. The safer evidentiary model is an internal record showing:
- Which roles were trained and when.
- What materials were used.
- How content was tailored to the specific AI use case.
- How the programme stays current as AI tools and regulations evolve.
Generic "AI awareness" webinars are insufficient — Article 4 requires literacy proportionate to the technical knowledge, experience, and context of the persons dealing with AI systems. A developer working on a high-risk system needs deeper literacy than a sales team using a chatbot.
Build your training plan with the AI Literacy Training Planner.
Timelines & Digital Omnibus
When obligations kick in and whether deadlines may shift.
The enforcement timeline is structured in distinct phases:
- February 2, 2025: Prohibited practices (Article 5) and AI literacy (Article 4) became enforceable.
- August 2, 2025: General-Purpose AI model obligations (Chapter V) took effect.
- August 2, 2026: All remaining obligations become enforceable — high-risk system requirements, deployer duties, Article 50 transparency obligations, and national authority designation.
- August 2, 2027: Annex I obligations for AI embedded in regulated products (medical devices, machinery, vehicles) take effect.
The Digital Omnibus proposal may extend the Annex III high-risk deadline. See our full Omnibus analysis for the latest status.
Article 4 places a mandatory obligation on both providers and deployers to ensure sufficient AI literacy across their workforce. This obligation has been legally enforceable since February 2, 2025.
Organisations must ensure that staff and other persons dealing with AI systems on their behalf have a level of AI literacy proportionate to the context of use, taking into account their technical knowledge, experience, education, and training.
This is not a box-ticking exercise. Regulators will look for evidence that literacy programmes are tailored to specific roles and AI use cases, not generic awareness sessions. The obligation applies even if you only deploy minimal-risk AI systems.
Build a role-specific programme with the AI Literacy Training Planner.
Likely yes, but the extension is not yet adopted law.
The Digital Omnibus proposal would extend the Annex III high-risk deadline from August 2026 to December 2, 2027. The Commission originally proposed a conditional mechanism tied to standards availability, but both the European Parliament (Draft Report PE782.530, February 2026) and the Council (first compromise text, January 2026) now favour fixed, binding deadline extensions. The IMCO-LIBE committee vote is expected around March 2026, with plenary vote targeted for late March and trilogue negotiations in Spring/Summer 2026.
The deadlines for prohibited practices, AI literacy, GPAI rules, and Article 50 transparency obligations remain entirely unaffected by this proposal.
Read our full Omnibus analysis for detailed tracking.
Enforcement infrastructure is already operational in several Member States.
Finland became the first Member State with fully operational AI Act enforcement on 1 January 2026, establishing 10 market surveillance authorities coordinated by Traficom, plus a Sanctions Board with fining power exceeding 100,000 euros.
Ireland published the General Scheme of its Regulation of Artificial Intelligence Bill 2026 on 4 February 2026, establishing the AI Office of Ireland as an independent statutory body with 15 designated sectoral competent authorities. The AI Office must be operational by 1 August 2026.
Spain has been the most proactive on practical guidance. AESIA published 16 comprehensive compliance guidance documents in December 2025, now available in English, developed through Spain's operational AI regulatory sandbox with 12 selected projects.
Track progress country by country with the National Enforcement Tracker.
Penalties & Enforcement
What happens if you don't comply.
The regulation enforces a three-tiered penalty structure based on violation severity:
- Prohibited Practices: Fines up to 35 million euros or 7% of worldwide annual turnover, whichever is higher.
- High-Risk Violations: Fines up to 15 million euros or 3% of worldwide turnover.
- Misleading Regulators: Fines up to 7.5 million euros or 1% of worldwide turnover.
To ensure proportionality for SMEs, the penalty cap utilises whichever figure is lower. Organisations must also monitor national implementing legislation, as individual Member States may impose additional criminal liabilities for severe infractions.
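The cap logic reduces to simple arithmetic: non-SMEs face the higher of the fixed amount and the turnover percentage, SMEs the lower. The sketch below uses the Article 99 maxima listed above; actual fines are set case by case by national authorities and will usually sit well below these caps.

```python
# Worked sketch of the maximum fine caps described above.
# Figures are the Article 99 maxima; this is an illustration, not a fine calculator.
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_violation": (15_000_000, 0.03),
    "misleading_authorities": (7_500_000, 0.01),
}

def max_fine(tier: str, worldwide_turnover_eur: float, is_sme: bool) -> float:
    fixed_cap, pct = TIERS[tier]
    turnover_cap = worldwide_turnover_eur * pct
    # SMEs: whichever is lower; everyone else: whichever is higher.
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

print(max_fine("prohibited_practice", 2_000_000_000, is_sme=False))  # 140000000.0
print(max_fine("prohibited_practice", 20_000_000, is_sme=True))      # 1400000.0
```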
Track how enforcement is shaping up with the National Enforcement Tracker.
Enforcement is highly decentralised and operates across multiple jurisdictional levels.
Each EU Member State is required to designate national market surveillance authorities. These bodies are responsible for direct monitoring, auditing, and enforcement within their specific territories. By August 2026, each Member State must also launch at least one AI regulatory sandbox.
At the EU level, the AI Office oversees the broader implementation of the Act, managing cross-border disputes and directly enforcing the rules governing General-Purpose AI models. The European Data Protection Supervisor exclusively handles enforcement for AI deployed by EU institutions. An AI Board consisting of Member State representatives coordinates strategy across the national authorities.
See which authorities are active in the National Enforcement Tracker.
Historical enforcement patterns from the GDPR and product safety directives reveal a predictable audit strategy. Early enforcement will target three specific vectors:
- Prohibited practices: Actively enforceable now, maximum penalties, structurally the easiest violations to prove.
- High-risk systems in regulated sectors: Financial services, healthcare, and employment — existing sectoral regulators already have inspection mechanisms and will integrate AI audits into standard procedures.
- Documentation failures: Requesting technical documentation and receiving an incomplete response is the fastest path to a non-compliance finding for any auditor.
Assess your readiness for an audit with the Deployer Readiness Assessment.
Cross-Regulation & Frameworks
EU AI Act + GDPR, ISO 42001, NIST, DORA, NIS2, SOC 2.
The two frameworks are legally distinct and operate in parallel. Achieving GDPR compliance does not equate to AI Act compliance.
The GDPR dictates personal data processing rules, covering lawful bases, data subject rights, and cross-border transfers. The EU AI Act governs product safety, focusing on risk management, system design, accuracy, and human oversight.
An AI system processing personal data must satisfy both frameworks simultaneously. For example, an AI screening tool must process applicant data lawfully under the GDPR while meeting the strict bias and documentation requirements of the AI Act. Violations of either framework trigger independent enforcement actions and separate financial penalties.
These frameworks tackle AI risk from differing structural perspectives. The EU AI Act is a binding legislative mandate with severe financial penalties. ISO/IEC 42001 is a certifiable management system standard. The NIST AI RMF is a voluntary operational guide.
There is considerable operational convergence. Approximately 70% to 80% of the baseline risk management and oversight requirements overlap. Organisations holding ISO 42001 certification possess a strong foundational advantage.
However, the frameworks are not identical. ISO 42001 leaves critical gaps regarding the prescriptive data governance mandates of Article 10, the rigid documentation requirements of Annex IV, and the absolute prohibitions outlined in Article 5.
Map the overlaps and gaps with the ISO-NIST Gap Analyzer or read the complete framework crosswalk.
They are different assessments for different risks, but yes — you may need both.
A DPIA (GDPR Article 35) assesses data protection risks from personal data processing. A FRIA (AI Act Article 27) assesses broader fundamental rights impacts from the AI system itself. If your high-risk AI processes personal data of EU residents AND you are required to conduct a FRIA, both assessments apply.
Practically, conduct them as a combined assessment with separate sections for data protection and fundamental rights. This avoids duplicate work while satisfying both regulatory frameworks.
Build your combined assessment with the FRIA Generator.
They are complementary, not overlapping.
DORA (effective January 17, 2025) requires financial entities to manage ICT third-party risk — AI vendors are ICT providers, so vendor due diligence must satisfy both DORA and AI Act requirements simultaneously. NIS2 strengthens cybersecurity for essential entities — AI systems in critical infrastructure may trigger both NIS2 incident reporting and Annex III high-risk classification.
The practical approach is to build one governance programme to the highest applicable standard rather than maintaining parallel compliance workstreams. Organisations in financial services should treat AI vendor assessment as a single combined DORA + AI Act exercise.
No. SOC 2 demonstrates organisational security controls but does not address EU AI Act-specific obligations: fundamental rights impact assessments (Article 27), bias testing requirements, AI-specific transparency duties (Article 50), or the risk classification and conformity assessment framework.
SOC 2 controls can contribute to the quality management system required by Article 17, particularly around access control, change management, and incident response. But they do not replace the AI Act's distinct compliance architecture.
Identify your specific gaps with the Framework Gap Analyzer.
Shadow AI, Costs & Getting Started
Practical first steps, shadow AI governance, and compliance costs.
Article 62 establishes critical structural accommodations to prevent the regulation from stifling mid-market innovation.
The Commission is actively developing simplified technical documentation templates that national authorities are legally required to accept. SMEs receive priority, free-of-charge access to national AI regulatory sandboxes. Conformity assessment fees are scaled proportionately to company size and market size.
SMEs also benefit from dedicated compliance communication channels and guaranteed representation in standard-setting forums. Crucially, their financial penalties are calculated using the lower threshold between fixed amounts and turnover percentages. The proposed Digital Omnibus act may further expand these specific SME accommodations.
Start exploring your obligations with our library of 25+ free compliance tools.
AI regulatory sandboxes, defined in Article 57, are controlled testing environments administered by national competent authorities.
These sandboxes allow providers to develop, train, validate, and test innovative AI systems under direct regulatory supervision prior to market launch. Every EU Member State is mandated to establish at least one national sandbox by August 2, 2026.
Participation remains voluntary, but SMEs are granted priority admission. These environments provide organisations with documented regulatory guidance on complex compliance interpretations. Civil-liability questions remain separate.
Learn more in our sandbox guide.
Yes. Spain's AESIA has published the most detailed practical guidance available from any national authority. Their 16 documents cover risk management systems, data governance, technical documentation, human oversight, transparency requirements, conformity assessments, and post-market monitoring for high-risk AI systems.
These guides are now available in English and represent the current gold standard for practical compliance implementation. They were developed through real-world experience with 12 companies in Spain's operational AI regulatory sandbox.
The European Commission has also announced a pipeline of forthcoming guidelines covering transparency requirements (Article 50), serious incident reporting, fundamental rights impact assessment templates, SME simplified quality management, and AI Act-GDPR interplay. None have been published yet as of March 2026, but they are expected throughout the year.
Track all national guidance with the country-by-country Enforcement Tracker.
Shadow AI is unauthorised AI tool usage by employees — ChatGPT, Copilot, Midjourney, AI browser extensions — without organisational approval or governance. It creates three distinct risks:
- Data leakage: Employees paste confidential data into AI interfaces without understanding where that data goes or how it is retained.
- Regulatory exposure: If the AI use case falls under Annex III, the organisation has unmet deployer obligations it does not even know about.
- Liability: The organisation is responsible for employees' AI use under its authority, regardless of whether that use was authorised.
You cannot comply with the EU AI Act if you do not know what AI your organisation is using. Discovery is the mandatory first step.
Run the Shadow AI Discovery Protocol to identify unauthorised AI usage across your organisation.
For a small deployer (10-50 employees) doing it in-house with templates: approximately €0-300 in direct costs plus 10-15 working days spread over 2-3 months. With guided external support: €1,000-3,000 plus team time. Enterprise AI governance platforms charge €15K-50K/year — unnecessary for most SMEs.
Start with free tools (euaicompass.com tools), add templates when gaps become clear, and engage consultants only for specific high-stakes decisions like conformity assessment for high-risk systems or FRIA for public-sector deployments.
For structured compliance toolkits with pre-built templates, assessment trackers, and documentation frameworks, see the E1 assessment toolkit ($299).
Organisations must execute five immediate tactical priorities:
- Conduct Asset Discovery: Map every AI system developed, deployed, or procured, prioritising shadow AI hidden within business units. Use the Shadow AI Discovery Protocol.
- Classify the Inventory: Route every identified system through the official risk-based framework using the Compliance Checker.
- Audit Article 5: Immediately shut down any system engaged in a prohibited practice; these bans are already enforceable.
- Initiate Gap Analysis: Begin building the required risk management, data governance, and human oversight infrastructure for Annex III systems.
- Centralise Documentation: Ensure all operational controls are formally documented in an audit-ready compliance ledger.
For a comprehensive step-by-step walkthrough, read the complete compliance guide. For structured toolkits with templates and trackers, see Move78 compliance toolkits.
Find out where you stand under the EU AI Act
25+ free compliance tools. No login. No data collected.
Disclaimer: This FAQ is for educational purposes only and does not constitute legal advice. The EU AI Act text is available at eur-lex.europa.eu. Consult qualified legal counsel for binding compliance decisions. Published by Move78 International Limited.