EU AI Act — Frequently Asked Questions

25 questions answered clearly for compliance officers, CTOs, DPOs, and legal counsel at SMEs. Based on the official EU AI Act text (Regulation 2024/1689), EU Commission guidance, and the AI Act Service Desk FAQ.

Last updated: March 2026 · Not sure where to start? Take the 12-question Compliance Checker

Basics & Scope

Who does the EU AI Act apply to and what does it cover?

The EU AI Act (Regulation 2024/1689) is the world's first comprehensive legal framework regulating artificial intelligence. Adopted in May 2024, it entered into force on 1 August 2024 and establishes a risk-based approach that classifies AI systems into four tiers: prohibited (banned outright), high-risk (heavy compliance obligations), limited risk (transparency requirements), and minimal risk (no specific obligations). The regulation applies across the entire AI value chain — providers, deployers, importers, distributors, and product manufacturers.

Does the EU AI Act apply to companies outside the EU?

Yes. Article 2 establishes extraterritorial scope, similar to GDPR. If you are a provider placing an AI system on the EU market, a deployer located in the EU, or a provider/deployer located outside the EU whose AI system's output is used within the EU, the regulation applies to you — regardless of where your company is incorporated. A US company whose AI system scores EU citizens for credit decisions is in scope. A Singapore startup whose recruitment AI filters candidates for EU-based employers is in scope. Geography is not a firewall. Use our 2-minute Quick Quiz to check if your specific situation triggers EU nexus.

What counts as an AI system under the Act?

Article 3(1) defines an AI system as "a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments." This is deliberately broad. Traditional rule-based software with fixed logic generally falls outside the definition. Machine learning models, neural networks, LLMs, computer vision systems, and recommendation engines clearly fall within it. The grey area lies with systems that use statistical methods but minimal autonomy — the European Commission published guidelines on the AI system definition in February 2025 to address borderline cases.

What is the difference between a provider and a deployer?

A provider develops or commissions an AI system and places it on the market or puts it into service under their own name or trademark. A deployer uses an AI system under their authority (except for personal non-professional use). The distinction matters because providers carry heavier obligations — conformity assessments, technical documentation, CE marking, post-market monitoring. Deployers have separate obligations including fundamental rights impact assessments for high-risk systems, human oversight implementation, and incident reporting. Critically, if you take someone else's AI system and substantially modify it, or put your name on it, you become the provider under Articles 3 and 25 — inheriting all provider obligations.

Are open-source AI models exempt?

Partially. Free and open-source AI components are generally exempt from most provider obligations, provided they are not used as part of a high-risk AI system or a system that falls under prohibited practices or transparency obligations. However, if you take an open-source model, fine-tune it, and deploy it in a high-risk context (say, recruitment screening), you become the provider of that modified system and all high-risk requirements apply. The open-source exemption is meant to protect researchers and developers sharing pre-trained models — not to create a compliance loophole for commercial deployment. GPAI model providers with open-source models still have obligations under Chapter V, though with some simplified requirements.

Risk Classification

How AI systems are classified by risk level and what it means.

Which AI practices are prohibited outright?

Article 5 has prohibited eight AI practices since 2 February 2025: subliminal or manipulative techniques causing significant harm, exploitation of vulnerable groups (age, disability, social/economic circumstances), social scoring (by public or private actors), predictive policing based solely on profiling, untargeted facial recognition database scraping, emotion recognition in workplaces and educational institutions, biometric categorisation inferring sensitive attributes (race, religion, sexual orientation), and real-time remote biometric identification in public spaces (with narrow law enforcement exceptions requiring judicial authorisation). There is no compliance pathway for these — only cessation. Read our detailed breakdown of all 8 prohibited practices.

What makes an AI system high-risk?

High-risk classification is triggered in two ways under Article 6. First, AI systems used as safety components in products already covered by EU harmonised legislation listed in Annex I (medical devices, machinery, toys, vehicles, aviation). Second, AI systems in specific use cases listed in Annex III across eight areas: biometrics, critical infrastructure, education, employment, essential services (credit scoring, insurance), law enforcement, migration/border control, and administration of justice. For SMEs, the most common triggers are employment AI (recruitment screening, performance analytics) and essential services AI (credit scoring, insurance pricing). See our complete Annex III checklist or take the Compliance Checker for your specific classification.
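The two classification routes lend themselves to a simple screening sketch. The following Python fragment is illustrative only, not legal logic: the area names paraphrase Annex III, and the Article 6(3) exception is ignored.

```python
# Illustrative sketch of the two Article 6 routes to high-risk status.
# Area names paraphrase Annex III; real classification requires legal
# analysis of the full annex text and any applicable exceptions.
ANNEX_III_AREAS = {
    "biometrics",
    "critical_infrastructure",
    "education",
    "employment",                # recruitment screening, performance analytics
    "essential_services",        # credit scoring, insurance pricing
    "law_enforcement",
    "migration_border_control",
    "administration_of_justice",
}

def is_high_risk(annex_i_safety_component, use_case_area):
    """Route 1: safety component of an Annex I product.
    Route 2: use case listed in Annex III."""
    if annex_i_safety_component:
        return True
    return use_case_area in ANNEX_III_AREAS

# A recruitment-screening tool falls under the 'employment' area:
print(is_high_risk(False, "employment"))   # True
print(is_high_risk(False, "video_games"))  # False
```

The point of the sketch is that both routes are independent: a system needs to match only one of them to attract the full high-risk obligations.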

Can a provider argue an Annex III system is not high-risk?

Article 6(3) provides a narrow exception. A provider can argue that an AI system listed in Annex III does not pose a significant risk of harm to health, safety, or fundamental rights if the system performs a narrow procedural task, improves the result of a previously completed human activity, performs a preparatory task to an assessment that a human will review, or detects decision-making patterns without replacing human judgement. The exception requires documented justification, and the system must still be registered in the EU database before being placed on the market; the assessment must be made available to market surveillance authorities on request. If an authority disagrees, you must comply with the full high-risk requirements. This is not a blanket opt-out — it's a documented, challengeable claim.

What obligations apply to limited and minimal risk systems?

Limited risk systems have transparency obligations only (Article 50). These include AI systems that interact directly with people (chatbots must disclose they are not human), systems generating synthetic content (deepfakes must be labelled as AI-generated), and emotion recognition or biometric categorisation systems that are not banned but must inform users they are being analysed. Minimal risk systems have no specific obligations under the AI Act. The EU Commission has estimated that approximately 85% of AI systems currently in use fall into this category — spam filters, recommendation engines, video game AI, basic search algorithms. However, all AI systems must still comply with AI literacy obligations (Article 4) and any applicable existing legislation (GDPR, product safety directives).

Is our recruitment or HR AI high-risk?

Almost certainly yes. Annex III, Area 4 covers AI systems used for recruitment and candidate selection (CV screening, interview analysis, candidate ranking), decisions affecting terms of employment (promotions, terminations, task allocation), and monitoring or evaluating worker performance and behaviour. If your organisation uses any AI-powered HR technology — from applicant tracking systems to automated performance reviews — it falls here. This is the single most common trigger for Annex III classification among SMEs. The obligations are substantial: risk management (Article 9), data governance (Article 10), technical documentation (Article 11), human oversight (Article 14), and more. Explore the EU AI Act comics for visual explanations of recruitment AI scenarios.

Compliance & Deadlines

What you need to do and by when.

What are the key compliance deadlines?

The timeline is phased. Prohibited practices (Article 5) and AI literacy (Article 4): already enforceable since 2 February 2025. GPAI model rules (Chapter V): enforceable since 2 August 2025. High-risk systems in Annex III: 2 August 2026. High-risk systems in regulated products (Annex I): 2 August 2027. The Digital Omnibus proposal may conditionally extend certain Annex III deadlines to December 2027, but this is not yet adopted law. See our full deadline action plan for SMEs and explore the interactive Regulatory Timeline.
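As a quick sanity check, the phased dates can be encoded in a small lookup. A minimal Python sketch, using the dates from the adopted text (the unadopted Digital Omnibus extension is deliberately not reflected):

```python
from datetime import date

# Application dates of the main rule sets, per the adopted text.
DEADLINES = {
    "prohibited_practices_and_ai_literacy": date(2025, 2, 2),
    "gpai_model_rules": date(2025, 8, 2),
    "high_risk_annex_iii": date(2026, 8, 2),
    "high_risk_annex_i_products": date(2027, 8, 2),
}

def obligations_in_force(on_date):
    """Rule sets already applicable on a given date."""
    return sorted(name for name, d in DEADLINES.items() if on_date >= d)

print(obligations_in_force(date(2026, 3, 1)))
# ['gpai_model_rules', 'prohibited_practices_and_ai_literacy']
```

Running the check for early 2026 shows why "we have until August 2026" is only half true: the prohibition and GPAI rule sets already bind.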

What must providers of high-risk AI systems do?

For high-risk AI systems, providers must implement all requirements under Articles 8-15: establish a risk management system covering the full AI lifecycle (Article 9), meet data governance standards for training, validation, and testing data (Article 10), prepare and maintain technical documentation per Annex IV (Article 11), implement automatic event logging (Article 12), provide clear instructions for use to deployers (Article 13), design for effective human oversight including the ability to override or interrupt the system (Article 14), and ensure appropriate levels of accuracy, robustness, and cybersecurity (Article 15). Additionally, providers must establish a quality management system (Article 17), complete conformity assessments, affix CE marking, and register in the EU database. Our EU AI Act Compliance Toolkit provides structured templates covering each of these requirements.

What is a conformity assessment?

A conformity assessment is the process of verifying that a high-risk AI system meets all applicable requirements before it can be placed on the market or put into service. For most high-risk AI systems listed in Annex III, the provider can perform an internal conformity assessment (self-assessment). However, for certain biometric identification systems, a third-party conformity assessment by a notified body is required. For high-risk AI systems that are safety components of products covered by Annex I legislation, the conformity assessment follows the procedures already established in that sectoral legislation. Upon successful assessment, the provider issues an EU declaration of conformity and affixes the CE marking.

What must technical documentation include?

Annex IV specifies the minimum technical documentation requirements for high-risk AI systems. This includes a general description of the system, detailed information about the design and development process, information about system monitoring and functioning, a description of the risk management system, a description of data governance measures, detailed information about training, validation, and testing data, and records of changes made throughout the lifecycle. For SMEs and start-ups, simplified documentation forms are permitted under Article 11 — the European Commission is developing these templates. Documents must be kept for 10 years and be available to national authorities and notified bodies upon request.

What is the AI literacy obligation?

Article 4 requires providers and deployers to take measures to ensure, to their best extent, a sufficient level of AI literacy among staff and other persons dealing with the operation and use of AI systems on their behalf. This has been enforceable since 2 February 2025. In practice, this means training employees who work with AI systems to understand what the systems can and cannot do, how to interpret outputs, how to identify when the system may be producing unreliable results, and what oversight mechanisms exist. The training should be tailored to the context of use and the technical literacy of the staff. Explore our AI Literacy flashcard modules for structured learning content covering Article 4 requirements.

Penalties & Enforcement

Fine structures, who enforces, and what gets checked first.

How large are the fines?

Fines are structured in three tiers based on violation severity. Prohibited practice violations (Article 5): up to €35 million or 7% of worldwide annual turnover, whichever is higher. Non-compliance with high-risk system requirements or other substantive provisions: up to €15 million or 3% of worldwide turnover. Supplying incorrect, incomplete, or misleading information to national authorities or notified bodies: up to €7.5 million or 1% of worldwide turnover. For SMEs, the penalty cap is calculated using whichever figure is lower (not higher), providing proportionality. Additionally, some Member States may impose criminal liability for certain violations — organisations should monitor national implementing legislation.
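The tier arithmetic reduces to a higher-of (or, for SMEs, lower-of) calculation. An illustrative Python sketch using integer percentages; these are only the statutory caps, and actual fines are set case by case by the authorities:

```python
# Fine caps per tier: (fixed amount in EUR, percent of worldwide turnover).
TIERS = {
    "prohibited_practice": (35_000_000, 7),
    "high_risk_violation": (15_000_000, 3),
    "misleading_information": (7_500_000, 1),
}

def fine_cap(tier, worldwide_turnover, is_sme=False):
    fixed, pct = TIERS[tier]
    turnover_based = worldwide_turnover * pct // 100
    # Standard rule: whichever is HIGHER. SME rule: whichever is LOWER.
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)

# Large firm, EUR 2bn turnover, prohibited practice: 7% gives a EUR 140m cap.
print(fine_cap("prohibited_practice", 2_000_000_000))  # 140000000
# SME, EUR 10m turnover, same violation: capped at EUR 700k, not EUR 35m.
print(fine_cap("prohibited_practice", 10_000_000, is_sme=True))  # 700000
```

The worked numbers show the practical effect of the SME rule: the same violation that exposes a large firm to a nine-figure cap exposes a small one to a cap proportionate to its turnover.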

Who enforces the EU AI Act?

Enforcement operates at multiple levels. Each EU Member State must designate national market surveillance authorities responsible for monitoring and enforcing compliance within their territory. The EU AI Office (part of the European Commission) oversees AI Act implementation at EU level, particularly for GPAI models and cross-border issues. The European Data Protection Supervisor handles enforcement for EU institutions. Providers of GPAI models face enforcement directly from the European Commission. The AI Board (composed of Member State representatives) coordinates between national authorities. By 2 August 2026, each Member State must also establish at least one AI regulatory sandbox.

What will regulators enforce first?

Based on enforcement patterns from similar EU regulations (GDPR, product safety directives), early enforcement is likely to focus on three areas. First, prohibited practices — these are already enforceable, carry the highest penalty tier, and are the easiest for regulators to identify and prosecute. Second, high-risk systems in regulated sectors where existing sector regulators (financial services, healthcare, employment) are already active and have established inspection mechanisms. Third, documentation gaps — the fastest way for any market surveillance authority to issue a finding is to request technical documentation and receive incomplete or nonexistent records. Organisations with limited compliance budgets should prioritise in that order. Read our full enforcement preparation guide.

SME Support & Special Topics

Sandboxes, GPAI rules, Digital Omnibus, and framework crosswalks.

What support measures exist for SMEs?

Article 62 establishes several specific measures. Simplified technical documentation — the Commission is developing streamlined templates that national authorities must accept for conformity assessments. Priority access to AI regulatory sandboxes, free of charge, with simplified application procedures. Proportionate conformity assessment fees scaled to company size, development stage, and market demand. Dedicated communication channels for compliance guidance. Representation in advisory forums and standardisation processes. Penalty calculations using whichever is lower between the fixed amount and turnover percentage. Additionally, the Digital Omnibus proposal includes raising certain thresholds for SME-specific accommodations. These measures recognise that SMEs face disproportionate compliance burdens relative to their resources.

What are AI regulatory sandboxes?

AI regulatory sandboxes (Article 57) are controlled environments established by national competent authorities where providers can develop, train, validate, and test innovative AI systems under regulatory oversight before placing them on the market. Each EU Member State must establish at least one sandbox with national coverage by 2 August 2026. Participation is voluntary but encouraged — SMEs and startups get priority access. Sandboxes provide direct guidance from regulators on compliance interpretation, and documentation from sandbox participation can be used to demonstrate compliance. Critically, providers who follow the sandbox plan in good faith are shielded from administrative fines for AI Act infringements during the sandbox period (though not from third-party liability).

What are the rules for general-purpose AI (GPAI) models?

A GPAI model is defined as an AI model trained with large amounts of data using self-supervision at scale, capable of performing a wide range of distinct tasks regardless of how it is placed on the market. LLMs like GPT, Claude, Gemini, and Llama are clear examples. Chapter V establishes obligations for GPAI providers, enforceable since 2 August 2025. All GPAI providers must maintain technical documentation, provide information to downstream AI system providers, comply with EU copyright law, and publish a sufficiently detailed summary of training content. GPAI models with systemic risk (determined by computational training thresholds or Commission designation) face additional obligations including adversarial testing, incident monitoring, and energy consumption reporting. The GPAI Code of Practice, due by May 2025, was published in final form in July 2025.

Will the August 2026 deadline be extended?

Possibly, but conditionally and not yet. The Digital Omnibus proposal (November 2025) includes a backstop mechanism linking high-risk compliance deadlines to the availability of harmonised European standards. If CEN/CENELEC have not finalised relevant standards, the deadline could extend to December 2027. However, the proposal must pass through the European Parliament and Council — neither adoption nor the final text is guaranteed. Prohibited practices, AI literacy, and GPAI rules are unaffected regardless. Our recommendation: plan for August 2026. If the extension materialises, you gain buffer time from a position of readiness. Read our full Digital Omnibus analysis.

How does the AI Act relate to GDPR?

The two regulations operate in parallel — compliance with one does not satisfy the other. GDPR governs personal data processing: lawful bases, data subject rights, breach notification, cross-border transfers. The AI Act governs AI system design, deployment, and oversight: risk management, documentation, human oversight, accuracy, robustness. An AI system processing personal data must comply with both. For example, a recruitment AI must meet GDPR requirements for candidate data processing AND EU AI Act requirements for high-risk system compliance. Violations of each carry separate penalties. Organisations already GDPR-compliant have a head start on AI Act data governance (Article 10) but should not assume full coverage — the AI Act adds requirements around training data quality, bias examination, and statistical properties that go beyond GDPR.

How does the AI Act compare with ISO/IEC 42001 and the NIST AI RMF?

All three frameworks address AI risk management from different angles. The EU AI Act is a binding regulation with enforcement penalties. ISO/IEC 42001 is a certifiable AI management system standard. NIST AI RMF is a voluntary governance guide. There is significant overlap — approximately 70-80% of risk management, documentation, and human oversight requirements converge across the three. Organisations with existing ISO 42001 certification will have covered much of what the EU AI Act requires, with gaps primarily in data governance prescriptiveness (Article 10), Annex IV documentation granularity, and the Article 5 prohibition screening. Read our detailed three-framework crosswalk analysis.

Where should an SME start with compliance?

Five actions in priority order. First, build your AI inventory — document every AI system your organisation develops, deploys, or procures, including shadow AI adopted by individual teams. Second, classify each system using the risk-based framework — take our 12-question Compliance Checker for each system. Third, screen for prohibited practices immediately — these are already enforceable with the highest penalty tier. Fourth, for any high-risk systems, begin working through Articles 8-15 requirements — risk management, data governance, documentation, human oversight. Fifth, document everything — regulators assess compliance through documentation first. If you have done the work but have not documented it, from an enforcement perspective you have not done the work. Our EU AI Act Compliance Toolkit provides structured templates for all five steps.

Find out where you stand

Use our free tools to determine your specific obligations under the EU AI Act.

Disclaimer: This FAQ is for educational purposes only and does not constitute legal advice. The EU AI Act text is available at eur-lex.europa.eu. Consult qualified legal counsel for binding compliance decisions. Published by Move78 International Limited.