EU AI Act Glossary: 35 Key Definitions for Practitioners

A practical glossary of 35 key terms from the EU AI Act and related AI governance frameworks. Written for CTOs, CISOs, DPOs, and compliance leads at SMEs.

By Abhishek Sharma · ISO 42001 LA, CISA, CISM, CRISC · Last updated: March 2026

Core Roles & Entities

The key actors defined by the EU AI Act and their regulatory responsibilities.

An AI system is a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment. For explicit or implicit objectives, it infers from the input it receives how to generate outputs — predictions, content, recommendations, or decisions — that can influence physical or virtual environments.

Article 3(1) of the EU AI Act provides this deliberately broad definition to future-proof the legislation. Machine learning models, neural networks, LLMs, computer vision systems, and recommendation engines all fall within scope. Traditional rule-based software with fixed logic generally does not.
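
Not an official test, but teams doing a first-pass inventory can turn the Article 3(1) criteria into a rough triage checklist. A minimal sketch in Python, with criteria names and pass/fail logic that are our own reading of the definition:

```python
from dataclasses import dataclass

@dataclass
class ScopeCheck:
    """Informal triage against the Article 3(1) criteria. Not legal advice."""
    machine_based: bool     # runs as software/hardware, not a purely manual process
    has_autonomy: bool      # operates with some independence from human control
    infers_outputs: bool    # derives predictions/content/decisions from inputs
    fixed_rules_only: bool  # logic fully specified by humans, nothing learned

    def likely_in_scope(self) -> bool:
        # Rule-based software with fixed logic generally falls outside the definition.
        return (self.machine_based and self.has_autonomy
                and self.infers_outputs and not self.fixed_rules_only)

# Example: an ML-based CV screener vs. a static payroll calculator
print(ScopeCheck(True, True, True, False).likely_in_scope())   # True
print(ScopeCheck(True, False, False, True).likely_in_scope())  # False
```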

Not sure if your system qualifies? Run the Compliance Checker or read the complete compliance guide.

A provider is any person or organisation that develops an AI system or GPAI model, or has one developed on its behalf, and places it on the market or puts it into service under its own name or trademark.

Under Article 3(3) of the EU AI Act, providers carry the heaviest compliance obligations in the supply chain — including conformity assessment, CE marking, technical documentation (Annex IV), risk management (Article 9), and post-market monitoring.

Check your role with the Compliance Checker to determine whether you are classified as a provider.

A deployer is any person or organisation that uses an AI system under its authority in a professional context, except for purely personal non-professional use.

Article 3(4) of the EU AI Act gives deployers distinct obligations under Articles 26-27: human oversight, real-world performance monitoring, input data quality, serious incident reporting, and in some cases a Fundamental Rights Impact Assessment. Even if your vendor claims full compliance, deployer duties remain yours.

Run the Deployer Self-Assessment to identify your specific obligations.

An importer is any person or organisation established in the EU that places on the market an AI system bearing the name or trademark of a non-EU entity.

Article 3(6) of the EU AI Act requires importers to verify that the non-EU provider has completed the conformity assessment, that technical documentation is available, and that the system bears CE marking before it enters the EU market.

Use the Compliance Checker to determine whether importer obligations apply to your operations.

A distributor is any person in the supply chain, other than the provider or importer, that makes an AI system available on the EU market.

Under Article 3(7) of the EU AI Act, distributors must verify that the AI system bears CE marking and is accompanied by the required documentation. They must not make a system available if they have reason to believe it does not conform.

Use the Compliance Checker to assess your supply chain obligations.

An authorised representative is a person or organisation located or established in the EU, designated by written mandate from a non-EU provider to act on its behalf for EU AI Act obligations.

Article 22 of the EU AI Act requires this appointment before a high-risk AI system can be placed on the EU market by a non-EU entity. The representative serves as the official liaison for regulatory inquiries, audits, and enforcement proceedings.

Check your representative requirements with the Applicability Checker.

An operator is an umbrella term covering providers, product manufacturers, deployers, authorised representatives, importers, and distributors.

Article 3(8) uses this collective term for convenience throughout the regulation. It does not create a separate set of obligations — each operator type has its own specific duties defined elsewhere in the Act.

Determine your role with the Compliance Checker to understand which operator category applies to you.

A notified body is an independent assessment organisation designated by an EU Member State to carry out third-party conformity assessments for high-risk AI systems.

Under Article 3(22) of the EU AI Act, a notified body carries out the Annex VII procedure where Article 43 requires it, chiefly for Annex III biometric systems assessed without fully applying harmonised standards. Most other high-risk systems can use internal self-assessment (Annex VI), while Annex I products follow the conformity procedures of their sectoral legislation, which often involve notified bodies.

Learn about the conformity assessment process in the complete compliance guide.

Risk Classification

How the EU AI Act classifies AI systems by risk level.

A high-risk AI system is one that falls under either of two classification pathways defined in Article 6 of the EU AI Act: (1) AI used as a safety component in products governed by existing EU harmonised legislation listed in Annex I, or (2) AI deployed in specific use cases listed in Annex III.

High-risk systems face the most demanding compliance requirements: risk management (Article 9), data governance (Article 10), technical documentation (Annex IV), logging (Article 12), human oversight (Article 14), and conformity assessment before market placement.
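
A minimal sketch of the two-pathway logic for a first screening pass. This is illustrative only: real classification needs legal analysis, including the Article 6(3) derogation for Annex III systems that pose no significant risk.

```python
def classify_risk(is_annex_i_safety_component: bool,
                  annex_iii_use_case: str | None) -> str:
    """Rough mapping of the two Article 6 pathways. Illustrative only."""
    if is_annex_i_safety_component:
        return "high-risk (Annex I product safety pathway)"
    if annex_iii_use_case:
        # Article 6(3) allows a documented derogation in narrow cases.
        return f"high-risk (Annex III: {annex_iii_use_case})"
    return "not high-risk under Article 6 (check transparency and GPAI rules)"

print(classify_risk(False, "employment"))  # high-risk (Annex III: employment)
```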

Check your classification with the Compliance Checker or review the Annex III checklist.

Annex III of the EU AI Act lists eight critical areas where AI deployment is automatically classified as high-risk: biometrics, critical infrastructure, education, employment, essential services (credit, insurance), law enforcement, migration/border control, and administration of justice.

This annex is the most common trigger for high-risk classification among mid-market companies, particularly through the employment (Area 4) and essential services (Area 5) categories.

Validate your specific sector with our specialised tools: HR Validator, Credit Scorer, Insurance Assessor, or EdTech Validator.

Annex I of the EU AI Act lists existing EU harmonised legislation covering products where AI safety components trigger high-risk classification. This includes the Medical Devices Regulation (MDR), Machinery Regulation, Toy Safety Directive, and others.

Annex I obligations become enforceable on August 2, 2027 — one year after the main high-risk deadline — reflecting the additional complexity of aligning AI requirements with existing product safety frameworks.

Assess your safety component with the Safety Component Validator.

Prohibited practices are eight specific AI applications that the EU AI Act bans outright under Article 5. There is no compliance pathway — the only legal option is immediate cessation.

The prohibitions cover subliminal manipulation, exploitation of vulnerabilities, social scoring, predictive policing, untargeted facial recognition scraping, workplace/school emotion recognition, biometric categorisation inferring sensitive attributes, and real-time remote biometric identification in public spaces (with narrow law enforcement exceptions). These have been enforceable since February 2, 2025.

Read our detailed breakdown of all 8 prohibited practices or use the Biometric Validator.

A general-purpose AI model is an AI model trained on broad data using self-supervision at scale, capable of competently performing a wide range of distinct tasks regardless of how it is placed on the market. Major LLMs like GPT, Claude, and Gemini qualify.

Chapter V of the EU AI Act establishes GPAI-specific obligations enforceable since August 2, 2025: technical documentation, downstream deployer information, copyright compliance, and training data summaries. Models classified with systemic risk face additional adversarial testing requirements.

If you use GPAI models in your pipeline, check data quality with the RAG Pipeline Screener.

A GPAI model with systemic risk is one that has high-impact capabilities, as determined by computational training thresholds (currently 10^25 FLOPs) or direct designation by the European Commission.

Under Article 3(65) of the EU AI Act, systemic risk models face escalated obligations beyond standard GPAI rules: adversarial testing (red-teaming), continuous incident monitoring and reporting, cybersecurity protections, and detailed energy consumption disclosure. This classification currently applies to frontier models from a small number of major AI labs.
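
A widely used rule of thumb puts training compute at roughly 6 FLOPs per parameter per training token. The sketch below applies that approximation against the 10^25 threshold; the model size and token count are hypothetical, and actual Commission designations do not rest on this estimate alone.

```python
def estimated_training_flops(params: float, tokens: float) -> float:
    # Rule-of-thumb estimate: ~6 FLOPs per parameter per training token.
    return 6 * params * tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # presumption threshold currently set in the Act

# Hypothetical model: 500B parameters trained on 10T tokens
flops = estimated_training_flops(5e11, 1e13)
print(f"{flops:.2e} FLOPs -> systemic risk presumed: {flops >= SYSTEMIC_RISK_THRESHOLD}")
# 3.00e+25 FLOPs -> systemic risk presumed: True
```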

Obligations & Requirements

The specific compliance duties imposed by the EU AI Act.

A conformity assessment is the mandatory verification process proving a high-risk AI system meets all regulatory requirements before it can be placed on the EU market.

Article 43 of the EU AI Act defines two pathways: internal self-assessment under Annex VI (used for most Annex III systems) and third-party assessment by a notified body under Annex VII (required for Annex III biometric systems where harmonised standards are not fully applied). Successful completion results in CE marking.
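
The routing can be sketched as a simple decision, assuming the Article 43 rules as summarised above (illustrative; Annex I products follow their own sectoral procedures and need specialist review):

```python
def assessment_pathway(annex_iii_area: str,
                       harmonised_standards_fully_applied: bool) -> str:
    """Simplified Article 43 routing for Annex III systems. Illustrative only."""
    if annex_iii_area == "biometrics" and not harmonised_standards_fully_applied:
        return "third-party assessment by a notified body (Annex VII)"
    return "internal control self-assessment (Annex VI)"

print(assessment_pathway("employment", False))
# internal control self-assessment (Annex VI)
```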

Learn more in the complete compliance guide.

CE marking is the mandatory European conformity mark that must be affixed to high-risk AI systems before they can be placed on the EU market.

Under Article 48 of the EU AI Act, CE marking signifies that the system has undergone conformity assessment, meets all applicable requirements, and is accompanied by an EU declaration of conformity. It must be visible, legible, and indelible.

Read about the full marking process in the compliance guide.

Technical documentation is the comprehensive set of records that high-risk AI system providers must create and maintain, covering system design, development methodology, training data, risk management, and testing results.

Annex IV of the EU AI Act specifies the mandatory contents in detail: system description, intended purpose, design specifications, data governance records, performance metrics, human oversight provisions, and cybersecurity measures. Documentation must be updated throughout the system lifecycle.

Gap-check your documentation against established frameworks with the Framework Gap Analyzer, or see detailed requirements in the compliance guide.

Human oversight refers to the measures that must be built into high-risk AI systems to enable effective supervision by natural persons during operation.

Article 14 of the EU AI Act requires providers to design systems so that humans can understand capabilities and limitations, monitor operations, interpret outputs, and intervene or override the system at any time. Deployers must implement these oversight measures in practice.
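
In practice this often translates into confidence-based escalation with an audit trail. A minimal sketch, with an entirely illustrative threshold (Article 14 sets goals, not numbers):

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("oversight")

def decide_with_oversight(score: float, threshold: float = 0.9) -> str:
    """Route low-confidence outputs to a human reviewer and log the path."""
    ts = datetime.now(timezone.utc).isoformat()
    if score < threshold:
        log.info("%s routed to human review (score=%.2f)", ts, score)
        return "pending_human_review"
    log.info("%s automated decision issued (score=%.2f)", ts, score)
    return "automated_decision"

print(decide_with_oversight(0.72))  # pending_human_review
```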

Track your oversight implementation with the Human Oversight Log and assess automation risks with the Automation Complacency Assessor.

AI literacy is the obligation on both providers and deployers to ensure that their staff, and other persons dealing with AI systems on their behalf, have sufficient understanding of AI to operate those systems responsibly.

Article 4 of the EU AI Act has been legally enforceable since February 2, 2025 — it was one of the first provisions to take effect. Literacy must be proportionate to the technical knowledge, experience, education, and context of the persons involved.

Build your training programme with the AI Literacy Training Planner.

Transparency obligations are the set of disclosure and labelling requirements that apply to AI systems interacting with people or generating synthetic content.

Article 50 of the EU AI Act requires: disclosure to users when they interact with an AI system (chatbot notification), machine-detectable marking of AI-generated content (deepfakes, synthetic media), and notification when emotion recognition or biometric categorisation is used. Enforceable from August 2, 2026.
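
The Act requires machine-readable marking but does not prescribe a format; common industry candidates include C2PA manifests and watermarking. As a toy illustration, a sidecar disclosure record might look like the following (the field names are our own invention, not a standard):

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(content: bytes, model_name: str) -> str:
    """Minimal machine-readable disclosure sidecar. Illustrative fields only."""
    return json.dumps({
        "ai_generated": True,
        "generator": model_name,
        "sha256": hashlib.sha256(content).hexdigest(),
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }, indent=2)

print(provenance_record(b"synthetic image bytes", "example-image-model-v1"))
```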

Check your duties with the Transparency Validator or the Content Marking Checker.

A Fundamental Rights Impact Assessment (FRIA) is a mandatory pre-deployment assessment that certain deployers of high-risk AI must complete to evaluate the system's impact on the fundamental rights of affected individuals.

Article 27 of the EU AI Act requires FRIAs for public authorities, private entities providing public services, and deployers of credit scoring or insurance pricing AI. The assessment must address non-discrimination, access to services, and human dignity in the specific deployment context — it is not a generic risk assessment.

Generate your FRIA with the FRIA Generator.

Data governance under the EU AI Act refers to the mandatory quality standards for training, validation, and testing datasets used in high-risk AI systems.

Article 10 requires that datasets be relevant, sufficiently representative, free of errors to the extent possible, and appropriate to the intended purpose. Providers must document data collection processes, selection criteria, potential biases, and gaps. This is one of the most prescriptive requirements in the regulation.
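
A first-pass representativeness check is easy to automate. The sketch below computes group shares and the missing-value rate for a single attribute; real Article 10 work requires domain-specific statistical analysis:

```python
from collections import Counter

def representativeness_report(records: list[dict], attribute: str) -> dict:
    """Crude dataset check: group shares plus missing-value rate."""
    values = [r.get(attribute) for r in records]
    counts = Counter(v for v in values if v is not None)
    total = len(values)
    return {
        "missing_rate": values.count(None) / total,
        "group_shares": {k: v / total for k, v in counts.items()},
    }

data = [{"gender": "f"}, {"gender": "m"}, {"gender": "m"}, {"gender": None}]
print(representativeness_report(data, "gender"))
# {'missing_rate': 0.25, 'group_shares': {'f': 0.25, 'm': 0.5}}
```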

Validate your data pipeline with the Input Data Validator.

A quality management system is the overarching organisational framework that high-risk AI providers must implement to ensure consistent compliance across all regulatory requirements.

Article 17 of the EU AI Act requires the QMS to cover: compliance strategy, design and development procedures, testing processes, data management, risk management, post-market monitoring, incident reporting, communication with authorities, and record-keeping. It functions as the organisational glue holding all other obligations together.

Assess your framework readiness with the ISO-NIST Gap Analyzer.

Lifecycle & Market Terms

Terms governing how AI systems enter and operate in the EU market.

Placing on the market means the first making available of an AI system on the EU market, whether for distribution or use.

Article 3(9) of the EU AI Act defines this as the trigger point for provider obligations — conformity assessment, CE marking, and technical documentation must all be complete before this moment. It applies regardless of whether the system is sold commercially, offered for free, or made available through a SaaS model.

Use the Compliance Checker to assess your market entry obligations.

Putting into service means supplying an AI system for first use directly to the deployer or for own use, in accordance with its intended purpose.

Article 3(11) of the EU AI Act distinguishes this from placing on the market. A system can be put into service without being placed on the market — for example, when an organisation develops and deploys an AI system internally. Both events can independently trigger regulatory obligations.

Use the Compliance Checker to assess your obligations.

A substantial modification is any change to an AI system after placing on the market or putting into service that was not foreseen in the initial conformity assessment and that affects compliance or changes the intended purpose.

Under Article 3(23) of the EU AI Act, a substantial modification can transform a deployer into a provider — triggering all provider obligations including a new conformity assessment. This is one of the most underestimated risks in the regulation.
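
The trigger reduces to a simple test, sketched below. Whether a specific fine-tune or rebrand actually crosses the line is ultimately a legal judgment:

```python
def becomes_provider(change_foreseen_in_assessment: bool,
                     affects_compliance: bool,
                     changes_intended_purpose: bool) -> bool:
    """Simplified Article 3(23) / Article 25 trigger test. Illustrative only."""
    return ((not change_foreseen_in_assessment and affects_compliance)
            or changes_intended_purpose)

# Fine-tuning a vendor model for a use case the vendor never assessed:
print(becomes_provider(False, True, True))  # True
```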

Check your exposure with the Accidental Provider Classifier.

Post-market monitoring is the systematic process that high-risk AI providers must implement to actively collect and analyse data on system performance after market placement.

Article 72 of the EU AI Act requires providers to establish a proportionate monitoring system using real-world performance data. The results feed into risk management updates, corrective actions, and serious incident reporting. This is not a one-time check — it is a continuous obligation throughout the system lifecycle.
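
Operationally, monitoring usually means comparing live metrics against the baseline recorded at conformity assessment. A minimal drift check, with an illustrative tolerance band:

```python
def performance_drift(baseline_accuracy: float, live_accuracy: float,
                      tolerance: float = 0.05) -> bool:
    """Flag degradation beyond a tolerance band. The tolerance is illustrative;
    Article 72 requires a plan proportionate to the system's risk, not a metric."""
    return (baseline_accuracy - live_accuracy) > tolerance

if performance_drift(0.94, 0.86):
    print("Drift detected: trigger risk-management review and corrective action")
```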

Read about monitoring requirements in the complete compliance guide.

A serious incident is an event directly or indirectly caused by an AI system that leads to death, serious harm, critical infrastructure disruption, fundamental rights infringement, or serious property/environmental damage.

Article 3(49) of the EU AI Act defines the threshold. Both providers and deployers have reporting duties: providers to market surveillance authorities, deployers to providers and in some cases directly to authorities. Delays in reporting can constitute an independent violation.
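
Reporting clocks can be wired directly into incident tooling. The deadlines below reflect our reading of Article 73, counted in days from awareness; verify them against the current text before relying on them:

```python
from datetime import date, timedelta

# Assumed Article 73 deadlines, in days from becoming aware of the incident.
DEADLINES = {
    "death": 10,
    "widespread_or_critical_infrastructure": 2,
    "other_serious": 15,
}

def report_due(awareness: date, incident_type: str) -> date:
    return awareness + timedelta(days=DEADLINES[incident_type])

print(report_due(date(2026, 3, 1), "other_serious"))  # 2026-03-16
```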

Document and track incidents with the Human Oversight Log.

Governance & Frameworks

Broader governance structures and emerging operational concepts.

An AI governance framework is the organisational structure of policies, processes, roles, and controls an entity uses to manage AI risk and ensure responsible AI deployment.

While the EU AI Act does not define this term directly, it mandates the essential components: quality management systems (Article 17), risk management (Article 9), human oversight (Article 14), and data governance (Article 10). ISO/IEC 42001 and NIST AI RMF provide established framework models that align 70-80% with AI Act requirements.

Map your framework gaps with the ISO-NIST Gap Analyzer or read the complete framework crosswalk.

An AI regulatory sandbox is a controlled testing environment administered by a national competent authority, allowing providers to develop, train, and test AI systems under direct regulatory supervision.

Articles 57-62 of the EU AI Act establish the sandbox framework. Every EU Member State must create at least one by August 2, 2026, and SMEs receive priority admission. Providers that follow the agreed sandbox plan in good faith are shielded from administrative fines for activity within the sandbox. Spain's AESIA has the most advanced operational sandbox, with 12 participating companies.

Learn more in our sandbox guide.

Shadow AI is the unauthorised use of AI tools by employees — ChatGPT, Copilot, Midjourney, AI browser extensions — without organisational knowledge, approval, or governance.

While not defined in the EU AI Act itself, shadow AI creates three distinct regulatory risks: data leakage through confidential information pasted into AI interfaces, regulatory exposure from unmet deployer obligations under Article 26 if the use case is high-risk, and organisational liability because you are responsible for employees' AI use regardless of authorisation. You cannot comply with the EU AI Act if you do not know what AI your organisation is using.
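
Discovery usually starts with egress logs. The sketch below scans proxy-log lines for known AI tool domains; the domain list and log format are placeholders to adapt to your own environment:

```python
# Placeholder domain list; extend with the tools relevant to your organisation.
AI_DOMAINS = {"chat.openai.com", "claude.ai",
              "copilot.microsoft.com", "www.midjourney.com"}

def find_shadow_ai(log_lines: list[str]) -> set[str]:
    """Return the AI tool domains seen in the supplied log lines."""
    hits = set()
    for line in log_lines:
        for domain in AI_DOMAINS:
            if domain in line:
                hits.add(domain)
    return hits

sample = ["10.0.0.7 GET https://chat.openai.com/ 200",
          "10.0.0.9 GET https://intranet.local/ 200"]
print(find_shadow_ai(sample))  # {'chat.openai.com'}
```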

Run the Shadow AI Discovery Protocol to identify unauthorised AI usage across your organisation.

Enforcement & Institutions

The authorities and mechanisms that enforce the EU AI Act.

A market surveillance authority is the national body designated by each EU Member State to monitor compliance, conduct audits, and enforce the EU AI Act within its territory.

Under Article 70, every Member State had to designate at least one such authority by August 2, 2025. These bodies have powers to request documentation, access source code, order corrective measures, and impose financial penalties. Finland was the first Member State with fully operational enforcement (January 2026).

Track which authorities are active with the National Enforcement Tracker.

The EU AI Office is the European Commission body responsible for overseeing the broader implementation of the AI Act, coordinating cross-border enforcement, and directly supervising GPAI model compliance.

Established under Article 64 of the EU AI Act, the AI Office manages the AI Pact (voluntary pre-compliance initiative), develops guidelines, and has direct enforcement authority over providers of general-purpose AI models. It works alongside national authorities but has independent supervisory powers for GPAI.

An accidental provider is an organisation that unintentionally becomes a provider under the EU AI Act by substantially modifying, rebranding, or changing the intended purpose of a third-party AI system.

Article 25 defines the trigger: any change not foreseen in the original conformity assessment that affects compliance, or any change to the intended purpose. Many organisations fine-tune vendor models, adapt outputs, or wrap third-party APIs in ways that cross this threshold without realising it — inheriting all provider obligations overnight.

Check your exposure with the Accidental Provider Classifier.

Safety & Biometrics

Biometric identification and safety-specific terms.

Real-time remote biometric identification is the use of an AI system to identify natural persons at a distance without their active cooperation, by comparing biometric data against a reference database, with results delivered instantaneously or with minimal delay.

Article 3(42) of the EU AI Act defines this specifically. Its use in publicly accessible spaces by law enforcement is one of the prohibited practices under Article 5, subject to narrow exceptions requiring prior judicial authorisation and strict necessity criteria. This is among the most heavily restricted AI applications in the regulation.

Validate your biometric system with the Biometric Identity Validator.

Find out where you stand under the EU AI Act

28 free compliance tools. No login. No data collected.

Disclaimer: This glossary is for educational purposes only and does not constitute legal advice. The EU AI Act text is available at eur-lex.europa.eu. Consult qualified legal counsel for binding compliance decisions. Published by Move78 International Limited.