Enforceable Since 2 February 2025

Prohibited AI Practices Under the EU AI Act

All 8 Article 5 bans explained. These AI practices are completely illegal in the EU. Violations carry fines up to €35 million or 7% of worldwide annual turnover.


[Infographic: the 8 prohibited AI practices under Article 5 of the EU AI Act]

These bans are already in force

Article 5 prohibitions took effect on 2 February 2025 — the very first enforcement milestone of the EU AI Act. If you're deploying any AI system that falls into these categories, you're already in violation. The penalty ceiling is the Act's highest: up to €35 million or 7% of global annual turnover, whichever is greater (Article 99(3)).
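
To make the "whichever is greater" arithmetic concrete, here is a minimal illustrative Python sketch. The function name is ours, and the SME rule applied below reflects Article 99(6)'s "whichever is lower" carve-out mentioned in the FAQ; actual fines are set case by case by national authorities within these ceilings:

```python
def article5_fine_cap(worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Illustrative penalty ceiling for an Article 5 violation (Article 99).

    A sketch, not legal advice: real fines are decided by national
    authorities within this ceiling.
    """
    fixed_cap = 35_000_000                         # EUR 35 million
    turnover_cap = 0.07 * worldwide_turnover_eur   # 7% of worldwide annual turnover
    # Standard rule: whichever is greater. For SMEs and startups,
    # Article 99(6) applies whichever is lower instead.
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# A company with EUR 1 billion turnover: max(35M, 70M) -> EUR 70 million ceiling.
print(article5_fine_cap(1_000_000_000))  # 70000000.0
```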

All 8 Prohibited Practices at a Glance

| Ref | Prohibited Practice | Who's Affected | Exceptions? |
| --- | --- | --- | --- |
| 5(1)(a) | Subliminal / manipulative techniques | All sectors | None |
| 5(1)(b) | Exploitation of vulnerabilities | All sectors | None |
| 5(1)(c) | Social scoring | Public & private sectors | None |
| 5(1)(d) | Criminal risk profiling (sole basis) | Law enforcement | Partial — human assessments based on objective facts may be AI-supported |
| 5(1)(e) | Untargeted facial recognition scraping | All sectors | None |
| 5(1)(f) | Emotion recognition in workplaces & schools | Employers, educational institutions | Yes — medical or safety reasons |
| 5(1)(g) | Biometric categorisation on sensitive attributes | All sectors | None |
| 5(1)(h) | Real-time remote biometric ID (public spaces) | Law enforcement | Yes — 3 narrow scenarios with judicial or independent administrative authorisation |

Detailed Breakdown: Each Prohibition Explained

Each prohibition is explained in detail below, with real-world examples and applicable exceptions.

5(1)(a): Subliminal and Manipulative Techniques

This ban targets AI systems that use subliminal techniques beyond a person's consciousness, purposefully manipulative techniques, or deceptive methods to materially distort a person's or group's behaviour. The distortion must cause or be reasonably likely to cause significant harm. It doesn't require intent to harm — the manipulative technique itself triggers the prohibition if significant harm is a reasonably foreseeable outcome.

Real-world examples

Dark patterns that exploit cognitive biases to force purchases or consent. AI-driven persuasion engines that tailor manipulative messaging to individual psychological profiles. Recommendation systems deliberately designed to promote addictive behaviour in users. AI-generated content designed to covertly shift political opinions through imperceptible framing.

5(1)(b): Exploitation of Vulnerabilities

AI systems that exploit vulnerabilities of a specific group of persons due to their age, disability, or a specific social or economic situation are banned when the exploitation materially distorts their behaviour in a harmful manner. This is separate from (a) because it doesn't require subliminal or deceptive techniques — it targets the exploitation of existing vulnerability itself. The test is whether the AI system takes advantage of characteristics that reduce a person's ability to make autonomous decisions.

Real-world examples

AI chatbots designed to build trust relationships with elderly people to extract financial commitments. AI-powered toy companions that manipulate children's behaviour through emotional exploitation. Predatory lending algorithms that specifically target people in financial distress with high-interest products. AI marketing systems that identify and exploit users with gambling addictions.

5(1)(c): Social Scoring

This bans AI systems used for evaluating or classifying natural persons or groups of persons over a period of time based on their social behaviour or known, inferred, or predicted personal or personality characteristics, where the social score leads to detrimental or unfavourable treatment that is either unjustified or disproportionate to the social behaviour or its gravity, or occurs in social contexts unrelated to those in which the data was originally generated or collected. The ban applies to public and private actors alike.

Real-world examples

Government citizen "trust scores" that restrict access to public services based on behavioural data. Municipal systems that deny housing or welfare benefits based on aggregated social behaviour scores. Any system that grades citizens and uses those grades to limit their rights or access to services — the archetypal "China-style social credit" pattern applied in an EU context.

5(1)(d): Criminal Risk Profiling

AI that assesses or predicts the risk that a specific natural person will commit a criminal offence, based solely on profiling that person or evaluating their personality traits and characteristics, is prohibited. The key word is "solely" — the prohibition doesn't apply when AI augments human assessments that are already based on objective, verifiable facts directly linked to criminal activity. This draws a line between pure predictive profiling (banned) and evidence-augmented risk assessment (permitted under high-risk rules).

Real-world examples

Predictive policing systems that flag individuals as "likely offenders" based on demographics, neighbourhood, or personality profiling alone. AI tools that generate risk scores for individuals using social media activity, purchasing patterns, or facial features without any connection to actual criminal evidence.

Partial exception

AI systems that support human assessments grounded in objective, verifiable facts directly linked to criminal activity aren't caught by this ban. However, they'd likely qualify as high-risk AI under Annex III (law enforcement domain) and would need to meet full compliance requirements under Articles 8-15.
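
The "solely" boundary lends itself to a simple decision sketch. The following Python function is our own illustration of the distinction described above, with hypothetical flag names; it is not a legal test:

```python
def classify_crime_risk_tool(uses_profiling_only: bool,
                             supports_human_assessment: bool,
                             grounded_in_objective_facts: bool) -> str:
    """Illustrative screening of the Article 5(1)(d) 'solely' test.

    A sketch under simplifying assumptions: real analysis turns on
    the facts of each deployment.
    """
    if uses_profiling_only:
        # Prediction based solely on profiling / personality traits: banned.
        return "prohibited under Article 5(1)(d)"
    if supports_human_assessment and grounded_in_objective_facts:
        # Permitted, but likely high-risk (Annex III, law enforcement):
        # full obligations under Articles 8-15 would then apply.
        return "permitted; assess as high-risk under Annex III"
    return "unclear: seek legal review"
```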

5(1)(e): Untargeted Facial Recognition Scraping

Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage is outright banned. "Untargeted" is the operative word — this catches bulk, indiscriminate collection without specific suspicion or consent. It doesn't matter whether the scraping is done by a public authority or a private company. If you're hoovering up faces from the open internet or surveillance cameras to build a recognition database, you're violating Article 5.

Real-world examples

Clearview AI-style operations that scrape billions of facial images from social media platforms, news sites, and public databases. Security companies building identification databases from CCTV footage captured in public spaces. Any provider or deployer building a facial recognition training dataset through mass, non-consensual image collection.

5(1)(f): Emotion Recognition in Workplaces and Schools

AI systems that infer emotions of natural persons in workplaces and educational institutions are banned. This covers any technology that attempts to read, classify, or score emotional states — whether through facial expression analysis, voice tone detection, keystroke dynamics, or physiological signals. The ban is context-specific: it applies in workplaces and schools, not everywhere.

Real-world examples

Employee monitoring software that tracks facial expressions during video calls to measure "engagement" or "satisfaction." Classroom AI tools that score student attention or stress levels based on webcam analysis. HR interview platforms that rate candidates on detected emotional responses.

Exceptions

Emotion recognition is permitted in workplaces and schools when used for medical reasons (detecting a medical emergency) or safety purposes (monitoring fatigue in operators of dangerous machinery, detecting distress in pilots). Outside workplaces and schools — in retail, entertainment, or healthcare settings — emotion recognition isn't banned but falls under Article 50 transparency obligations requiring disclosure to affected persons.
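
To make the context-specific logic concrete, here is a small Python sketch of how the ban, its medical/safety exception, and the Article 50 fallback fit together. The context and purpose labels are our own simplification, not terms defined in the Act:

```python
def emotion_recognition_status(context: str, purpose: str) -> str:
    """Illustrative mapping of Article 5(1)(f) and the Article 50 fallback.

    Labels are assumptions for illustration, not statutory categories.
    """
    in_scope = context in {"workplace", "school"}
    exempt = purpose in {"medical", "safety"}   # e.g. fatigue or distress monitoring
    if in_scope and not exempt:
        return "prohibited under Article 5(1)(f)"
    if in_scope and exempt:
        return "permitted (medical/safety exception); check other obligations"
    # Retail, entertainment, healthcare, etc.: not banned, but affected
    # persons must be informed (Article 50 transparency).
    return "permitted with Article 50 transparency disclosure"

print(emotion_recognition_status("workplace", "engagement scoring"))
# -> prohibited under Article 5(1)(f)
```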

5(1)(g): Biometric Categorisation on Sensitive Attributes

AI systems that categorise individual natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation are banned. The prohibition specifically targets the inference of sensitive personal attributes from biometric signals; it doesn't cover lawful biometric processing for other purposes (identity verification, access control), and the Act carves out the labelling or filtering of lawfully acquired biometric datasets, including in the area of law enforcement.

Real-world examples

AI scanning facial features to classify employees by ethnicity or predict political affiliation. Voice analysis systems attempting to determine sexual orientation. Gait recognition systems used to infer religious practice patterns. Any system that uses physical or behavioural biometric data as a proxy to categorise people into sensitive-attribute groups.

5(1)(h): Real-Time Remote Biometric Identification in Public Spaces

Using AI for real-time remote biometric identification in publicly accessible spaces for law enforcement purposes is banned, with three narrow exceptions. "Real-time" means the capture, comparison, and identification happen without significant delay — this is live surveillance, not retrospective analysis. "Publicly accessible spaces" covers streets, parks, shopping centres, train stations, and any space the general public can access.

Three narrow law enforcement exceptions (Article 5(1)(h), with safeguards in Article 5(2)-(3))

1. Victim search: targeted search for specific victims of abduction, trafficking, or sexual exploitation, as well as the search for missing persons.
2. Terrorist threat prevention: preventing a specific, substantial, and imminent threat to life or a genuine and foreseeable terrorist attack.
3. Serious crime suspects: locating or identifying a person suspected of committing a specific serious crime listed in the regulation (Annex II).

Each use requires prior judicial or independent administrative authorisation, except in duly justified cases of urgency, where authorisation must be sought without undue delay and at the latest within 24 hours. If authorisation is refused, the use must stop immediately and all data must be deleted.
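
The authorisation flow can be sketched as a simple gate. This Python illustration uses our own field names and compresses the legal conditions; it shows the structure of the Article 5(2)-(3) safeguards, not a compliance implementation:

```python
from dataclasses import dataclass

@dataclass
class RbiUse:
    """One real-time remote biometric identification deployment (sketch)."""
    scenario: str            # "victim_search", "threat_prevention", "serious_crime_suspect"
    authorised: bool         # prior judicial/administrative authorisation obtained
    urgent: bool = False     # duly justified case of urgency
    hours_until_request: int = 0  # if urgent, when authorisation will be sought

ALLOWED_SCENARIOS = {"victim_search", "threat_prevention", "serious_crime_suspect"}

def may_proceed(use: RbiUse) -> bool:
    """Illustrative gate on the exception conditions, not a compliance tool."""
    if use.scenario not in ALLOWED_SCENARIOS:
        return False                       # outside the three exceptions: banned
    if use.authorised:
        return True                        # prior authorisation in place
    # Urgency path: use may start, but authorisation must be sought within
    # 24 hours; if later refused, use stops immediately and data is deleted.
    return use.urgent and use.hours_until_request <= 24
```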

Prohibited vs. High-Risk vs. Transparency-Only

Understanding where prohibited practices sit in the broader risk classification helps you assess your own AI systems correctly.

| Dimension | Prohibited (Art. 5) | High-Risk (Annex III) | Limited Risk (Art. 50) |
| --- | --- | --- | --- |
| Status | Completely banned | Permitted with strict requirements | Permitted with transparency disclosure |
| Enforcement date | 2 Feb 2025 | 2 Aug 2026 | 2 Aug 2026 |
| Maximum fine | €35M / 7% turnover | €15M / 3% turnover | €15M / 3% turnover |
| Compliance path | Shut it down | Conformity assessment + ongoing obligations | Disclosure to users |
| Example | Social scoring system | AI hiring screening tool | Customer service chatbot |

Check Your AI Systems

Not sure whether your AI system crosses the line? Run it through the free compliance checker — it evaluates your system against all 8 prohibitions plus high-risk classification criteria.
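
Independently of the checker, you can run a rough first-pass screen in a few lines of Python. The questions below are our own one-line simplifications of each prohibition, not the checker's actual questionnaire, and a flag means "get legal review", not a legal finding:

```python
# Hypothetical first-pass screen over the 8 prohibitions (our simplification).
SCREENING_QUESTIONS = {
    "5(1)(a)": "Does the system use subliminal, manipulative, or deceptive techniques?",
    "5(1)(b)": "Does it exploit age, disability, or social/economic vulnerability?",
    "5(1)(c)": "Does it score people's social behaviour with detrimental effects?",
    "5(1)(d)": "Does it predict criminal risk based solely on profiling?",
    "5(1)(e)": "Does it build face databases by untargeted scraping?",
    "5(1)(f)": "Does it infer emotions in a workplace or school?",
    "5(1)(g)": "Does it infer sensitive attributes from biometric data?",
    "5(1)(h)": "Does it do real-time remote biometric ID in public spaces?",
}

def screen(answers: dict[str, bool]) -> list[str]:
    """Return the Article 5 refs flagged for closer (legal) review."""
    return [ref for ref, hit in answers.items() if hit]

flags = screen({ref: False for ref in SCREENING_QUESTIONS} | {"5(1)(f)": True})
print(flags)  # ['5(1)(f)'] -> review against Article 5(1)(f) and its exceptions
```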

Frequently Asked Questions

When did the EU AI Act prohibited practices take effect?
All 8 prohibited AI practices under Article 5 became enforceable on 2 February 2025. This was the first enforcement milestone, arriving just 6 months after the EU AI Act entered into force on 1 August 2024. Organisations that haven't already reviewed their AI portfolio against these prohibitions are already exposed to enforcement risk.
What are the penalties for deploying a prohibited AI system?
Violations of Article 5 carry the Act's highest penalty tier: up to €35 million or 7% of worldwide annual turnover, whichever is higher (Article 99). For SMEs and startups, the lower of the two figures applies. Each EU member state sets its own administrative fine regime within these ceilings. This makes prohibited practice violations significantly more expensive than other non-compliance categories (€15M/3% for high-risk obligations, €7.5M/1% for incorrect information).
Does the biometric identification ban have any exceptions?
Yes. Article 5(1)(h) permits real-time remote biometric identification in publicly accessible spaces for law enforcement in three narrow scenarios: searching for specific victims of crime (abduction, trafficking, sexual exploitation) or missing persons, preventing a specific, substantial, and imminent threat to life or a foreseeable terrorist attack, and locating suspects of specifically defined serious crimes. Each use requires prior judicial or independent administrative authorisation, except in duly justified cases of urgency (authorisation must then be sought within 24 hours; if refused, all use stops and data is deleted).
Is emotion recognition completely banned?
No. Article 5(1)(f) bans emotion recognition specifically in workplaces and educational institutions. Medical and safety-purpose uses (detecting driver fatigue, monitoring pilot alertness) are exempted even in those settings. Outside workplaces and schools — retail, entertainment, healthcare — emotion recognition isn't prohibited but triggers Article 50 transparency obligations. The affected person must be informed that an emotion recognition system is operating.
How do I check if my AI system is a prohibited practice?
Use the EU AI Compass Compliance Checker — a free 12-question assessment that evaluates your AI system's function, deployment context, and affected populations against all 8 Article 5 prohibitions. It also checks high-risk classification under Annex III. For definitive legal analysis, consult qualified legal counsel. The checker is a screening tool, not a legal opinion.


Need Audit-Ready Compliance Toolkits?

Templates, checklists, and evidence packs built for the 2 August 2026 deadline.

View Compliance Toolkits →
Disclaimer & Legal Basis

This page is provided for educational and informational purposes only by Move78 International Limited. It doesn't constitute legal advice, regulatory guidance, or professional consultation. The content represents our interpretation of the EU Artificial Intelligence Act (Regulation (EU) 2024/1689) and may evolve as enforcement develops.

Primary sources: Regulation (EU) 2024/1689 (EUR-Lex) · AI Act Service Desk · European Commission AI Policy