These bans are already in force
Article 5 prohibitions took effect on 2 February 2025 — the very first enforcement milestone of the EU AI Act. If you're deploying any AI system that falls into these categories, you're already in violation. The penalty ceiling is the Act's highest: up to €35 million or 7% of global annual turnover, whichever is greater (Article 99(3)).
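To make "whichever is greater" concrete, here is a minimal sketch of the ceiling calculation in Python; the function name and the example turnover figure are ours for illustration, not from the Act.

```python
def article_99_3_ceiling(global_annual_turnover_eur: float) -> float:
    """Maximum fine for an Article 5 violation: EUR 35M or 7% of
    worldwide annual turnover, whichever is greater (Article 99(3))."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# A company with EUR 1bn in global turnover faces a ceiling of EUR 70M,
# since 7% of turnover exceeds the EUR 35M floor.
print(article_99_3_ceiling(1_000_000_000))  # 70000000.0
```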
All 8 Prohibited Practices at a Glance
| Ref | Prohibited Practice | Who’s Affected | Exceptions? |
|---|---|---|---|
| 5(1)(a) | Subliminal / manipulative techniques | All sectors | None |
| 5(1)(b) | Exploitation of vulnerabilities | All sectors | None |
| 5(1)(c) | Social scoring | All sectors | None |
| 5(1)(d) | Criminal risk profiling (sole basis) | Law enforcement | Partial — human-augmented assessments with objective facts permitted |
| 5(1)(e) | Untargeted facial recognition scraping | All sectors | None |
| 5(1)(f) | Emotion recognition in workplaces & schools | Employers, educational institutions | Yes — medical or safety reasons |
| 5(1)(g) | Biometric categorisation on sensitive attributes | All sectors | None |
| 5(1)(h) | Real-time remote biometric ID (public spaces) | Law enforcement | Yes — 3 narrow scenarios with judicial or independent administrative authorisation |
Detailed Breakdown: Each Prohibition Explained
Each practice is broken down below with a full explanation, real-world examples, and applicable exceptions.
5(1)(a): Subliminal, manipulative, or deceptive techniques
This ban targets AI systems that use subliminal techniques beyond a person's consciousness, purposefully manipulative techniques, or deceptive methods to materially distort a person's or group's behaviour. The distortion must cause or be reasonably likely to cause significant harm. It doesn't require intent to harm — the manipulative technique itself triggers the prohibition if significant harm is a reasonably foreseeable outcome.
Real-world examples
Dark patterns that exploit cognitive biases to force purchases or consent. AI-driven persuasion engines that tailor manipulative messaging to individual psychological profiles. Recommendation systems deliberately designed to promote addictive behaviour in users. AI-generated content designed to covertly shift political opinions through imperceptible framing.
5(1)(b): Exploitation of vulnerabilities
AI systems that exploit vulnerabilities of a specific group of persons due to their age, disability, or a specific social or economic situation are banned when the exploitation materially distorts their behaviour in a harmful manner. This is separate from (a) because it doesn't require subliminal or deceptive techniques — it targets the exploitation of existing vulnerability itself. The test is whether the AI system takes advantage of characteristics that reduce a person's ability to make autonomous decisions.
Real-world examples
AI chatbots designed to build trust relationships with elderly people to extract financial commitments. AI-powered toy companions that manipulate children's behaviour through emotional exploitation. Predatory lending algorithms that specifically target people in financial distress with high-interest products. AI marketing systems that identify and exploit users with gambling addictions.
5(1)(c): Social scoring
This bans AI systems used for evaluating or classifying natural persons over a period of time based on their social behaviour or known, inferred, or predicted personal or personality characteristics, where the social score leads to detrimental treatment that's either unjustified or disproportionate, or that occurs in social contexts unrelated to those in which the data was originally generated. Earlier drafts limited this ban to public authorities, but the final text applies to both public and private actors.
Real-world examples
Government citizen "trust scores" that restrict access to public services based on behavioural data. Municipal systems that deny housing or welfare benefits based on aggregated social behaviour scores. Any system that grades citizens and uses those grades to limit their rights or access to services — the archetypal "China-style social credit" pattern applied in an EU context.
5(1)(d): Criminal risk profiling
AI that assesses or predicts the risk that a specific natural person will commit a criminal offence, based solely on profiling that person or evaluating their personality traits and characteristics, is prohibited. The key word is "solely" — the prohibition doesn't apply when AI augments human assessments that are already based on objective, verifiable facts directly linked to criminal activity. This draws a line between pure predictive profiling (banned) and evidence-augmented risk assessment (permitted under high-risk rules).
Real-world examples
Predictive policing systems that flag individuals as "likely offenders" based on demographics, neighbourhood, or personality profiling alone. AI tools that generate risk scores for individuals using social media activity, purchasing patterns, or facial features without any connection to actual criminal evidence.
Partial exception
AI systems that support human assessments grounded in objective, verifiable facts directly linked to criminal activity aren't caught by this ban. However, they'd likely qualify as high-risk AI under Annex III (law enforcement domain) and would need to meet full compliance requirements under Articles 8-15.
5(1)(e): Untargeted facial recognition scraping
Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage is outright banned. "Untargeted" is the operative word — this catches bulk, indiscriminate collection without specific suspicion or consent. It doesn't matter whether the scraping is done by a public authority or a private company. If you're hoovering up faces from the open internet or surveillance cameras to build a recognition database, you're violating Article 5.
Real-world examples
Clearview AI-style operations that scrape billions of facial images from social media platforms, news sites, and public databases. Security companies building identification databases from CCTV footage captured in public spaces. Any provider or deployer building a facial recognition training dataset through mass, non-consensual image collection.
5(1)(f): Emotion recognition in workplaces and schools
AI systems that infer emotions of natural persons in workplaces and educational institutions are banned. This covers any technology that attempts to read, classify, or score emotional states — whether through facial expression analysis, voice tone detection, keystroke dynamics, or physiological signals. The ban is context-specific: it applies in workplaces and schools, not everywhere.
Real-world examples
Employee monitoring software that tracks facial expressions during video calls to measure "engagement" or "satisfaction." Classroom AI tools that score student attention or stress levels based on webcam analysis. HR interview platforms that rate candidates on detected emotional responses.
Exceptions
Emotion recognition is permitted in workplaces and schools when used for medical reasons (detecting a medical emergency) or safety purposes (monitoring fatigue in operators of dangerous machinery, detecting distress in pilots). Outside workplaces and schools — in retail, entertainment, or healthcare settings — emotion recognition isn't banned but falls under Article 50 transparency obligations requiring disclosure to affected persons.
5(1)(g): Biometric categorisation on sensitive attributes
AI systems that categorise individual natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation are banned. This doesn't cover lawful biometric processing for other purposes (identity verification, access control) — it specifically targets the inference of sensitive personal attributes from biometric signals.
Real-world examples
AI scanning facial features to classify employees by ethnicity or predict political affiliation. Voice analysis systems attempting to determine sexual orientation. Gait recognition systems used to infer religious practice patterns. Any system that uses physical or behavioural biometric data as a proxy to categorise people into sensitive-attribute groups.
5(1)(h): Real-time remote biometric identification in public spaces
Using AI for real-time remote biometric identification in publicly accessible spaces for law enforcement purposes is banned, with three narrow exceptions. "Real-time" means the capture, comparison, and identification happen without significant delay — this is live surveillance, not retrospective analysis. "Publicly accessible spaces" covers streets, parks, shopping centres, train stations, and any space the general public can access.
Three narrow law enforcement exceptions (Article 5(1)(h), conditions in Article 5(2)-(3))
1. Victim search: Targeted search for specific victims of abduction, trafficking, or sexual exploitation.
2. Terrorist threat prevention: Preventing a specific, substantial, and imminent threat to life or a foreseeable terrorist attack.
3. Serious crime suspects: Locating or identifying a person suspected of committing a specific serious crime (as defined in the regulation).
Each use requires prior judicial or independent administrative authorisation, except in duly justified cases of urgency where authorisation must be sought within 24 hours. If authorisation is refused, the use must stop immediately and all data must be deleted.
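To make the authorisation workflow above concrete, here is a minimal sketch of the decision logic in Python; the enum labels and the function are our illustrative simplification, not an official taxonomy, and no substitute for the Act's full conditions.

```python
from enum import Enum, auto

class RbiGround(Enum):
    """The three exceptional grounds; labels are illustrative."""
    VICTIM_SEARCH = auto()          # abduction, trafficking, sexual exploitation
    IMMINENT_THREAT = auto()        # specific, substantial, imminent threat or foreseeable terrorist attack
    SERIOUS_CRIME_SUSPECT = auto()  # suspect of a specific serious crime

def may_use_rtbi(ground: "RbiGround | None",
                 authorised: bool,
                 urgent: bool,
                 refused: bool) -> bool:
    """Sketch of the logic: an exceptional ground is necessary but not
    sufficient; authorisation is required, urgency only defers the request
    (it must still be sought within 24 hours), and a refusal means stop
    immediately and delete the data."""
    if ground is None or refused:
        return False
    return authorised or urgent
```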
Prohibited vs. High-Risk vs. Transparency-Only
Understanding where prohibited practices sit in the broader risk classification helps you assess your own AI systems correctly.
| Dimension | Prohibited (Art. 5) | High-Risk (Annex III) | Limited Risk (Art. 50) |
|---|---|---|---|
| Status | Completely banned | Permitted with strict requirements | Permitted with transparency disclosure |
| Enforcement date | 2 Feb 2025 | 2 Aug 2026 | 2 Aug 2026 |
| Maximum fine | €35M / 7% turnover | €15M / 3% turnover | €15M / 3% turnover |
| Compliance path | Shut it down | Conformity assessment + ongoing obligations | Disclosure to users |
| Example | Social scoring system | AI hiring screening tool | Customer service chatbot |
Check Your AI Systems
Not sure whether your AI system crosses the line? Run it through the free compliance checker — it evaluates your system against all 8 prohibitions plus high-risk classification criteria.
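If you want a first-pass screen in-house before reaching for any external tool, a minimal sketch might look like the following; the question texts are our own paraphrase of Article 5, not an official checklist, and a "yes" means escalate to legal review, not a definitive finding.

```python
# Illustrative first-pass Article 5 screen, paraphrasing the eight
# prohibitions discussed above.
ARTICLE_5_SCREEN = {
    "5(1)(a)": "Uses subliminal, manipulative, or deceptive techniques likely to cause significant harm?",
    "5(1)(b)": "Exploits vulnerabilities linked to age, disability, or social/economic situation?",
    "5(1)(c)": "Scores people on social behaviour or traits, causing unjustified or out-of-context detriment?",
    "5(1)(d)": "Predicts criminal offending based solely on profiling or personality traits?",
    "5(1)(e)": "Builds or expands a facial recognition database via untargeted scraping?",
    "5(1)(f)": "Infers emotions in a workplace or school, outside medical/safety use?",
    "5(1)(g)": "Categorises people by biometric data to infer sensitive attributes?",
    "5(1)(h)": "Performs real-time remote biometric ID in public spaces for law enforcement?",
}

def screen(answers: "dict[str, bool]") -> "list[str]":
    """Return the Article 5 references flagged for legal review."""
    return [ref for ref, hit in answers.items() if hit]

flags = screen({ref: False for ref in ARTICLE_5_SCREEN})
print(flags or "No Article 5 flags; still assess high-risk (Annex III) classification.")
```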
Frequently Asked Questions
When did the EU AI Act prohibited practices take effect?
The Article 5 prohibitions have applied since 2 February 2025, the first enforcement milestone of the Act.
What are the penalties for deploying a prohibited AI system?
Fines of up to €35 million or 7% of global annual turnover, whichever is greater, the highest penalty tier in the Act (Article 99(3)).
Does the biometric identification ban have any exceptions?
Yes. Real-time remote biometric identification in public spaces is permitted in three narrow law enforcement scenarios (victim search, imminent terrorist threats, and serious crime suspects), each requiring prior judicial or independent administrative authorisation.
Is emotion recognition completely banned?
No. The ban applies in workplaces and educational institutions, with exceptions for medical and safety reasons; in other settings, emotion recognition is permitted subject to Article 50 transparency obligations.
How do I check if my AI system is a prohibited practice?
Compare it against the eight Article 5 prohibitions summarised above, or run it through the free compliance checker, which also assesses high-risk classification.
Need Audit-Ready Compliance Toolkits?
Templates, checklists, and evidence packs built for the August 2, 2026 deadline.
View Compliance Toolkits →
Disclaimer & Legal Basis
This page is provided for educational and informational purposes only by Move78 International Limited. It doesn't constitute legal advice, regulatory guidance, or professional consultation. The content represents our interpretation of the EU Artificial Intelligence Act (Regulation (EU) 2024/1689) and may evolve as enforcement develops.
Primary sources: Regulation (EU) 2024/1689 (EUR-Lex) · AI Act Service Desk · European Commission AI Policy