Blog · March 2026 · 10 min read

All 8 Prohibited AI Practices Under Article 5 — Explained With Examples

Article 5: Eight AI practices banned outright. No compliance pathway exists — only cessation.
Already enforceable. These prohibitions have been in force since 2 February 2025. Violations carry fines of up to €35 million or 7% of worldwide annual turnover, whichever is higher — the highest penalty tier under the EU AI Act.

Article 5 of the EU AI Act establishes an absolute floor: eight categories of AI practices that are considered so harmful to fundamental rights that no compliance framework can make them acceptable. They are banned outright. There is no conformity assessment pathway, no sandbox exemption, and no transitional period remaining.
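The penalty ceiling works as a "whichever is higher" rule under Article 99(3): €35 million as an absolute floor, or 7% of worldwide annual turnover if that figure is larger. A minimal sketch of that calculation (illustrative only — actual fines are set case by case by national authorities and may be far below the ceiling):

```python
def max_article5_fine(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of an Article 5 fine under Article 99(3) of the EU AI Act:
    the higher of EUR 35 million or 7% of worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# A company with EUR 2 billion turnover: 7% (EUR 140M) exceeds the EUR 35M floor.
print(max_article5_fine(2_000_000_000))

# A small firm with EUR 10 million turnover: the EUR 35M absolute figure applies,
# which is why Article 5 exposure is existential for small companies too.
print(max_article5_fine(10_000_000))
```

Note that for large enterprises the turnover-based figure dominates, so the practical exposure scales with company size rather than being capped at €35 million.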

For an interactive walkthrough of these prohibitions, explore our Prohibited Practices module in the EU AI Compass app, or take the 12-question Compliance Checker which includes a prohibited practice screening step.

1. Subliminal, Manipulative, or Deceptive Techniques — Article 5(1)(a)

AI systems that deploy subliminal techniques, purposefully manipulative methods, or deceptive approaches that materially distort a person's behaviour, causing or likely to cause significant harm. The key threshold is "materially distort" — the AI must meaningfully change how a person would otherwise act, beyond normal commercial persuasion. Think dark patterns at scale: AI that exploits cognitive biases to push elderly users toward unfavourable financial products, or systems that use personalised psychological profiling to manipulate purchasing decisions in ways the user cannot recognise.

2. Exploitation of Vulnerable Groups — Article 5(1)(b)

AI systems that exploit vulnerabilities related to age, disability, or specific social or economic circumstances to materially distort behaviour in a way that causes significant harm. This goes beyond general-purpose advertising. It targets AI specifically designed to take advantage of reduced capacity to resist — for example, AI-driven marketing that uses voice analysis to detect cognitive decline in elderly customers and then adjusts sales tactics accordingly, or gamified apps that exploit children's developing decision-making capacity.

3. Social Scoring — Article 5(1)(c)

AI systems that evaluate or classify individuals or groups over time based on their social behaviour or known, inferred, or predicted personal characteristics, where the resulting social score leads to detrimental treatment that is unjustified, disproportionate, or unrelated to the context in which the data was originally collected. Although drafted with government-operated social credit systems in mind, the final text of the Act applies to both public and private actors — the original proposal's restriction to public authorities was broadened during negotiations. A municipal authority using AI to score residents' civic behaviour and then restricting access to public services based on those scores would be a clear violation; so would a private platform scoring users' social behaviour and denying them unrelated services on that basis.

4. Predictive Policing Based Solely on Profiling — Article 5(1)(d)

AI systems that assess the risk of an individual committing a criminal offence based solely on profiling or personality traits, without being based on objective and verifiable facts directly linked to criminal activity. Law enforcement can still use AI to analyse factual evidence, but cannot deploy systems that predict criminal behaviour purely from demographic, psychological, or behavioural profiling. A system flagging an individual as likely to commit theft based on their neighbourhood, employment status, and online activity — without any factual connection to criminal conduct — falls squarely within this prohibition.

5. Untargeted Facial Recognition Database Scraping — Article 5(1)(e)

AI systems that create or expand facial recognition databases through untargeted scraping of images from the internet or CCTV footage. This does not ban facial recognition technology entirely. It bans the specific practice of building reference databases by indiscriminately harvesting facial images without the knowledge or consent of the individuals depicted. The distinction is important: a company that scrapes social media profiles to build a facial recognition database violates this provision; a border control system that matches travellers against a lawfully compiled watchlist may not (though it likely triggers other requirements).

6. Emotion Recognition in Workplaces and Schools — Article 5(1)(f)

AI systems used to infer emotions of individuals in workplaces and educational institutions. This is one of the more commercially impactful prohibitions. Employee monitoring tools that claim to detect engagement, stress, or satisfaction through facial analysis, voice patterns, or physiological signals are banned in workplace and educational settings. The exception is narrow: emotion recognition remains permitted for medical or safety purposes (for example, detecting driver fatigue). If your organisation uses any tool marketed as measuring "employee sentiment" or "student engagement" through biometric analysis, review it immediately against this provision.

7. Biometric Categorisation Inferring Sensitive Attributes — Article 5(1)(g)

AI systems that categorise individuals based on biometric data to infer race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation. Using facial features, voice characteristics, or other biometric markers to sort people into categories based on protected characteristics is prohibited. A narrow carve-out exists for the labelling or filtering of lawfully acquired biometric datasets, including in law enforcement contexts. A retail analytics system that uses in-store cameras to infer customers' ethnicity for targeted marketing would violate this provision.

8. Real-Time Remote Biometric Identification in Public Spaces — Article 5(1)(h)

Real-time remote biometric identification systems used in publicly accessible spaces for law enforcement purposes. This is the most heavily qualified prohibition, with three narrow exceptions for law enforcement: searching for specific victims (abduction, trafficking, sexual exploitation), preventing a genuine and present threat to life or a genuine and foreseeable terrorist attack, and locating or identifying a suspect for specific serious criminal offences. Each exception requires prior authorisation from a judicial authority or an independent administrative authority; in duly justified urgent cases, use may begin before authorisation, which must then be sought without undue delay. For all non-law-enforcement applications, real-time biometric identification in public spaces is prohibited without exception.

Practical Audit Steps

All eight Article 5 prohibitions at a glance. Violations carry €35M or 7% global turnover penalties.

Review your AI inventory against each of these eight categories. Pay particular attention to employee monitoring tools (prohibition 6), customer analytics platforms using biometric or behavioural profiling (prohibitions 1, 2, 7), and any tools that scrape or compile facial data (prohibition 5). For visual explanations of each prohibition, browse the EU AI Act comic series — several comics address specific Article 5 scenarios.

If you identify a potential prohibited practice in your portfolio, the correct response is immediate cessation, not remediation. Unlike high-risk classification where you can build compliance over time, prohibited practices have no compliant deployment pathway.
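The audit described above can be sketched as a simple tagging pass over an AI inventory. Everything below is a hypothetical illustration — the inventory format, field names, and tags are assumptions, not part of the EU AI Act or any official tooling; the legal assessment behind each tag still requires human and legal review:

```python
# Illustrative mapping of Article 5 paragraphs to the eight prohibited practices.
PROHIBITED_CATEGORIES = {
    "5(1)(a)": "subliminal, manipulative, or deceptive techniques",
    "5(1)(b)": "exploitation of vulnerable groups",
    "5(1)(c)": "social scoring",
    "5(1)(d)": "predictive policing based solely on profiling",
    "5(1)(e)": "untargeted facial recognition database scraping",
    "5(1)(f)": "emotion recognition in workplaces and schools",
    "5(1)(g)": "biometric categorisation inferring sensitive attributes",
    "5(1)(h)": "real-time remote biometric identification in public spaces",
}

def screen_inventory(inventory: list[dict]) -> list[dict]:
    """Flag any inventory entry tagged with an Article 5 category.

    The recommended action mirrors the article's point: cessation, not remediation.
    """
    flagged = []
    for system in inventory:
        hits = [tag for tag in system.get("article5_tags", [])
                if tag in PROHIBITED_CATEGORIES]
        if hits:
            flagged.append({
                "name": system["name"],
                "hits": hits,
                "action": "cease deployment; escalate to legal counsel",
            })
    return flagged

# Hypothetical inventory: one emotion-recognition tool, one benign OCR pipeline.
inventory = [
    {"name": "HR sentiment dashboard", "article5_tags": ["5(1)(f)"]},
    {"name": "Invoice OCR pipeline", "article5_tags": []},
]
print(screen_inventory(inventory))
```

The hard part, of course, is assigning the tags in the first place — that is the substantive legal analysis this sketch deliberately leaves to humans.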

There is no remediation pathway for prohibited practices. The only compliant response is to stop.

About the author: Abhishek G Sharma is the founder of Move78 International Limited and holds ISO 42001 Lead Auditor, CISA, CISM, CRISC, and CEH certifications.

Disclaimer: This article is for educational purposes only and does not constitute legal advice. Consult qualified legal counsel before making binding compliance decisions. Last updated: March 2026.