Executive Summary
Article 5 prohibits eight specific AI practices outright, including social scoring, subliminal manipulation, workplace emotion recognition, and untargeted facial recognition scraping. These categorical bans have applied since 2 February 2025.
Violations trigger the Act's highest penalty tier: fines of up to €35 million or 7% of total worldwide annual turnover, whichever is higher. There is no remediation pathway; immediate cessation is the only legal option.
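To make that exposure concrete, here is a minimal sketch of the penalty ceiling set by Article 99(3) of the Act. The function name and the worked figure are illustrative only; actual fines are set by national authorities based on the circumstances of the infringement.

```python
def article5_penalty_cap(worldwide_annual_turnover_eur: float) -> float:
    """Statutory ceiling for an Article 5 violation under Article 99(3):
    EUR 35 million or 7% of total worldwide annual turnover for the
    preceding financial year, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# A group with EUR 2 billion in annual turnover faces a ceiling of EUR 140 million.
print(article5_penalty_cap(2_000_000_000))  # 140000000.0
```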
Article 5 of the EU AI Act establishes an absolute regulatory floor. It defines eight categories of AI practices deemed fundamentally incompatible with European rights. These practices cannot be mitigated through compliance frameworks, conformity assessments, or regulatory sandboxes. They are banned unequivocally.
The European Commission has confirmed this list is exhaustive but subject to annual review and potential expansion to match technological evolution.
For an interactive technical screening, explore our Prohibited Practices module within the EU AI Compass application, or utilize our 12-question Compliance Checker to conduct a preliminary organizational audit.
1. Subliminal, Manipulative, or Deceptive Techniques: Article 5(1)(a)
This prohibition targets AI systems utilizing subliminal, manipulative, or deceptive techniques to materially distort human behavior. The legal threshold requires the distortion to cause, or be likely to cause, significant physical or psychological harm.
This extends far beyond standard commercial persuasion: it squarely captures digital dark patterns deployed at scale. Example: An AI recommendation engine that exploits recognized cognitive biases to steer elderly consumers toward unfavorable financial products.
2. Exploitation of Vulnerable Groups: Article 5(1)(b)
This provision bans AI systems engineered to exploit specific human vulnerabilities linked to age, disability, or socioeconomic circumstances. The regulatory objective is to prevent material behavioral distortion that results in significant harm.
This prohibition transcends general targeted advertising. It specifically isolates systems designed to capitalize on a user's reduced capacity for resistance. Example: An AI-driven telemarketing system that utilizes voice analysis to detect cognitive decline in elderly respondents, subsequently altering its sales script to exploit that confusion.
3. Social Scoring: Article 5(1)(c)
Article 5 outlaws the deployment of AI systems for social scoring by public and private actors alike. This covers evaluating or classifying individuals over time based on social behavior or personal characteristics, where the resulting score leads to detrimental treatment that is unjustified, disproportionate, or divorced from the context in which the data was originally collected.
This provision effectively bans state-operated social credit infrastructures. Example: A municipal government utilizing AI to score citizens' civic compliance, and subsequently restricting their access to public housing based on those automated classifications.
4. Predictive Policing Based Solely on Profiling: Article 5(1)(d)
This clause prohibits AI systems from assessing the risk of an individual committing a criminal offense based solely on profiling or personality traits. Risk assessments must be grounded in objective, verifiable facts directly linked to past criminal activity.
Law enforcement agencies retain the ability to utilize AI for factual evidence analysis. However, predictive policing driven purely by demographic or behavioral modeling is strictly illegal. Example: An algorithm flagging a citizen as a high-risk theft suspect based exclusively on their residential postal code, employment status, and web browsing history.
5. Untargeted Facial Recognition Database Scraping: Article 5(1)(e)
The Act strictly forbids the creation or expansion of facial recognition databases via the untargeted scraping of internet images or CCTV footage. This does not constitute a blanket ban on facial recognition technology.
Instead, it outlaws the mass, indiscriminate harvesting of facial images, regardless of whether the source material is publicly accessible. Example: A commercial software vendor automatically scraping millions of public social media profiles to train a proprietary facial identification model.
6. Emotion Recognition in Workplaces and Schools: Article 5(1)(f)
Deploying AI to infer the emotional states of individuals within workplaces or educational institutions is strictly prohibited. This represents a highly consequential commercial restriction for mid-market enterprises.
Corporate tools claiming to monitor employee engagement, stress levels, or student attentiveness via facial analysis or voice telemetry are now illegal in the EU. Narrow exemptions exist strictly for medical or safety purposes, such as detecting commercial driver fatigue. Violating this specific ban exposes enterprises to the maximum penalty tier.
7. Biometric Categorization Inferring Sensitive Attributes: Article 5(1)(g)
It is illegal to deploy AI systems that categorize natural persons based on biometric data to deduce sensitive attributes. These protected attributes include race, political opinions, trade union membership, religious beliefs, sex life, and sexual orientation.
Extracting demographic classifications from facial geometry or vocal patterns violates this mandate. Example: A retail analytics platform utilizing in-store cameras to sort customers by inferred ethnicity in order to serve hyper-targeted digital advertisements.
8. Real-Time Remote Biometric Identification in Public Spaces: Article 5(1)(h)
The use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes is banned. The legislation permits three exceptionally narrow exemptions: searching for specific victims of serious crimes, preventing genuine threats to life or terrorist attacks, and identifying suspects of specific severe offenses.
Every exception demands prior authorization from a judicial authority or an independent administrative authority; in duly justified urgent cases, use may begin only if authorization is requested without undue delay, at the latest within 24 hours. For all private enterprises and non-law-enforcement entities, real-time public biometric identification is banned unconditionally.
Proposed 9th Prohibition: Non-Consensual Nudification AI
The European Parliament's JURI Committee has drafted an opinion proposing a ninth Article 5 prohibition: an explicit ban on AI systems that generate non-consensual sexually explicit images (commonly known as "nudification" or "deepfake" applications). This amendment is driven directly by the Grok deepfake crisis of December 2025 through February 2026, during which xAI's Grok chatbot generated non-consensual sexually explicit deepfakes of public figures and private individuals.
The crisis triggered a multi-jurisdictional regulatory response. Ireland's Data Protection Commission opened a formal EU privacy investigation on 16 February 2026. French prosecutors raided X's Paris offices on 3 February 2026. Spain ordered investigations into X, Meta, and TikTok for AI-generated CSAM. The European Parliament disabled all built-in AI features on lawmakers' devices on 17 February 2026. Irish MEP McNamara was appointed to lead the legislative response, and the S&D and Greens political groups filed similar amendments.
The proposed ban has strong political momentum. However, scope definition remains contentious, as the boundary between "nudification" systems and broader image-generation capabilities is technically difficult to draw. The amendment may be narrowed or deferred to avoid delaying the broader Digital Omnibus adoption. Organizations operating AI image generation systems should monitor this development closely, as adoption would create an immediate cessation obligation with no compliance pathway.
Practical Audit Steps

Organizations must rigorously audit their existing AI portfolios against these eight operational boundaries. Immediate scrutiny should be applied to:
- Employee monitoring and productivity tools (Prohibition 6).
- Customer analytics platforms utilizing biometric or behavioral profiling (Prohibitions 1, 2, and 7).
- Software architectures that scrape or aggregate facial imagery (Prohibition 5).
For visual explanations of each prohibition, browse our EU AI Act comic series to accelerate internal team training.
If an audit identifies a prohibited practice, the only legally sound response is immediate system termination. Prohibited practices offer no remediation window and no compliant deployment pathway.
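To show how such a first-pass screen might plug into an AI inventory, here is a minimal Python sketch. The trait tags, flag mapping, and function names are hypothetical illustrations, not legal tests; every hit should be routed to qualified counsel for a cessation decision.

```python
from dataclasses import dataclass

# Hypothetical mapping of inventory trait tags to Article 5 prohibitions.
# Illustrative triage signals only; a match is a reason to escalate, not a verdict.
ARTICLE_5_FLAGS = {
    "emotion_inference_workplace_or_school": "Prohibition 6 - Art. 5(1)(f)",
    "biometric_sensitive_categorization": "Prohibition 7 - Art. 5(1)(g)",
    "untargeted_face_scraping": "Prohibition 5 - Art. 5(1)(e)",
    "manipulative_or_exploitative_design": "Prohibitions 1-2 - Art. 5(1)(a)-(b)",
}

@dataclass
class AISystem:
    name: str
    traits: set[str]  # tags assigned during the portfolio inventory

def screen_portfolio(systems: list[AISystem]) -> dict[str, list[str]]:
    """Return systems whose traits match an Article 5 flag, for
    escalation to counsel and immediate cessation review."""
    hits: dict[str, list[str]] = {}
    for system in systems:
        matched = sorted(ARTICLE_5_FLAGS[t] for t in system.traits if t in ARTICLE_5_FLAGS)
        if matched:
            hits[system.name] = matched
    return hits

portfolio = [
    AISystem("hr-engagement-monitor", {"emotion_inference_workplace_or_school"}),
    AISystem("invoice-ocr", {"document_processing"}),
]
print(screen_portfolio(portfolio))
# {'hr-engagement-monitor': ['Prohibition 6 - Art. 5(1)(f)']}
```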

About the author: Abhishek G Sharma is the founder of Move78 International Limited. He holds ISO 42001 Lead Auditor, CISA, CISM, CRISC, and CEH certifications. He brings over 20 years of practitioner experience in cybersecurity, AI governance, and enterprise risk management.
Disclaimer: This analysis is provided for educational purposes only and does not constitute formal legal advice. Consult qualified regulatory counsel before establishing binding compliance policies. Last updated: March 2026.
