Regulatory Impact
Between December 2025 and February 2026, xAI's Grok chatbot generated non-consensual sexually explicit deepfakes of public figures and private individuals, triggering a coordinated regulatory response across the EU.
The crisis has directly led to a proposed ninth Article 5 prohibition targeting non-consensual nudification AI, advanced through the Digital Omnibus amendments. This is not yet law, but political momentum is strong.

Timeline of Events
The crisis unfolded in three overlapping phases. In December 2025, users discovered that xAI's Grok chatbot could generate sexually explicit images of real, identifiable individuals without consent. The capability was rapidly exploited at scale, targeting both public figures and private citizens. Unlike previous deepfake incidents, the Grok system's ease of use and wide availability through X (formerly Twitter) made mass generation trivially accessible.
In January and February 2026, the regulatory response intensified. French prosecutors raided X's Paris offices on 3 February 2026. Spain ordered investigations into X, Meta, and TikTok over the distribution of AI-generated child sexual abuse material (CSAM). On 16 February 2026, Ireland's Data Protection Commission (DPC) opened a formal EU-wide privacy investigation into xAI under GDPR cross-border enforcement mechanisms. On 17 February, the European Parliament disabled all built-in AI features on lawmakers' devices, citing security and integrity risks.
Simultaneously, the political response crystallized. The European Parliament's JURI Committee drafted an opinion proposing an explicit Article 5 prohibition on non-consensual nudification AI. Irish MEP McNamara was appointed to lead the legislative response. The S&D and Greens political groups filed similar amendments to the Digital Omnibus.

The Proposed Nudification Ban
The JURI Committee's proposed amendment would add a ninth prohibited practice to Article 5, explicitly banning AI systems that generate non-consensual sexually explicit images. This would sit alongside the existing eight prohibitions covering subliminal manipulation, exploitation of vulnerable groups, social scoring, predictive policing, untargeted facial recognition scraping, emotion recognition in workplaces and educational institutions, biometric categorization, and real-time remote biometric identification.
The amendment carries strong political momentum. Multiple political groups support it, and the Grok crisis has created public demand for action. However, defining the boundary between "nudification" systems and broader image-generation AI remains a significant technical challenge. A prohibition that is too narrow fails to prevent harm; one that is too broad risks capturing legitimate creative, medical, and research applications.
The amendment is being advanced through the Digital Omnibus legislative process. The IMCO-LIBE committee vote is expected around 18 March 2026, though the nudification provision may be handled separately to avoid delaying the broader deadline extension.

Existing Legal Framework
The Grok crisis has exposed a gap in the current Article 5 prohibited practices list. While the existing prohibitions cover social scoring, manipulative techniques, and certain biometric uses, they do not explicitly address non-consensual intimate image generation. The deepfake labelling requirement under Article 50(4) addresses disclosure obligations but does not prohibit the generation itself.
GDPR provides a complementary enforcement vector. The DPC's investigation focuses on the processing of personal data (biometric data derived from publicly available images) without lawful basis. This is the route most likely to produce near-term enforcement outcomes, as GDPR mechanisms are well-established and the DPC has cross-border enforcement powers.

Implications for Your Organization
If you operate image generation AI: Audit your system's safeguards against generating non-consensual intimate imagery. Regardless of whether the formal prohibition passes, deploying such capabilities carries extreme reputational and legal risk under existing GDPR, national criminal law, and the AI Act's manipulation provisions.
If you deploy AI content generation more broadly: The crisis has accelerated Article 50 enforcement urgency. Content marking, watermarking, and provenance tracking are no longer optional features but imminent legal requirements. See our Article 50 Code of Practice analysis for the specific technical requirements.
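To make the provenance-tracking point concrete, the sketch below shows one minimal shape such a record could take: a content hash bound to generator metadata and an explicit AI-generation flag. This is an illustrative structure only, using hypothetical field names; it is not the C2PA manifest format or any mandated schema, and real deployments should adopt an established standard rather than an ad-hoc record like this.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_record(content: bytes, generator: str, model_version: str) -> dict:
    """Build a minimal provenance record binding generated content to its origin.

    Illustrative only: field names are hypothetical, not a standardized
    manifest format such as C2PA.
    """
    return {
        # Cryptographic hash ties the record to these exact bytes.
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "model_version": model_version,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        # Explicit machine-readable disclosure flag, in the spirit of
        # Article 50's transparency aims.
        "ai_generated": True,
    }

record = build_provenance_record(b"<image bytes>", "example-image-model", "1.0")
manifest = json.dumps(record, indent=2)  # serialized for embedding or sidecar storage
```

The key design point is that the hash binds the disclosure to the specific output: if the content is altered, the record no longer matches, which is the property watermarking and provenance schemes are meant to preserve at scale.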
If you are monitoring Article 5 compliance: Review our complete breakdown of all eight current prohibitions plus the proposed ninth. Use the 12-question Compliance Checker to verify your AI portfolio against these boundaries.

About the author: Abhishek G Sharma is the founder of Move78 International Limited. He holds ISO 42001 Lead Auditor, CISA, CISM, CRISC, and CEH certifications. He brings over 20 years of practitioner experience in cybersecurity, AI governance, and enterprise risk management.
Disclaimer: This analysis is for educational purposes only. The proposed nudification prohibition is not yet adopted law. Consult qualified legal counsel for binding compliance decisions. Published: March 2026.