Blog · Reviewed 9 May 2026 · 7 min read
EU AI Act Nudification Ban: What AI Deployers Should Check
A practical deployer review guide for the Digital Omnibus political-agreement item covering non-consensual intimate-content generation and AI-created child sexual abuse material.
Source basis: Regulation (EU) 2024/1689, the European Commission 7 May 2026 Digital Omnibus announcement, the Council 7 May 2026 provisional-agreement release, and the European Commission AI Act FAQ. This page is educational and does not provide legal advice or compliance guarantees.
Quick answer: The EU's 7 May 2026 Digital Omnibus political agreement added a new prohibition track aimed at AI practices involving the generation of non-consensual sexual and intimate content, as well as AI-created child sexual abuse material. That matters now for vendor screening, internal use-case reviews, content-safety controls, and evidence files, even though the text still needs formal adoption.

What changed on 7 May 2026
The most visible new measure in the Omnibus agreement is the added prohibition on AI practices involving the generation of non-consensual sexual and intimate content or child sexual abuse material. The Council's wording describes a new provision added by the co-legislators in the provisional agreement, not a final enacted change already in force.
That distinction matters. EU AI Compass treats the measure as a planning and governance trigger now, while still labelling it as pending formal adoption and publication.
What the nudification ban is really about
This is not just a content-labeling issue. It is a prohibited-practices issue. A deployer does not need to operate a consumer "nudification app" to have review work to do.
| Risk bucket | Practical review question |
|---|---|
| Fake intimate image or video generation | Can the system generate or transform content that creates intimate synthetic media of real people without consent? |
| Workflow enablement | Do templates, shortcuts, plugins, or editing tools make prohibited generation materially easier? |
| Child sexual abuse material | Do safety controls specifically block child sexual abuse material and ambiguous age-related abuse patterns? |
| Third-party model exposure | Could imported models, wrappers, extensions, or APIs bypass the organisation's normal controls? |
What deployers should review now
- Use-case mapping: Identify whether internal or customer-facing image, video, avatar, or transformation features could be misused for intimate synthetic content.
- Vendor capability review: Ask providers whether their model, API, or app can generate, transform, or infer nudity or explicit intimate imagery.
- Safety control review: Check whether the provider blocks prompts, uploads, fine-tuning, and editing workflows linked to non-consensual intimate-content generation.
- Abuse reporting path: Confirm how flagged outputs, user complaints, and urgent escalations are handled.
- Acceptable-use controls: Check whether contracts, product terms, and internal policies prohibit these use cases.
- Evidence file: Retain due diligence records, policy updates, approval notes, and test results.
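The review steps above can be tracked as a simple structured record. The sketch below is illustrative only: the field names, the `FeatureReview` class, and the escalation rule are assumptions for this post, not terminology from the AI Act or the Digital Omnibus text.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FeatureReview:
    """Illustrative record for one AI feature under deployer review.

    All field names are assumptions chosen for this sketch, not
    regulatory terms.
    """
    system: str                   # e.g. "marketing-avatar-tool"
    owner: str                    # accountable team or person
    can_generate_imagery: bool    # image/video/avatar generation or transformation
    vendor_reviewed: bool         # capability questionnaire returned
    safety_controls_checked: bool # prompt/upload/editing filters verified
    abuse_path_confirmed: bool    # reporting and escalation route confirmed
    decision: str = "pending"     # "approved" | "restricted" | "rejected" | "escalated"
    reviewed_on: date = field(default_factory=date.today)

    def needs_escalation(self) -> bool:
        """Flag features with generative capability and any open check."""
        checks = (self.vendor_reviewed,
                  self.safety_controls_checked,
                  self.abuse_path_confirmed)
        return self.can_generate_imagery and not all(checks)

review = FeatureReview(system="marketing-avatar-tool", owner="brand-team",
                       can_generate_imagery=True, vendor_reviewed=True,
                       safety_controls_checked=False, abuse_path_confirmed=False)
print(review.needs_escalation())  # True: generative capability with open checks
```

The point of the sketch is the rule, not the tooling: any feature that can generate or transform visual content should not reach "approved" while a vendor, safety-control, or abuse-path check is still open.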

Vendor questions to ask
- Does the system generate or transform image, video, or avatar content in ways that could be used to create intimate synthetic media?
- Are prompts, uploads, and editing functions filtered for non-consensual intimate-content generation?
- Are minors' likenesses, age ambiguity, and sexualised outputs specifically covered by safety controls?
- Does the vendor monitor abuse patterns and retrain or patch safety controls?
- Can the vendor provide test evidence, policy documentation, incident-handling procedures, or model cards?
- Are logs, moderation decisions, and user-reporting events retained in an auditable way?
- Can the system be fine-tuned or extended through third-party plugins that bypass safeguards?
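A questionnaire like the one above is easiest to audit if unanswered items are surfaced mechanically. This is a minimal sketch under assumed names: the question keys paraphrase the bullets above, and the "missing or empty answer means open" rule is an illustration, not a compliance standard.

```python
# Hypothetical question keys paraphrasing the vendor questions above.
VENDOR_QUESTIONS = [
    "generates_or_transforms_visual_content",
    "filters_prompts_uploads_edits",
    "covers_minor_likeness_and_age_ambiguity",
    "monitors_abuse_and_patches_controls",
    "provides_test_evidence_and_docs",
    "retains_auditable_logs",
    "plugin_or_finetune_bypass_possible",
]

def open_questions(responses: dict[str, str]) -> list[str]:
    """Return questionnaire items the vendor has not yet answered."""
    return [q for q in VENDOR_QUESTIONS if not responses.get(q)]

responses = {
    "generates_or_transforms_visual_content": "yes - image and avatar APIs",
    "filters_prompts_uploads_edits": "yes - documented in safety note v3",
}
print(open_questions(responses))  # the five items still awaiting answers
```

Keeping the open-items list alongside the raw responses also feeds the evidence file: it shows not only what the vendor said, but what the deployer still had to chase.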
Evidence to retain
| Evidence artifact | Why it matters |
|---|---|
| Feature inventory entry | Shows the system, owner, purpose, users, data categories, and content-generation capability. |
| Vendor questionnaire response | Records what the supplier said about model capability, restrictions, and safety controls. |
| Safety-control summary | Documents prompt filtering, upload checks, moderation, abuse reporting, and escalation routes. |
| Approval or rejection note | Shows whether the feature was approved, restricted, disabled, or escalated for legal review. |
| Regulatory status note | Separates current-law baseline decisions from Digital Omnibus provisional-agreement watch items. |
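The artifacts in the table can live in one dated entry per system. The JSON layout below is an assumption for illustration, with the artifact names mirroring the table; the one deliberate design point, taken from the table's last row, is that the legal-status note keeps the enacted baseline separate from the provisional-agreement watch item.

```python
import json
from datetime import date

# Illustrative evidence-file entry; file paths and the layout itself
# are assumptions, not a regulatory format.
entry = {
    "system": "design-suite-image-editor",
    "artifacts": {
        "feature_inventory": "inventory/design-suite.md",
        "vendor_questionnaire": "vendors/acme-q2-response.pdf",
        "safety_control_summary": "reviews/acme-safety-summary.md",
        "decision_note": "restricted: avatar transformation disabled",
    },
    "legal_status": {
        "baseline": "Regulation (EU) 2024/1689",
        "watch_item": "Digital Omnibus provisional agreement, 7 May 2026",
        "watch_item_status": "pending formal adoption and publication",
    },
    "recorded_on": date.today().isoformat(),
}
print(json.dumps(entry, indent=2))
```

A dated, machine-readable record like this makes it straightforward to show later which decisions rested on enacted law and which were precautionary responses to the provisional agreement.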
Common mistakes
The biggest mistake: treating this as a consumer-app issue only. Enterprise teams often use image, avatar, video, design, HR, marketing, and support tools without checking the underlying model capability or plugin path.
- Assuming "we are only a deployer" means no review is needed.
- Reviewing only the front-end feature while ignoring the model and API capability.
- Accepting marketing claims instead of asking for test evidence.
- Ignoring plugins, wrappers, fine-tuning, or editing modules.
- Calling the new item final law before formal adoption and publication.
What to do next
If a system touches image generation, transformation, face editing, avatar creation, or visual manipulation, review it now. Update the AI system inventory, run vendor due diligence, confirm prohibited-practice exposure, and keep an evidence trail that records what was checked and when.
FAQ
Direct answers on the provisional nudification-ban track, deployer review duties, vendor checks, and evidence retention.
**Is the nudification ban already in force?**
No. The EU AI Act nudification ban is part of the 7 May 2026 Digital Omnibus political agreement. It still requires formal adoption and publication before it changes the legal text. Treat it as a planning and evidence-review trigger, not as final enacted law.

**What does the provisional ban cover?**
The provisional nudification ban targets AI practices involving the generation of non-consensual sexual or intimate content and AI-created child sexual abuse material. The final legal wording and numbering should be checked after formal adoption, because the current published AI Act baseline remains Regulation (EU) 2024/1689.

**Why does this matter for deployers, not just providers?**
Deployers still choose vendors, activate features, approve workflows, and expose users to system outputs. A deployer evidence file should record vendor capability checks, safety controls, acceptable-use restrictions, escalation paths, and the decision note explaining whether a feature was approved, restricted, or rejected.

**Does the ban only affect consumer nudification apps?**
No. The nudification-ban review is not limited to consumer photo apps. Enterprise image, video, avatar, marketing, HR, customer-support, or design workflows can raise review questions if third-party models, plugins, or editing features could enable intimate synthetic-content generation.

**What evidence should an organisation retain?**
An organisation should keep the use-case review, vendor questionnaire, safety-control summary, approval or rejection note, escalation path, and dated legal-status record. The legal-status record should separate current-law baseline decisions from Digital Omnibus provisional-agreement watch items.