EU AI Act update, 9 May 2026: current law remains the baseline. The Digital Omnibus provisional agreement would move many high-risk AI obligations to 2 Dec 2027 and product-integrated high-risk AI rules to 2 Aug 2028 if formally adopted.

Blog · Reviewed 9 May 2026 · 7 min read

Pending formal adoption · Prohibited-practice watch

EU AI Act Nudification Ban: What AI Deployers Should Check

A practical deployer review guide for the Digital Omnibus political-agreement item covering non-consensual intimate-content generation and AI-created child sexual abuse material.

Reviewed: 9 May 2026.

Source basis: Regulation (EU) 2024/1689, the European Commission's 7 May 2026 Digital Omnibus announcement, the Council's 7 May 2026 provisional-agreement release, and the European Commission's AI Act FAQ. This page is educational and does not provide legal advice or compliance guarantees.

Quick answer: The EU's 7 May 2026 Digital Omnibus political agreement added a new prohibition track aimed at AI practices involving the generation of non-consensual sexual and intimate content, as well as AI-created child sexual abuse material. That matters now for vendor screening, internal use-case reviews, content-safety controls, and evidence files, even though the text still needs formal adoption.

AI content-safety review for vendor due diligence, prohibited-use screening, and evidence retention.

What changed on 7 May 2026

The most visible new measure in the Omnibus agreement is the added prohibition on AI practices involving the generation of non-consensual sexual and intimate content or child sexual abuse material. The Council's wording describes a new provision added by the co-legislators in the provisional agreement, not a final enacted change already in force.

That distinction matters. EU AI Compass treats the measure as a planning and governance trigger now, while still labelling it as pending formal adoption and publication.

What the nudification ban is really about

This is not just a content-labeling issue. It is a prohibited-practices issue. A deployer does not need to operate a consumer "nudification app" to have review work to do.

Risk buckets and the practical review question for each:

- Fake intimate image or video generation: Can the system generate or transform content that creates intimate synthetic media of real people without consent?
- Workflow enablement: Do templates, shortcuts, plugins, or editing tools make prohibited generation materially easier?
- Child sexual abuse material: Do safety controls specifically block child sexual abuse material and ambiguous age-related abuse patterns?
- Third-party model exposure: Could imported models, wrappers, extensions, or APIs bypass the organisation's normal controls?
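The risk buckets above amount to a simple screening pass over an AI system inventory. The sketch below shows one way a deployer might automate the first filter, flagging systems whose capabilities or third-party model exposure warrant a prohibited-practice review. The field names (`capabilities`, `third_party_models`) and trigger list are illustrative assumptions, not terms from the regulation or any official template.

```python
# Hypothetical screening sketch: flag inventory entries for
# prohibited-practice review. Field names and trigger values are
# illustrative, not drawn from the AI Act or an official schema.

REVIEW_TRIGGERS = {
    "image_generation",
    "image_transformation",
    "face_editing",
    "avatar_creation",
}

def needs_review(system: dict) -> bool:
    """True if the system touches a capability the risk buckets
    flag, or pulls in third-party models outside normal controls."""
    caps = set(system.get("capabilities", []))
    return bool(caps & REVIEW_TRIGGERS) or bool(system.get("third_party_models"))

inventory = [
    {"name": "Design assistant", "capabilities": ["image_generation"]},
    {"name": "Ticket triage bot", "capabilities": ["text_classification"]},
]
flagged = [s["name"] for s in inventory if needs_review(s)]
# flagged → ["Design assistant"]
```

A pass like this only narrows the list; anything flagged still needs the human vendor and workflow review described below.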

What deployers should review now

Vendor due diligence checklist for image-generation tools, safety controls, abuse reporting, and retained evidence.

Vendor questions to ask

- Which models power the feature, and can they generate or transform intimate imagery of real people?
- What safety controls (prompt filtering, upload checks, moderation) are in place, and can the customer verify them?
- How is abuse reported and escalated, and within what timeframe?
- What acceptable-use restrictions apply, and what evidence of enforcement can the vendor provide?

Evidence to retain

Evidence artifacts and why each matters:

- Feature inventory entry: Shows the system, owner, purpose, users, data categories, and content-generation capability.
- Vendor questionnaire response: Records what the supplier said about model capability, restrictions, and safety controls.
- Safety-control summary: Documents prompt filtering, upload checks, moderation, abuse reporting, and escalation routes.
- Approval or rejection note: Shows whether the feature was approved, restricted, disabled, or escalated for legal review.
- Regulatory status note: Separates current-law baseline decisions from Digital Omnibus provisional-agreement watch items.
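The evidence artifacts above map naturally onto a single record per reviewed feature. As a minimal sketch, assuming a team keeps its evidence file as structured data, one record might look like this; the schema and field names are illustrative, not an official or required format.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical evidence-file record mirroring the artifact list above.
# The schema is an illustrative assumption, not an official template.

@dataclass
class EvidenceRecord:
    system_name: str
    owner: str
    purpose: str
    vendor_questionnaire_ref: str          # where the supplier's answers are filed
    safety_controls: list[str] = field(default_factory=list)
    decision: str = "pending"              # approved / restricted / disabled / escalated
    legal_status_note: str = "current-law baseline: Regulation (EU) 2024/1689"
    reviewed_on: date = field(default_factory=date.today)

rec = EvidenceRecord(
    system_name="Marketing image tool",
    owner="Brand team",
    purpose="Campaign visuals",
    vendor_questionnaire_ref="VQ-2026-014",
    safety_controls=["prompt filtering", "upload checks", "abuse reporting"],
    decision="restricted",
)
```

Keeping the `legal_status_note` as its own dated field is what lets the file separate current-law baseline decisions from Digital Omnibus watch items, as the last artifact in the list recommends.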

Common mistakes

The biggest mistake: treating this as a consumer-app issue only. Enterprise teams often use image, avatar, video, design, HR, marketing, and support tools without checking the underlying model capability or plugin path.

What to do next

If a system touches image generation, transformation, face editing, avatar creation, or visual manipulation, review it now. Update the AI system inventory, run vendor due diligence, confirm prohibited-practice exposure, and keep an evidence trail that records what was checked and when.

FAQ

Direct answers on the provisional nudification-ban track, deployer review duties, vendor checks, and evidence retention.

Is the nudification ban already in force?

No. The EU AI Act nudification ban is part of the 7 May 2026 Digital Omnibus political agreement. It still requires formal adoption and publication before it changes the legal text. Treat it as a planning and evidence-review trigger, not as final enacted law.

What does the provisional ban cover?

The provisional nudification ban targets AI practices involving the generation of non-consensual sexual or intimate content and AI-created child sexual abuse material. The final legal wording and numbering should be checked after formal adoption, because the current published AI Act baseline remains Regulation (EU) 2024/1689.

Why do deployers, not just providers, have review work?

Deployers still choose vendors, activate features, approve workflows, and expose users to system outputs. A deployer evidence file should record vendor capability checks, safety controls, acceptable-use restrictions, escalation paths, and the decision note explaining whether a feature was approved, restricted, or rejected.

Does this only affect consumer photo apps?

No. The nudification-ban review is not limited to consumer photo apps. Enterprise image, video, avatar, marketing, HR, customer-support, or design workflows can raise review questions if third-party models, plugins, or editing features could enable intimate synthetic-content generation.

What evidence should an organisation retain?

An organisation should keep the use-case review, vendor questionnaire, safety-control summary, approval or rejection note, escalation path, and dated legal-status record. The legal-status record should separate current-law baseline decisions from Digital Omnibus provisional-agreement watch items.

Related EU AI Compass resources