Free Tools | Legal Classification | 2 Min Completion

The "Accidental Provider" Classifier

TARGET: CISOs, CTOs & LEGAL COUNSEL EXECUTION: 100% LOCAL BROWSER

The fastest way to blow a hole in your corporate compliance budget is to accidentally become an AI "Provider."

Under the EU AI Act, Deployers (companies using AI) have relatively light obligations. These include providing human oversight, keeping local logs, and maintaining data governance.

But Providers (companies building AI) face crushing regulatory burdens. These include third-party conformity assessments, strict quality management systems, CE marking, and public EU database registration.

The trap lies in Articles 25 and 26(8). If your engineering or marketing teams modify a third-party AI tool heavily enough, the law reclassifies your company from Deployer to Provider.

The same risk applies if you simply slap your corporate logo on a white-labeled product. You instantly inherit the manufacturer's liability.

The Operational Analogy

Think of AI like a commercial van. If you buy a van to deliver packages, you are the Deployer. You must obey speed limits and provide human oversight.

But if you take that van into your garage, rip out the cargo racks, and weld in rows of seats to turn it into a school bus, you have made a substantial modification.

The law says you are now a Manufacturer. You must pass crash tests before it hits the road.

[Illustration: a third-party AI model being stamped with a corporate logo, triggering a red regulatory liability warning ring]

Audit Your AI Modifications

Use the logic tree below to assess if your internal projects have crossed the legal threshold into Provider territory.

Privacy By Design: This runs entirely in your browser on your device. We don't track your answers, and nothing gets sent back to us.

1. The Trademark Trigger

Are you placing your company's name or trademark on a high-risk AI system that is already on the market? (e.g., buying a white-label HR screening AI and branding it entirely as your own proprietary tool).

2. The Substantial Modification Trigger (Fine-Tuning)

Have your engineers made a "substantial modification" to a third-party high-risk AI system?

This explicitly includes fine-tuning an open-source model on your proprietary corporate data, or adjusting the algorithmic weighting criteria of an existing screening tool.


3. The Intended Purpose Trigger

Are you using a general AI system for a high-risk Annex III purpose it was not explicitly built for? (e.g., taking an open-source chatbot API and hard-coding it into a tool that automatically rejects loan applications or job candidates).
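The three triggers above combine into a simple logic tree: any single "yes" pushes you from Deployer into Provider territory. A minimal sketch of that logic in TypeScript (the interface and function names are our own illustrative labels, not terms from the Act):

```typescript
// Illustrative sketch of the three reclassification triggers drawn from
// EU AI Act Articles 25 and 26. Field names are our own labels.
interface Answers {
  rebrandsHighRiskSystem: boolean;  // Trigger 1: your trademark on a white-label system
  substantiallyModified: boolean;   // Trigger 2: fine-tuning / re-weighting a third-party system
  repurposedForAnnexIII: boolean;   // Trigger 3: general AI used for a new high-risk purpose
}

type Classification = "Provider" | "Deployer";

function classify(a: Answers): Classification {
  // Any one trigger is enough to shift Provider obligations onto you.
  if (a.rebrandsHighRiskSystem || a.substantiallyModified || a.repurposedForAnnexIII) {
    return "Provider";
  }
  return "Deployer";
}

// Example: fine-tuning a third-party screening model on proprietary data
const result = classify({
  rebrandsHighRiskSystem: false,
  substantiallyModified: true,
  repurposedForAnnexIII: false,
});
console.log(result); // "Provider"
```

Note the OR, not AND: you do not need to trip all three triggers, and the triggers are independent of one another.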


[Flowchart: how Deployers trigger Provider obligations through substantial modifications]



Disclaimer: This logic tree maps the structure of EU AI Act Articles 25 and 26. It does not constitute legal advice. Whether a change counts as a "substantial modification" is fact-specific and requires careful review of your system's architecture. Consult licensed EU regulatory counsel.

Get Your Compliance Toolkit

This tool identifies requirements. Our toolkit gives you the implementation framework — structured templates, NIST crosswalks, and audit-ready documentation.
