Understanding Your Obligations Under the EU AI Act
General-purpose AI (GPAI) models are AI models that can competently perform a wide range of distinct tasks, regardless of how they are placed on the market.
"A general-purpose AI model means an AI model, including where it is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks."
| Model | Provider | Type |
|---|---|---|
| GPT-5.2, GPT-5 | OpenAI | Foundation LLM |
| Claude 4.5 Opus, Claude 4.5 Sonnet | Anthropic | Foundation LLM |
| Gemini 3.0 Pro, Gemini 3.0 Ultra | Google | Multimodal Foundation |
| Llama 4, Llama 3.3 | Meta | Open-source LLM |
| Mistral Large 3 | Mistral AI | Foundation LLM |
| Command R+ | Cohere | Foundation LLM |
A model is considered GPAI if it:

- Displays significant generality
- Is capable of competently performing a wide range of distinct tasks
- Can be integrated into a variety of downstream systems or applications
You're a GPAI Provider if you:

- Develop a GPAI model (or have one developed on your behalf), and
- Place it on the market under your own name or trademark
Examples: OpenAI, Anthropic, Google, Meta, Mistral
You're a GPAI Deployer if you:

- Use a GPAI model (typically via API or by self-hosting) within your own products or services, without placing the model itself on the market
Examples: SaaS companies using GPT-5.2 API, startups building on Claude
Important
Deployers have different (generally lighter) obligations than providers. If you're using GPT-5.2 via API, you're a deployer, not a provider.
GPAI obligations took effect on August 2, 2025.
| Date | Milestone |
|---|---|
| ✅ Aug 2, 2025 | GPAI transparency requirements in effect |
| ✅ Aug 2, 2025 | Systemic risk requirements in effect |
| 🔄 Ongoing | Code of Practice development |
| 📋 Future | Harmonized standards publication |
Prepare and maintain comprehensive documentation covering:
- Model information (architecture, number of parameters, input/output modalities)
- Training details (training process, data provenance, compute used)
- Performance (evaluation results, capabilities, known limitations)
If others build AI systems using your GPAI model, provide:

- A description of the model's capabilities and limitations
- The technical information downstream providers need to integrate the model and meet their own obligations
Establish and implement a policy to respect EU copyright law, including identifying and honoring rights reservations (text-and-data-mining opt-outs) expressed by rightsholders.
Publish a sufficiently detailed summary of training content:
Template approach: "The model was trained on a diverse dataset including [categories: web pages, books, code repositories, etc.] spanning [languages] and [domains]. Data was collected from [source types] with [preprocessing steps]. Training data covers the period [date range]."
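The template above can be filled programmatically from dataset metadata. The sketch below is illustrative: the metadata fields and the `summarize()` helper are assumptions for this example, not an official schema from the AI Office.

```python
# Sketch: generate a training-content summary from dataset metadata.
# Field names below are hypothetical, not a prescribed disclosure format.

TEMPLATE = (
    "The model was trained on a diverse dataset including {categories} "
    "spanning {languages} and {domains}. Data was collected from "
    "{source_types} with {preprocessing}. Training data covers the "
    "period {date_range}."
)

def summarize(meta: dict) -> str:
    """Fill the disclosure template from a metadata record."""
    return TEMPLATE.format(
        categories=", ".join(meta["categories"]),
        languages=", ".join(meta["languages"]),
        domains=", ".join(meta["domains"]),
        source_types=", ".join(meta["source_types"]),
        preprocessing=meta["preprocessing"],
        date_range=meta["date_range"],
    )

summary = summarize({
    "categories": ["web pages", "books", "code repositories"],
    "languages": ["English", "German", "French"],
    "domains": ["news", "science", "software"],
    "source_types": ["public crawls", "licensed corpora"],
    "preprocessing": "deduplication and PII filtering",
    "date_range": "2015-2024",
})
print(summary)
```

Keeping the summary generated from the same metadata you use internally helps the public disclosure stay in sync with your actual training pipeline.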
A GPAI model poses systemic risk if it has high-impact capabilities that could affect public health, safety, public security, fundamental rights, or society as a whole.
A model is automatically classified as systemic risk if the cumulative compute used for training exceeds 10^25 FLOPs (floating point operations).
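A quick back-of-the-envelope check against that threshold uses the common approximation that training compute is roughly 6 FLOPs per parameter per token (forward plus backward pass). The parameter and token counts below are hypothetical, and real classifications depend on the provider's actual cumulative compute.

```python
# Rough check against the 10^25 FLOP systemic-risk threshold.
# Approximation: training compute ~= 6 * parameters * tokens.

THRESHOLD_FLOPS = 1e25  # threshold for presumed systemic risk

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate cumulative training compute in FLOPs."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    return training_flops(n_params, n_tokens) >= THRESHOLD_FLOPS

# A hypothetical 400B-parameter model trained on 15T tokens:
flops = training_flops(400e9, 15e12)  # 3.6e25 FLOPs, above the threshold
print(f"{flops:.1e}", presumed_systemic_risk(400e9, 15e12))
```

By this estimate, a 400B-parameter model trained on 15T tokens lands at roughly 3.6 × 10^25 FLOPs, comfortably past the threshold, while a 7B model on 2T tokens stays orders of magnitude below it.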
| Model | Provider | Likely Classification |
|---|---|---|
| GPT-5.2 | OpenAI | Systemic Risk |
| Claude 4.5 Opus | Anthropic | Systemic Risk |
| Gemini 3.0 Ultra | Google | Systemic Risk |
| Llama 4 405B | Meta | Systemic Risk |
Beyond the base GPAI requirements:
- Perform adversarial testing and red-teaming
- Assess and address systemic risks
- Track and report serious incidents
- Ensure adequate cybersecurity
- Document environmental impact
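Incident tracking is one obligation that benefits from tooling early on. The sketch below shows a minimal in-memory incident log; the field names and `record_incident()` helper are illustrative assumptions, since the Act requires tracking and reporting serious incidents without prescribing a schema.

```python
# Minimal sketch of a serious-incident log for a systemic-risk GPAI provider.
# All field names here are hypothetical, not an official reporting format.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class SeriousIncident:
    model_id: str
    description: str
    severity: str                      # e.g. "critical", "major"
    occurred_at: str                   # ISO 8601 timestamp
    reported_to_ai_office: bool = False
    corrective_measures: list[str] = field(default_factory=list)

incident_log: list[SeriousIncident] = []

def record_incident(inc: SeriousIncident) -> dict:
    """Append an incident and return it as a plain dict for reporting."""
    incident_log.append(inc)
    return asdict(inc)

rec = record_incident(SeriousIncident(
    model_id="example-model-v1",
    description="Unsafe output bypassed safety filters",
    severity="major",
    occurred_at=datetime.now(timezone.utc).isoformat(),
))
```

In practice you would persist this to an audited store and attach the reporting workflow to the `reported_to_ai_office` flag, but the key point is capturing incidents in a structured, timestamped form from day one.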
If you use GPAI models (not provide them), your obligations are different:
If you integrate GPAI into your own AI system, your system's risk classification under the Act determines your obligations. If you build a high-risk AI system using GPAI, you take on the full set of high-risk obligations for that system.
Example: You build a hiring tool using GPT-5.2. Even though GPT-5.2 is a GPAI model, your hiring tool is a high-risk AI system. You're responsible for high-risk compliance.
Documentation Checklist:

- Technical documentation (model information, training details, performance)
- Information package for downstream providers
- Copyright compliance policy
- Public training content summary
Integration Checklist:

- Identify which GPAI models you use and how (API vs. self-hosted)
- Obtain and review the provider's documentation
- Classify your own AI system's risk level
- Keep records of your compliance evidence
Do GPAI requirements apply to my startup?
If you're using APIs (deployer): Generally, you have lighter obligations focused on transparency and understanding what you're building with. If you're training/providing models (provider): Yes, GPAI requirements apply.
Is GPT-5.2/Claude considered systemic risk?
Based on training compute estimates, frontier models from major labs likely exceed the 10^25 FLOP threshold. However, official classifications depend on provider disclosures to the AI Office.
What if I fine-tune an open-source model?
If you fine-tune and redistribute, you may become a GPAI provider for that model. Documentation obligations transfer to you. If you fine-tune for internal use only, you're generally treated as a deployer.
How does this affect my high-risk AI system?
If you build a high-risk AI system using GPAI, the GPAI provider gives you information, but you're responsible for full high-risk compliance. Provider obligations are separate from your obligations.
What about open-source models?
Open-source GPAI providers benefit from some accommodations: models released under a free and open-source license, with weights and architecture publicly available, are exempt from certain documentation obligations unless they pose systemic risk. The copyright policy and training content summary are still required.
GPAI requirements are evolving as the Code of Practice develops and the AI Office issues guidance. Subscribe to our newsletter for updates.
Building on GPAI models? Start your free trial to track your compliance obligations.