EU AI Act Guide

General Purpose AI (GPAI) Requirements

Understanding Your Obligations Under the EU AI Act

What is GPAI?

General Purpose AI (GPAI) models are AI models that display significant generality and can competently perform a wide range of distinct tasks, regardless of how they are placed on the market.

Definition from the EU AI Act

"A general-purpose AI model means an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks."

Examples of GPAI Models

| Model | Provider | Type |
|---|---|---|
| GPT-5.2, GPT-5 | OpenAI | Foundation LLM |
| Claude 4.5 Opus, Claude 4.5 Sonnet | Anthropic | Foundation LLM |
| Gemini 3.0 Pro, Gemini 3.0 Ultra | Google | Multimodal Foundation |
| Llama 4, Llama 3.3 | Meta | Open-source LLM |
| Mistral Large 3 | Mistral AI | Foundation LLM |
| Command R+ | Cohere | Foundation LLM |

What Makes a Model "General Purpose"?

A model is considered GPAI if it:

  1. Displays significant generality — Can perform many different types of tasks
  2. Can be integrated into various systems — Not designed for one specific use
  3. Serves multiple purposes — Both direct use and integration into other AI systems

Who Needs to Comply?

GPAI Providers

You're a GPAI Provider if you:

  • Develop and train foundation models
  • Make GPAI models available on the market (including via API)
  • Put your name/trademark on a GPAI model

Examples: OpenAI, Anthropic, Google, Meta, Mistral

GPAI Deployers

You're a GPAI Deployer if you:

  • Build applications on top of GPAI models
  • Integrate GPAI into your products
  • Use GPAI APIs in your services

Examples: SaaS companies using GPT-5.2 API, startups building on Claude

Important

Deployers have different (generally lighter) obligations than providers. If you're using GPT-5.2 via API, you're a deployer, not a provider.

Timeline

GPAI obligations took effect on August 2, 2025.

| Date | Milestone |
|---|---|
| Aug 2, 2025 | GPAI transparency requirements in effect |
| Aug 2, 2025 | Systemic risk requirements in effect |
| 🔄 Ongoing | Code of Practice development |
| 📋 Future | Harmonized standards publication |

Requirements for All GPAI Providers

1. Technical Documentation

Prepare and maintain comprehensive documentation covering:

Model Information

  • Model architecture and design
  • Training methodology
  • Data sources and preprocessing
  • Capabilities and limitations
  • Intended use cases

Training Details

  • Computational resources used
  • Training data characteristics
  • Fine-tuning and RLHF processes
  • Evaluation results

Performance

  • Benchmark results
  • Known limitations
  • Failure modes

2. Information for Downstream Providers

If others build AI systems using your GPAI model, provide:

  • Model capabilities and limitations
  • Integration guidelines
  • Known risks and mitigation measures
  • Information needed for their own compliance

3. Copyright Policy

Establish and implement a policy to respect EU copyright law:

  • Identify copyrighted content in training data
  • Implement opt-out mechanisms for rights holders
  • Document compliance with Text and Data Mining (TDM) provisions
  • Provide transparency about training data sources
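
In practice, many rights holders express the TDM opt-out via robots.txt. A minimal sketch of honoring such an opt-out during data collection, using Python's standard-library `urllib.robotparser` (the crawler name `ExampleTrainingBot` is hypothetical):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt in which the rights holder opts the
# AI-training crawler out while leaving ordinary crawlers allowed.
robots_txt = """\
User-agent: ExampleTrainingBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Check the opt-out before adding a URL to the training corpus.
allowed = parser.can_fetch("ExampleTrainingBot", "https://example.com/articles/1")
print(allowed)  # False: the publisher has opted this crawler out
```

A real policy would also cover machine-readable reservations beyond robots.txt, but the principle (check before ingesting) is the same.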

4. Training Content Summary

Publish a sufficiently detailed summary of training content:

  • General description of training data
  • Sources and types of data used
  • Data collection methodology
  • Not required: specific datasets or trade secrets

Template approach: "The model was trained on a diverse dataset including [categories: web pages, books, code repositories, etc.] spanning [languages] and [domains]. Data was collected from [source types] with [preprocessing steps]. Training data covers the period [date range]."
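
A machine-readable version of the same summary can be kept alongside the published text. The field names below are illustrative only, not the AI Office's official template:

```python
import json

# Illustrative structure -- field names are hypothetical and do not
# reproduce any official training-content-summary schema.
summary = {
    "data_categories": ["web pages", "books", "code repositories"],
    "languages": ["en", "de", "fr"],
    "source_types": ["public web crawl", "licensed corpora"],
    "preprocessing": ["deduplication", "quality filtering"],
    "collection_period": {"start": "2018-01", "end": "2024-06"},
    # Specific datasets and trade secrets are deliberately omitted.
}

print(json.dumps(summary, indent=2))
```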

Requirements for GPAI with Systemic Risk

What is Systemic Risk?

A GPAI model poses systemic risk if it has high-impact capabilities that could affect:

  • Public health or safety
  • Fundamental rights
  • Critical infrastructure
  • Democratic processes

Automatic Systemic Risk Classification

A model is automatically classified as systemic risk if the cumulative compute used for training exceeds 10^25 FLOPs (floating point operations).
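
A common back-of-envelope heuristic for dense transformer training compute is roughly 6 FLOPs per parameter per training token. A sketch of checking that estimate against the threshold (the parameter and token counts below are illustrative, not any provider's actual disclosures):

```python
THRESHOLD_FLOPS = 1e25  # presumption threshold for systemic risk

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough dense-transformer estimate: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

# Illustrative figures: a 400B-parameter model trained on 15T tokens.
flops = estimated_training_flops(params=4e11, tokens=1.5e13)
print(f"{flops:.1e}")           # 3.6e+25
print(flops > THRESHOLD_FLOPS)  # True -> presumed systemic risk
```

Actual classification rests on the cumulative compute the provider reports, not on outside estimates like this one.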

Current Models with Systemic Risk

| Model | Provider | Likely Classification |
|---|---|---|
| GPT-5.2 | OpenAI | Systemic Risk |
| Claude 4.5 Opus | Anthropic | Systemic Risk |
| Gemini 3.0 Ultra | Google | Systemic Risk |
| Llama 4 405B | Meta | Systemic Risk |

Additional Requirements for Systemic Risk

Beyond the base GPAI requirements:

1. Model Evaluation

Perform adversarial testing and red-teaming

  • Evaluate model against standardized protocols
  • Test for dangerous capabilities
  • Assess dual-use potential
  • Document evaluation methodology and results

2. Risk Assessment and Mitigation

Assess and address systemic risks

  • Identify potential systemic risks
  • Implement mitigation measures
  • Document residual risks
  • Update assessments as model evolves

3. Incident Tracking and Reporting

Track and report serious incidents

  • Monitor for serious incidents
  • Report to the EU AI Office without undue delay
  • Include incident details and corrective measures
  • Maintain incident documentation
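
A minimal incident record covering the fields above might look like the following sketch. The schema is hypothetical; the AI Office has not mandated a specific format here:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# Hypothetical record structure for internal incident tracking.
@dataclass
class SeriousIncident:
    model: str
    description: str
    corrective_measures: list
    detected_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    reported_to_ai_office: bool = False

incident = SeriousIncident(
    model="example-model-v1",
    description="Model produced output implicating a restricted capability.",
    corrective_measures=["Patched safety filter", "Re-ran red-team suite"],
)
incident.reported_to_ai_office = True  # report without undue delay
print(asdict(incident)["reported_to_ai_office"])
```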

4. Cybersecurity Protection

Ensure adequate cybersecurity

  • Protect model weights and infrastructure
  • Implement access controls
  • Monitor for unauthorized access or misuse
  • Secure the training pipeline

5. Energy Consumption Reporting

Document environmental impact

  • Report energy consumption during training
  • Estimate inference energy consumption
  • Include in technical documentation
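
Training energy is often approximated from accelerator hours, average board power, and the data center's PUE. A sketch with illustrative figures:

```python
def training_energy_kwh(gpu_hours: float, avg_power_kw: float, pue: float) -> float:
    """Approximate facility-level training energy: GPU-hours x power x PUE."""
    return gpu_hours * avg_power_kw * pue

# Illustrative run: 1,000,000 GPU-hours at 0.7 kW average draw, PUE 1.2.
energy = training_energy_kwh(gpu_hours=1_000_000, avg_power_kw=0.7, pue=1.2)
print(f"{energy:,.0f} kWh")  # 840,000 kWh
```

Measured meter data is preferable where available; this kind of estimate is a fallback for documentation.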

Obligations for Deployers

If you use GPAI models rather than provide them, your obligations are different:

When You're Building on GPAI

If you integrate GPAI into your own AI system:

  1. Understand the model — Review provider documentation
  2. Assess your system's risk — Your AI system may be high-risk even if the GPAI isn't
  3. Comply with applicable rules — Based on your system's classification
  4. Transparency — Disclose AI use per limited-risk requirements

When Your System Becomes High-Risk

If you build a high-risk AI system using GPAI:

  • You're responsible for Articles 9-15 compliance
  • The GPAI provider should give you needed information
  • You must ensure the complete system meets requirements

Example: You build a hiring tool using GPT-5.2. Even though GPT-5.2 is a GPAI model, your hiring tool is a high-risk AI system. You're responsible for high-risk compliance.

Practical Implementation

For GPAI Providers

Documentation Checklist:

  • Model architecture documented
  • Training methodology described
  • Training data sources summarized
  • Capabilities and limitations listed
  • Intended use cases specified
  • Benchmark results included
  • Known failure modes documented
  • Integration guidelines provided
  • Copyright policy established
  • Training content summary published
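
Checklists like the one above can be tracked programmatically so outstanding items surface automatically. A minimal sketch (item names abbreviated from the list above):

```python
# Hypothetical tracker for the provider documentation checklist.
checklist = {
    "model_architecture_documented": True,
    "training_methodology_described": True,
    "training_data_sources_summarized": True,
    "capabilities_and_limitations_listed": True,
    "copyright_policy_established": False,
    "training_content_summary_published": False,
}

missing = [item for item, done in checklist.items() if not done]
print(f"{len(missing)} item(s) outstanding: {missing}")
```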

For Systemic Risk Models

  • Adversarial testing performed
  • Red-team evaluations completed
  • Systemic risks assessed
  • Mitigation measures implemented
  • Incident tracking system in place
  • Reporting procedures established
  • Cybersecurity measures implemented
  • Energy consumption documented

For GPAI Deployers

Integration Checklist:

  • Provider documentation reviewed
  • Model limitations understood
  • Own system risk classified
  • Applicable requirements identified
  • Transparency requirements met
  • Human oversight implemented (if needed)
  • Logging enabled (if high-risk)

Common Questions

Do GPAI requirements apply to my startup?

If you're using APIs (deployer): Generally, you have lighter obligations focused on transparency and understanding what you're building with. If you're training/providing models (provider): Yes, GPAI requirements apply.

Is GPT-5.2/Claude considered systemic risk?

Based on training compute estimates, frontier models from major labs likely exceed the 10^25 FLOP threshold. However, official classifications depend on provider disclosures to the AI Office.

What if I fine-tune an open-source model?

If you fine-tune and redistribute, you may become a GPAI provider for that model. Documentation obligations transfer to you. If you fine-tune for internal use only, you're generally treated as a deployer.

How does this affect my high-risk AI system?

If you build a high-risk AI system using GPAI, the GPAI provider gives you information, but you're responsible for full high-risk compliance. Provider obligations are separate from your obligations.

What about open-source models?

Providers of GPAI models released under a free and open-source licence are exempt from some obligations: the technical documentation and downstream-information requirements do not apply unless the model poses systemic risk. The copyright policy and the training content summary are still required.

Key Takeaways

For GPAI Providers

  1. Document everything — Technical docs, training data, capabilities
  2. Support downstream users — Provide information they need
  3. Respect copyright — Implement TDM compliance
  4. Publish training summary — Transparency is mandatory
  5. If systemic risk — Additional evaluation, reporting, security

For GPAI Deployers

  1. Understand what you're using — Review provider documentation
  2. Classify your own system — GPAI use doesn't exempt you
  3. Meet transparency requirements — Disclose AI use appropriately
  4. Maintain evidence — Document your compliance efforts
  5. High-risk is your responsibility — Even when using GPAI

Stay Updated

GPAI requirements are evolving as the Code of Practice develops and the AI Office issues guidance. Subscribe to our newsletter for updates.

Building on GPAI models? Start your free trial to track your compliance obligations.