
EU AI Act Penalties: Up to €35 Million in Fines Are Now Enforceable

The grace period is over. Companies using prohibited AI practices face fines up to €35 million or 7% of global annual revenue. Here's what you need to know.

Published: December 2025 · 15 min read

As of February 2025, companies using prohibited AI practices face fines up to €35 million or 7% of global annual revenue. And by August 2026, any company with non-compliant high-risk AI systems faces penalties up to €15 million or 3% of global annual revenue.

This isn't a theoretical future risk. Enforcement has begun.

Here's what you need to know about EU AI Act penalties, what triggers them, and how to protect your company.

The Penalty Structure at a Glance

The EU AI Act establishes a three-tier penalty system based on the severity of violations:

  • €35 million or 7% of global annual revenue — prohibited AI practices
  • €15 million or 3% of global annual revenue — high-risk non-compliance
  • €7.5 million or 1.5% of global annual revenue — providing false information

The penalty is always calculated as whichever amount is higher — the fixed sum or the revenue percentage.

Example Calculations

Company with €500M annual revenue:

  • Prohibited AI violation: Up to €35 million
  • High-risk non-compliance: Up to €15 million

Company with €1B annual revenue:

  • Prohibited AI violation: Up to €70 million (7% exceeds fixed)
  • High-risk non-compliance: Up to €30 million (3% exceeds fixed)

The math gets painful very quickly for larger organizations.
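The "whichever is higher" rule can be sketched in a few lines of Python. This is an illustration of the arithmetic using the tier amounts quoted in this article, not legal or compliance tooling:

```python
# Illustrative only: the statutory maximum fine under the EU AI Act's
# "whichever is higher" rule, using the three tiers described above.

TIERS = {
    "prohibited": (35_000_000, 0.07),   # €35M or 7% of global annual revenue
    "high_risk": (15_000_000, 0.03),    # €15M or 3%
    "false_info": (7_500_000, 0.015),   # €7.5M or 1.5%
}

def max_fine(tier: str, annual_revenue_eur: float) -> float:
    """Return the maximum possible fine: the fixed sum or the
    revenue percentage, whichever is higher."""
    fixed, pct = TIERS[tier]
    return max(fixed, pct * annual_revenue_eur)

# Worked examples from the article:
print(max_fine("prohibited", 500_000_000))    # €35M (fixed sum applies)
print(max_fine("prohibited", 1_000_000_000))  # €70M (7% exceeds fixed)
print(max_fine("high_risk", 1_000_000_000))   # €30M (3% exceeds fixed)
```

Note how the revenue percentage overtakes the fixed sum exactly at €500M for the prohibited tier, which is why larger organizations see the numbers climb so fast.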

What Triggers the Maximum Penalties?

€35 Million: Prohibited AI Practices

The harshest penalties are reserved for AI systems that should never have been deployed in the first place.

You face maximum penalties if you:

  • Deploy social scoring systems that evaluate people based on social behavior for detrimental treatment
  • Use real-time biometric identification (facial recognition) in public spaces without authorization
  • Implement emotion recognition in workplaces or educational settings
  • Build AI designed to manipulate vulnerable groups such as children or the elderly
  • Create biometric categorization systems that infer sensitive attributes (race, religion, sexual orientation)
  • Use predictive policing that assesses crime probability based on personal profiling
  • Scrape facial images from the internet or CCTV to build recognition databases

These practices were banned on February 2, 2025. There is no compliance pathway — they are simply prohibited.

Real-world implications

If your HR department uses an AI tool that detects employee emotions to assess "engagement" or "productivity," you may already be in violation. If your security system uses real-time facial recognition without explicit legal authorization, you're exposed.

€15 Million: High-Risk Non-Compliance

The second tier of penalties applies to high-risk AI systems that fail to meet the comprehensive requirements under Articles 9-15.

You face these penalties if your high-risk AI:

  • Lacks a documented risk management system
  • Has no technical documentation
  • Doesn't maintain proper records and logs
  • Fails to provide transparency information to users
  • Has no human oversight measures
  • Hasn't undergone conformity assessment
  • Isn't registered in the EU database (when required)
  • Doesn't meet accuracy and robustness standards

High-risk AI includes systems used for:

  • Recruitment and hiring decisions
  • Credit scoring and lending
  • Insurance pricing and claims
  • Educational admissions and grading
  • Healthcare diagnosis and treatment recommendations
  • Law enforcement and legal proceedings
  • Border control and immigration

Deadline: August 2, 2026

The deadline for full high-risk compliance is August 2, 2026.

€7.5 Million: Providing False Information

The "lightest" penalty tier still carries significant fines for:

  • Providing incorrect, incomplete, or misleading information to authorities
  • Failing to cooperate with regulatory requests
  • Submitting false documentation during conformity assessments

How Fines Are Calculated

The EU AI Act provides guidance on how authorities should determine actual penalty amounts within these maximums.

Factors that increase penalties:

  • Intentional or negligent violation
  • Previous violations of the AI Act
  • Duration of the violation
  • Number of people affected
  • Severity of harm caused
  • Lack of cooperation with authorities
  • Financial benefits gained from the violation

Factors that may reduce penalties:

  • Proactive remediation efforts
  • First-time offense
  • Good faith attempts at compliance
  • Cooperation with investigators
  • Limited scope of violation
  • Voluntary disclosure of issues

SME Considerations

For small and medium enterprises (and startups), the regulation calls for "proportionate" penalties. However, this doesn't mean immunity — it means regulators should consider company size when setting fines within the allowable range. A €1 million fine might be proportionate for a startup, while €35 million would be reserved for larger offenders. But €1 million is still company-ending for many startups.

Who Enforces the EU AI Act?

Enforcement happens at multiple levels:

National Authorities

Each EU member state designates national competent authorities to enforce the AI Act within their borders. These authorities can:

  • Conduct investigations and audits
  • Request documentation and information
  • Issue compliance orders
  • Impose fines and penalties
  • Order AI systems to be withdrawn from the market

The AI Office

The European Commission's AI Office coordinates enforcement across member states and directly oversees:

  • General-purpose AI (GPAI) model compliance
  • Cross-border enforcement coordination
  • Development of codes of practice
  • Guidance and interpretation

Market Surveillance

Market surveillance authorities monitor AI systems in the market and can:

  • Request access to AI systems for testing
  • Order recalls or withdrawals
  • Issue public warnings about non-compliant systems

Enforcement Has Already Begun

While we haven't yet seen major EU AI Act fines make headlines, enforcement infrastructure is now active:

What's happening now:

  • National authorities are being designated across EU member states
  • The AI Office is operational and issuing guidance
  • Prohibited practices are being monitored
  • Complaints mechanisms are being established

What to expect:

Based on GDPR enforcement patterns, we can anticipate:

  • Initial focus on clear-cut violations (prohibited AI)
  • High-profile enforcement actions to establish precedent
  • Gradual increase in enforcement intensity
  • Cross-border coordination on major cases

Companies Most at Risk

The companies most at risk are those with obvious violations — social scoring, unauthorized biometric systems, or workplace emotion recognition that haven't been discontinued.

Beyond Fines: Other Consequences

Financial penalties aren't the only risk. Non-compliance can trigger:

Market Withdrawal Orders

Authorities can order non-compliant AI systems to be removed from the EU market entirely. For companies dependent on EU customers, this is existential.

Reputational Damage

Enforcement actions are public. Being known as a company that violated AI regulations damages trust with customers, partners, and investors.

Contract Terminations

Enterprise customers increasingly include AI compliance requirements in contracts. Non-compliance can trigger termination clauses and loss of major accounts.

Insurance & Liability

D&O insurance and cyber insurance policies may not cover penalties from willful non-compliance. Executives could face personal liability.

How to Protect Your Company

Avoiding EU AI Act penalties requires proactive compliance. Here's what to do:

Immediate Actions (This Week)

  • Audit for prohibited AI — Check if any of your systems fall into the banned categories. If they do, discontinue immediately.
  • Inventory your AI systems — Create a complete list of all AI you develop, deploy, or use.
  • Initial classification — Determine which systems are high-risk, limited-risk, or minimal-risk.
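An AI inventory with first-pass classification can start as something this simple. The risk tiers follow the Act's categories, but the record fields and the keyword-based classification are illustrative assumptions only; a real assessment requires legal review against the regulation's Annex III, not string matching:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # banned outright, no compliance pathway
    HIGH = "high-risk"          # full Articles 9-15 obligations
    LIMITED = "limited-risk"    # transparency obligations
    MINIMAL = "minimal-risk"    # no specific obligations

# Illustrative keywords drawn from the high-risk use cases listed above.
HIGH_RISK_KEYWORDS = {
    "recruitment", "hiring", "credit", "insurance", "admissions",
    "grading", "healthcare", "law enforcement", "border",
}

@dataclass
class AISystem:
    name: str
    use_case: str
    vendor: str        # third-party AI still carries deployer obligations
    tier: RiskTier

def classify(use_case: str) -> RiskTier:
    """Naive first-pass triage; flags candidates for proper legal review."""
    if any(k in use_case.lower() for k in HIGH_RISK_KEYWORDS):
        return RiskTier.HIGH
    return RiskTier.MINIMAL

inventory = [
    AISystem("resume-screener", "recruitment shortlisting", "Acme AI",
             classify("recruitment shortlisting")),
]
```

Even a spreadsheet works for this step; the point is that every system you develop, deploy, or use appears somewhere with a tier attached.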

Short-Term Actions (This Month)

  • Risk assessment — For high-risk systems, begin documenting risks and mitigation measures.
  • Gap analysis — Compare your current practices against EU AI Act requirements.
  • Compliance roadmap — Create a timeline to address gaps before the August 2026 deadline.

Ongoing Actions

  • Documentation — Build and maintain required technical documentation.
  • Implement controls — Establish human oversight, logging, and monitoring.
  • Prepare for conformity assessment — Gather evidence you'll need for self-assessment or third-party audit.
  • Stay informed — Monitor regulatory guidance and update practices accordingly.

The Cost of Compliance vs. Non-Compliance

Let's be direct about the math:

Cost of Proactive Compliance

  • Compliance software: €1,000-12,000/year
  • Documentation effort: Weeks of internal work
  • Process changes: Manageable operational adjustments

Cost of Non-Compliance

  • Fines: Up to €35 million
  • Market withdrawal: Loss of EU revenue
  • Reputation damage: Incalculable
  • Legal fees: Hundreds of thousands
  • Business disruption: Months of firefighting

The choice is obvious. Companies that invest in compliance now are buying insurance against catastrophic outcomes.

Common Mistakes That Lead to Penalties

Based on how similar regulations have been enforced, here are the mistakes most likely to trigger EU AI Act penalties:

1. Assuming "We're Not in the EU"

If you have EU customers, users, or your AI affects EU residents, you're in scope. Location of headquarters is irrelevant.

2. Ignoring the Prohibited List

Some companies have deployed emotion recognition or social scoring features without realizing they're banned. Ignorance isn't a defense.

3. Misclassifying Risk Levels

Incorrectly classifying a high-risk system as minimal-risk to avoid compliance requirements will be treated harshly when discovered.

4. Documentation Gaps

"We do all the right things but didn't document them" won't protect you. The EU AI Act requires written evidence of compliance.

5. Waiting Until the Deadline

Compliance requires significant preparation. Starting in July 2026 for an August 2026 deadline is a recipe for failure — and penalties.

6. Assuming Third-Party AI Is Someone Else's Problem

Using OpenAI or AWS doesn't transfer your compliance obligations. Deployers have their own requirements.

Timeline to Penalty Exposure

Here's when different penalties become enforceable:

  • February 2, 2025 — ✅ Prohibited AI penalties (€35M tier): now active
  • August 2, 2025 — ✅ GPAI non-compliance penalties: now active
  • August 2, 2026 — ⚠️ High-risk AI penalties (€15M tier): 228 days away
  • August 2, 2027 — High-risk AI in regulated products
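Counting down to these dates is trivial to automate. A minimal sketch (the dates are from the timeline above; the "today" value is assumed to be this article's mid-December 2025 publication window):

```python
from datetime import date

# Key EU AI Act enforcement dates from the timeline above.
DEADLINES = {
    "prohibited_ai": date(2025, 2, 2),
    "gpai": date(2025, 8, 2),
    "high_risk": date(2026, 8, 2),
    "regulated_products": date(2027, 8, 2),
}

def days_remaining(deadline: date, today: date) -> int:
    """Days until a deadline; negative means it has already passed."""
    return (deadline - today).days

# From December 17, 2025, the high-risk deadline is 228 days out:
print(days_remaining(DEADLINES["high_risk"], date(2025, 12, 17)))  # 228
```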

The Clock is Ticking

Every day without action is a day closer to potential penalty exposure.

Take Action Today

The EU AI Act is the most significant AI regulation in the world, and its penalties are designed to be taken seriously. Companies that act now will be prepared. Companies that delay will scramble.


Disclaimer: This article is for informational purposes only and does not constitute legal advice. For guidance specific to your situation, consult qualified legal counsel.