The Complete Guide to EU AI Act High-Risk Requirements

Overview

High-risk AI systems face the most stringent requirements under the EU AI Act. If your AI system falls into this category, you'll need to comply with Articles 9-15 and undergo conformity assessment before placing it on the EU market.

Key Deadline

High-risk AI systems must comply by August 2, 2026.

What Makes AI "High-Risk"?

AI systems are classified as high-risk through two pathways:

Pathway 1: Safety Components (Annex I)

AI systems that are products, or safety components of products, covered by the EU harmonization legislation listed in Annex I and required to undergo third-party conformity assessment under that legislation:

  • Medical devices
  • Machinery
  • Toys
  • Lifts
  • Radio equipment
  • Civil aviation
  • Motor vehicles
  • Railway systems
  • Marine equipment

Example: An AI system that controls braking in an autonomous vehicle is a safety component of that vehicle.

Pathway 2: Standalone High-Risk (Annex III)

AI systems in specific use cases that pose risks to safety or fundamental rights.

Annex III Categories

Biometrics

Remote biometric identification, categorization, emotion recognition

Critical Infrastructure

Safety components of water, gas, electricity, traffic management

Education

Admissions decisions, assessments, proctoring, cheating detection

Employment

Recruitment, screening, hiring, task allocation, performance evaluation, termination

Essential Services

Credit scoring, emergency services dispatch, health/life insurance assessment

Law Enforcement

Risk assessment, polygraphs, evidence evaluation, crime prediction

Migration

Document verification, visa applications, residence permits

Justice

Sentencing, legal research, case outcome prediction

Elections

Influencing voting behavior (with exceptions for non-manipulative uses)
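
For teams triaging many systems, this first-pass screening can be encoded as a simple lookup. Below is a minimal, hypothetical Python sketch; the category keywords are our own shorthand, not the Act's legal definitions, and the output is a triage aid rather than a legal classification.

```python
# Hypothetical first-pass Annex III screening helper -- a triage aid, not legal advice.
# Keywords paraphrase this guide's category summaries; final classification needs legal review.

ANNEX_III_KEYWORDS = {
    "biometrics": ["biometric identification", "biometric categorization", "emotion recognition"],
    "critical_infrastructure": ["water supply", "gas supply", "electricity", "traffic management"],
    "education": ["admissions", "grading", "proctoring", "dropout prediction"],
    "employment": ["recruitment", "resume screening", "performance evaluation", "termination"],
    "essential_services": ["credit scoring", "emergency dispatch", "insurance pricing"],
    "law_enforcement": ["risk assessment", "evidence evaluation", "crime prediction"],
    "migration": ["visa application", "document verification", "residence permit"],
    "justice": ["sentencing", "case outcome prediction"],
    "elections": ["voting behavior"],
}

def screen_use_case(description: str) -> list[str]:
    """Return Annex III categories whose keywords appear in a use-case description."""
    text = description.lower()
    return [
        category
        for category, keywords in ANNEX_III_KEYWORDS.items()
        if any(keyword in text for keyword in keywords)
    ]

print(screen_use_case("Resume screening and candidate ranking for hiring"))
# ['employment']
```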

Annex III Categories Deep Dive

1. Biometrics

High-risk if used for:

  • Remote biometric identification — Identifying individuals from a distance using biometric data
  • Biometric categorization — Categorizing individuals by protected characteristics
  • Emotion recognition — Inferring emotional states from biometric data (note: emotion recognition in workplaces and educational settings is prohibited outright under Article 5, except for medical or safety reasons)

Examples:

  • Facial recognition for access control
  • Voice recognition for identity verification
  • Systems inferring mood from facial expressions

Not high-risk:

  • Biometric verification (one-to-one matching for authentication)
  • Systems designed for accessibility purposes

2. Critical Infrastructure

High-risk if used as:

  • Safety components — In the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating, or electricity

3. Education and Vocational Training

High-risk if used for:

  • Determining access — Admissions decisions, assignment to programs
  • Evaluating outcomes — Grading, assessment scoring
  • Monitoring students — Cheating detection, exam proctoring
  • Predicting success — Dropout prediction, performance forecasting

Examples:

  • AI scoring college applications
  • Automated essay grading
  • Remote proctoring software
  • Systems predicting student dropout risk

4. Employment

This is one of the most common high-risk categories for AI startups.

High-risk if used for:

  • Recruitment — Targeting job advertisements, screening and filtering applications
  • Hiring — Evaluating candidates, ranking applicants
  • Task allocation — Assigning work, scheduling
  • Performance — Evaluating worker performance
  • Termination — Decisions about ending employment

Examples:

  • Resume screening tools
  • AI-powered interview analysis
  • Algorithmic scheduling systems
  • Productivity monitoring tools

5. Access to Essential Services

High-risk if used for:

  • Credit — Credit scoring, loan decisions
  • Public benefits — Welfare, unemployment, housing
  • Emergency services — Prioritizing emergency calls
  • Insurance — Health and life insurance pricing/coverage

Examples:

  • AI credit scoring models
  • Risk assessment and pricing tools for health and life insurance
  • Emergency dispatch prioritization
  • Benefits eligibility assessment

Requirements for High-Risk AI (Articles 9-15)

Once classified as high-risk, you must comply with these requirements:

Article 9

Risk Management System

Establish a continuous risk management process

  • Risk identification — Identify known and foreseeable risks
  • Risk estimation — Evaluate risks from intended use and misuse
  • Risk mitigation — Implement measures to address risks
  • Residual risk — Communicate remaining risks to users
  • Testing — Verify risk management effectiveness
  • Documentation — Document the entire process

Key outputs:

  • Risk assessment document
  • Mitigation measures list
  • Residual risk communication
  • Testing procedures and results
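
To make the risk register concrete, here is a minimal sketch of what it might look like in code. The fields, scoring scheme, and example entry are illustrative assumptions, not something prescribed by Article 9.

```python
# A minimal risk-register sketch (illustrative; field names are our own).
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Risk:
    description: str               # known or foreseeable risk
    source: str                    # "intended use" or "reasonably foreseeable misuse"
    severity: int                  # e.g. 1 (low) .. 5 (high)
    likelihood: int                # e.g. 1 (rare) .. 5 (frequent)
    mitigations: list[str] = field(default_factory=list)
    residual_risk_note: str = ""   # what must be communicated to deployers
    last_reviewed: date = field(default_factory=date.today)

    @property
    def score(self) -> int:
        return self.severity * self.likelihood

register = [
    Risk(
        description="Model ranks applicants lower due to proxy features for age",
        source="intended use",
        severity=4,
        likelihood=3,
        mitigations=["Bias testing per release", "Feature audit"],
        residual_risk_note="Deployers must review rankings before rejection decisions",
    )
]
# Re-sort on every review cycle so the highest residual risks stay visible.
register.sort(key=lambda r: r.score, reverse=True)
```
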
Article 10

Data and Data Governance

Ensure training data quality

  • Governance practices — Documented data management
  • Data quality — Relevant, representative, and free of errors to the best extent possible
  • Statistical properties — Appropriate for the intended purpose
  • Bias assessment — Identify and address biases
  • Gap analysis — Document data shortcomings
  • GDPR compliance — Personal data processed lawfully

Key outputs:

  • Data governance documentation
  • Data quality assessment
  • Bias testing results
  • Privacy compliance evidence
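
As a concrete illustration of one bias-assessment step, the sketch below compares group shares in a training set against a reference population. The attribute, reference shares, and 5% tolerance are assumptions to adapt, not values from the Act.

```python
# Minimal representativeness check (illustrative): flag groups whose share in
# the training data deviates from a reference population by more than a tolerance.
from collections import Counter

def representation_gaps(samples: list[dict], attribute: str,
                        reference: dict[str, float], tolerance: float = 0.05):
    """Return groups whose observed share deviates from `reference` by > tolerance."""
    counts = Counter(sample[attribute] for sample in samples)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = {"observed": round(observed, 3), "expected": expected}
    return gaps

training_rows = [{"region": "north"}] * 80 + [{"region": "south"}] * 20
print(representation_gaps(training_rows, "region",
                          reference={"north": 0.6, "south": 0.4}))
# {'north': {'observed': 0.8, 'expected': 0.6}, 'south': {'observed': 0.2, 'expected': 0.4}}
```
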
Article 11

Technical Documentation

Prepare comprehensive documentation

  • General description — System purpose and functionality
  • Development — Design, architecture, algorithms
  • Monitoring — How the system is monitored and controlled
  • Risk management — Risk assessment and mitigation
  • Changes — Modifications throughout the lifecycle
  • Performance — Accuracy and robustness metrics

Key outputs:

  • Technical documentation package
  • System architecture description
  • Performance specifications
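
One way to keep the package complete is to track it as a machine-readable skeleton with a simple completeness gate. The section names below paraphrase the table above; Annex IV of the Act defines the authoritative contents.

```python
# Sketch of a machine-readable skeleton for the technical documentation package
# (illustrative structure; Annex IV is the authoritative list of contents).
TECH_DOC_SKELETON = {
    "general_description": {"intended_purpose": "", "provider": "", "versions": []},
    "development": {"design_choices": "", "architecture": "", "algorithms": ""},
    "monitoring_and_control": {"oversight_measures": "", "logging_design": ""},
    "risk_management": {"risk_register_ref": "", "mitigations_ref": ""},
    "lifecycle_changes": [],          # append a record per modification
    "performance": {"accuracy_metrics": {}, "robustness_tests": {}},
}

def missing_sections(doc: dict) -> list[str]:
    """List top-level sections that are still empty -- a simple completeness gate."""
    def empty(value) -> bool:
        return not value or (isinstance(value, dict) and not any(value.values()))
    return [name for name, value in doc.items() if empty(value)]

print(missing_sections(TECH_DOC_SKELETON))
# every section, until each one is actually filled in
```
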
Article 12

Record-Keeping

Implement automatic logging

  • Automatic — No manual intervention required
  • Duration tracking — Log operational periods
  • Database access — Log reference database queries
  • Input data — Log or enable reconstruction of inputs
  • Traceability — Enable full decision reconstruction

Key outputs:

  • Logging implementation
  • Audit trail capability
  • Export functionality
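
Here is a minimal sketch of what automatic, structured logging might look like. The field names are our own illustration; adapt retention periods and data minimization (hashing inputs rather than storing raw personal data, for example) to your GDPR analysis.

```python
# Minimal structured audit-log sketch for Article 12-style record-keeping.
import hashlib
import json
import time
import uuid

def log_inference(model_version: str, inputs: dict, output, logfile: str = "audit.jsonl") -> str:
    """Append one structured record per inference, with no manual step required."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),               # supports operational-period tracking
        "model_version": model_version,
        "input_sha256": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),                          # enables input reconstruction checks
        "output": output,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")      # JSON Lines: easy to export and audit
    return record["event_id"]

event_id = log_inference("ranker-v2.3", {"candidate_id": 1042}, {"score": 0.87})
```
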
Article 13

Transparency and Information

Provide clear information to deployers

  • Instructions for use — How to properly use the system
  • Capabilities — What the system can do
  • Limitations — What the system cannot do
  • Performance — Accuracy and error rates
  • Oversight — How to implement human oversight
  • Lifetime — Expected operational lifetime

Key outputs:

  • Instructions for use document
  • Performance specifications
  • User guidance materials
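
The deployer-facing information can be maintained as structured data so it stays in sync with each release. The sketch below is illustrative, and every value is a placeholder rather than a benchmark.

```python
# Illustrative "instructions for use" record a provider might ship to deployers;
# headings mirror the table above, and all values are placeholders.
INSTRUCTIONS_FOR_USE = {
    "intended_purpose": "Rank job applications for human review",
    "capabilities": ["Scores resumes against a job description"],
    "limitations": [
        "Not validated for non-English resumes",
        "Must not be used for automatic rejection",
    ],
    "performance": {"accuracy": 0.91, "false_positive_rate": 0.06},
    "human_oversight": "A recruiter must review every ranking before action",
    "expected_lifetime": "24 months, subject to annual revalidation",
}
```
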
Article 14

Human Oversight

Enable effective human oversight

  • Understanding — Humans can understand system behavior
  • Awareness — Humans are aware of automation bias risk
  • Interpretation — Humans can correctly interpret output
  • Override — Humans can decide not to use the output, or override it
  • Intervention — Humans can stop the system

Key outputs:

  • Human oversight procedures
  • Training materials for operators
  • Override and intervention mechanisms
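
A common implementation pattern is a human-in-the-loop gate plus a stop mechanism. The sketch below is illustrative: the confidence threshold, the review queue, and the `pipeline.disable()` hook are all assumptions, not a prescribed design.

```python
# Sketch of a human-in-the-loop gate: low-confidence or high-impact outputs are
# routed to a reviewer, who can accept, override, or discard them.
from queue import Queue

review_queue: Queue = Queue()
CONFIDENCE_THRESHOLD = 0.8   # illustrative; calibrate per system

def gated_decision(prediction: dict) -> dict:
    """Auto-apply only confident, low-impact outputs; escalate the rest to a human."""
    if prediction["confidence"] < CONFIDENCE_THRESHOLD or prediction.get("high_impact"):
        review_queue.put(prediction)            # human decides: use, override, or discard
        return {"status": "pending_human_review", **prediction}
    return {"status": "auto_applied", **prediction}

def emergency_stop(pipeline) -> None:
    """Article 14 also expects an intervention path: humans can halt the system."""
    pipeline.disable()                          # hypothetical kill-switch hook

print(gated_decision({"confidence": 0.65, "applicant": 1042}))
# {'status': 'pending_human_review', 'confidence': 0.65, 'applicant': 1042}
```
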
Article 15

Accuracy, Robustness, Cybersecurity

Ensure technical quality

  • Accuracy — Appropriate for the intended purpose
  • Robustness — Resilient to errors and attacks
  • Redundancy — Fail-safe measures where appropriate
  • Cybersecurity — Protected against threats

Key outputs:

  • Accuracy metrics and testing
  • Robustness evaluation
  • Security assessment
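
Robustness can be smoke-tested by checking how often small input perturbations flip a decision. The sketch below assumes a `model.predict` interface and a 5% flip budget; both are illustrative choices, not requirements from the Act.

```python
# Minimal robustness smoke test (illustrative): verify that small numeric
# perturbations rarely change the model's decision.
import random

def perturbation_flip_rate(model, samples: list[dict], noise: float = 0.01,
                           trials: int = 100) -> float:
    """Fraction of sampled inputs whose prediction flips under small noise."""
    flips = 0
    for _ in range(trials):
        row = dict(random.choice(samples))      # copy so the original stays intact
        baseline = model.predict(row)
        for key, value in row.items():
            if isinstance(value, (int, float)):
                row[key] = value * (1 + random.uniform(-noise, noise))
        if model.predict(row) != baseline:
            flips += 1
    return flips / trials

# Example gate in CI (assumes `model` and `test_rows` exist):
# assert perturbation_flip_rate(model, test_rows) < 0.05
```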

Conformity Assessment

What It Is

Before placing a high-risk AI system on the EU market, you must demonstrate compliance through conformity assessment.

Two Pathways

  • Internal control — Applies to most Annex III systems; self-assessment plus technical documentation
  • Third-party assessment — Applies to remote biometric identification systems where harmonized standards are not applied in full; involves a notified body

Internal Control (Most Common)

For most high-risk AI systems:

  1. Implement all Article 9-15 requirements
  2. Document compliance in technical documentation
  3. Establish quality management system
  4. Prepare EU Declaration of Conformity
  5. Affix CE marking
  6. Register in EU database

Third-Party Assessment

For remote biometric identification systems where harmonized standards have not been applied in full:

  1. Implement all Article 9-15 requirements
  2. Engage a notified body
  3. Submit technical documentation for review
  4. Address any findings
  5. Obtain conformity certificate
  6. Affix CE marking
  7. Register in EU database

Implementation Timeline

Now → August 2026

  • Assessment (now) — Classify systems, identify gaps
  • Planning (Q1 2026) — Resource allocation, project planning
  • Implementation (Q2-Q3 2026) — Build compliance processes, create documentation
  • Testing (Q3 2026) — Verify conformity, address remaining gaps
  • Certification (Q3 2026) — Conformity assessment, registration
  • Compliance (August 2, 2026) — Full compliance required

Recommended 6-Month Plan

Months 1-2: Foundation

  • Complete risk classification for all AI systems
  • Identify which systems are high-risk
  • Gap analysis against Articles 9-15
  • Resource and budget planning

Months 3-4: Core Implementation

  • Implement risk management system
  • Document data governance
  • Create technical documentation
  • Implement logging capabilities

Months 5-6: Finalization

  • Complete all documentation
  • Conduct conformity assessment
  • Register in EU database
  • Train team on ongoing compliance

Common Mistakes

1. Waiting Too Long

"We'll deal with it when enforcement starts."

Reality: Implementation typically takes six months or more, so waiting until well into 2026 leaves no margin before the August deadline.

2. Underestimating Scope

"We only have a few AI features."

Reality: Every AI system needs classification. Many companies discover more high-risk systems than expected.

3. Documentation-Only Approach

"We'll just create the documents."

Reality: Documentation must reflect real processes. Auditors will verify implementation.

4. Ignoring Continuous Compliance

"We're compliant now, we're done."

Reality: Risk management is continuous. Documentation needs updates. Logs need monitoring.

5. DIY for Complex Systems

"We can figure this out ourselves."

Reality: High-risk compliance is complex. Professional tools and/or expert guidance often pay for themselves.

Penalties for Non-Compliance

  • Non-compliance with high-risk requirements — Up to €15 million or 3% of global annual turnover, whichever is higher
  • Failure to register in the EU database — Up to €15 million or 3% of global annual turnover (registration is a provider obligation under Article 16)
  • Supplying incorrect information to authorities — Up to €7.5 million or 1% of global annual turnover

Beyond fines:

  • Market withdrawal orders
  • Recall of non-compliant systems
  • Prohibition of market placement
  • Reputational damage

How Protectron Helps

Risk Classification

Instantly determine if your AI is high-risk. Understand which Annex III category applies.

Requirement Tracking

Track all 113 high-risk requirements. See progress across Articles 9-15.

Document Generation

Generate required documentation with AI. Technical documentation templates.

Audit Trail (Article 12)

Automatic logging via SDK. LangChain, CrewAI integration.

Evidence Management

Centralized evidence repository. Link evidence to requirements.

Compliance Reporting

Real-time compliance score. Progress dashboards.

Get Started

High-risk compliance is complex, but it's manageable with the right tools.

Classify your AI systems, track requirements, and generate documentation—all in one platform.

Questions about high-risk compliance? Contact us for a personalized assessment.