The Complete Guide to EU AI Act High-Risk Requirements
High-risk AI systems face the most stringent requirements under the EU AI Act. If your AI system falls into this category, you'll need to comply with Articles 9-15 and undergo conformity assessment before placing it on the EU market.
Key Deadline
High-risk AI systems must comply by August 2, 2026.
AI systems are classified as high-risk through two pathways:

1. **Safety components:** AI systems that are safety components of products covered by EU harmonization legislation (Annex I). Example: an AI system that controls braking in an autonomous vehicle is a safety component of that vehicle.
2. **Annex III use cases:** AI systems used in specific areas that pose risks to safety or fundamental rights:
   - **Biometrics:** remote biometric identification, biometric categorization, emotion recognition
   - **Critical infrastructure:** safety components of water, gas, electricity, and traffic management
   - **Education:** admissions decisions, assessments, proctoring, cheating detection
   - **Employment:** recruitment, screening, hiring, task allocation, performance evaluation, termination
   - **Essential services:** credit scoring, emergency services dispatch, health/life insurance assessment
   - **Law enforcement:** risk assessment, polygraphs, evidence evaluation, crime prediction
   - **Migration and border control:** document verification, visa applications, residence permits
   - **Administration of justice:** sentencing, legal research, case outcome prediction
   - **Democratic processes:** influencing voting behavior (with exceptions for non-manipulative uses)
These Annex III categories cover some of the most common high-risk use cases for AI startups.
Once classified as high-risk, you must comply with these requirements:
Article 9 (Risk Management): Establish a continuous risk management process that runs across the entire lifecycle.
| Requirement | Description |
|---|---|
| Risk identification | Identify known and foreseeable risks |
| Risk estimation | Evaluate risks from intended use and misuse |
| Risk mitigation | Implement measures to address risks |
| Residual risk | Communicate remaining risks to users |
| Testing | Verify risk management effectiveness |
| Documentation | Document the entire process |
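The process in the table above can be made concrete as a lightweight risk register. A minimal sketch, assuming an invented `Risk`/`RiskRegister` structure and a severity-times-likelihood score; the Act does not prescribe any particular format or acceptance threshold.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Risk:
    """One entry in an Article 9-style risk register (illustrative fields)."""
    description: str                 # risk identification
    source: str                      # intended use vs. foreseeable misuse
    severity: int                    # risk estimation, e.g. 1 (low) to 5 (high)
    likelihood: int
    mitigations: list[str] = field(default_factory=list)
    residual_risk_note: str = ""     # what must be communicated to users

    @property
    def score(self) -> int:
        return self.severity * self.likelihood

@dataclass
class RiskRegister:
    system_name: str
    last_reviewed: date
    risks: list[Risk] = field(default_factory=list)

    def open_high_risks(self, threshold: int = 12) -> list[Risk]:
        """Risks whose score still meets or exceeds the acceptance threshold."""
        return [r for r in self.risks if r.score >= threshold]

register = RiskRegister("cv-screening-model", date(2026, 1, 15))
register.risks.append(Risk(
    description="Model systematically down-ranks non-native speakers",
    source="intended use",
    severity=4, likelihood=3,
    mitigations=["bias testing per release", "human review of rejections"],
    residual_risk_note="Residual ranking bias possible on rare dialects",
))
print(len(register.open_high_risks()))  # → 1
```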
Article 10 (Data and Data Governance): Ensure the quality of training, validation, and testing data.
| Requirement | Description |
|---|---|
| Governance practices | Documented data management |
| Data quality | Relevant, representative, and to the best extent possible free of errors |
| Statistical properties | Appropriate for intended purpose |
| Bias assessment | Identify and address biases |
| Gap analysis | Document data shortcomings |
| GDPR compliance | Personal data processed lawfully |
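Parts of a data-governance review can be automated. The sketch below, with invented field names and a deliberately crude parity heuristic, shows the kind of checks (completeness, label balance, group representation) that feed a gap analysis:

```python
from collections import Counter

def data_governance_report(rows: list[dict], label_key: str, protected_key: str) -> dict:
    """Illustrative Article 10-style checks: completeness, label balance,
    and a simple representativeness ratio across a protected attribute."""
    missing = sum(1 for r in rows if any(v is None for v in r.values()))
    labels = Counter(r[label_key] for r in rows)
    groups = Counter(r[protected_key] for r in rows)
    # Crude imbalance measure: each group's share of the dataset.
    shares = [c / len(rows) for c in groups.values()]
    return {
        "n_rows": len(rows),
        "rows_with_missing": missing,      # input to the gap analysis
        "label_distribution": dict(labels),
        "min_group_share": min(shares),    # flag if far below parity
        "max_group_share": max(shares),
    }

rows = [
    {"label": "hire", "gender": "f", "years": 3},
    {"label": "reject", "gender": "m", "years": None},
    {"label": "reject", "gender": "m", "years": 7},
    {"label": "hire", "gender": "f", "years": 2},
]
report = data_governance_report(rows, "label", "gender")
print(report["rows_with_missing"], report["min_group_share"])  # → 1 0.5
```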
Article 11 (Technical Documentation): Prepare comprehensive technical documentation before the system is placed on the market.
| Requirement | Description |
|---|---|
| General description | System purpose, functionality |
| Development | Design, architecture, algorithms |
| Monitoring | How system is monitored and controlled |
| Risk management | Risk assessment and mitigation |
| Changes | Modifications throughout lifecycle |
| Performance | Accuracy, robustness metrics |
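One way to keep this documentation honest is to track it as structured data and flag unfinished sections. The keys below paraphrase the table above and are assumptions, not the official Annex IV headings:

```python
# Illustrative documentation skeleton; section names are invented.
TECH_DOC_TEMPLATE = {
    "general_description": ["intended purpose", "provider", "versions"],
    "development": ["design choices", "architecture", "algorithms"],
    "monitoring": ["how the system is monitored and controlled"],
    "risk_management": ["link to the Article 9 risk register"],
    "changes": ["modification log across the lifecycle"],
    "performance": ["accuracy metrics", "robustness tests"],
}

def missing_sections(doc: dict) -> list[str]:
    """Template sections that are absent or empty in a draft document."""
    return [s for s in TECH_DOC_TEMPLATE if not doc.get(s)]

draft = {"general_description": {"intended purpose": "CV screening"}}
print(missing_sections(draft))
# → ['development', 'monitoring', 'risk_management', 'changes', 'performance']
```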
Article 12 (Record-Keeping): Implement automatic logging of events over the system's lifetime.
| Requirement | Description |
|---|---|
| Automatic | No manual intervention required |
| Duration tracking | Log operational periods |
| Database access | Log reference database queries |
| Input data | Log or enable reconstruction of inputs |
| Traceability | Enable full decision reconstruction |
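These logging duties pair naturally with a decorator that records every inference call without any manual step. A sketch assuming a simple dict-in/string-out inference function; a real deployment would write to durable, append-only storage rather than an in-memory list:

```python
import functools
import hashlib
import json
import time

def audited(log: list):
    """Wrap an inference function so every call is logged automatically,
    with timestamps and an input hash for traceability (sketch only)."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(payload: dict):
            start = time.time()
            result = fn(payload)
            log.append({
                "event": fn.__name__,
                "started_at": start,
                "duration_s": time.time() - start,   # operational period
                "input_sha256": hashlib.sha256(
                    json.dumps(payload, sort_keys=True).encode()
                ).hexdigest(),                        # enables input reconstruction checks
                "output": result,                     # decision traceability
            })
            return result
        return wrapper
    return decorator

audit_log: list[dict] = []

@audited(audit_log)
def score_applicant(payload: dict) -> str:
    return "review" if payload["years"] < 2 else "advance"

score_applicant({"name": "A. Candidate", "years": 5})
print(audit_log[0]["event"], audit_log[0]["output"])  # → score_applicant advance
```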
Article 13 (Transparency): Provide clear information to deployers.
| Requirement | Description |
|---|---|
| Instructions for use | How to properly use the system |
| Capabilities | What the system can do |
| Limitations | What the system cannot do |
| Performance | Accuracy and error rates |
| Oversight | How to implement human oversight |
| Lifetime | Expected operational lifetime |
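Taken together, these items amount to a deployer-facing information sheet. A sketch that assembles one from structured fields; every field name and all sample content here are illustrative:

```python
def render_instructions_for_use(info: dict) -> str:
    """Assemble a deployer-facing information sheet from structured fields
    (field names are invented for this sketch)."""
    sections = [
        ("Instructions for use", info["instructions_for_use"]),
        ("Capabilities", info["capabilities"]),
        ("Limitations", info["limitations"]),
        ("Performance", info["performance"]),
        ("Human oversight", info["oversight"]),
        ("Expected lifetime", info["lifetime"]),
    ]
    return "\n".join(f"## {title}\n{body}" for title, body in sections)

sheet = render_instructions_for_use({
    "instructions_for_use": "Run only on structured application data.",
    "capabilities": "Ranks applications; does not make final decisions.",
    "limitations": "Untested on non-EU resume formats.",
    "performance": "Accuracy and error rates from the latest validation run.",
    "oversight": "A recruiter must confirm every rejection.",
    "lifetime": "Retrain and revalidate at least every 12 months.",
})
print(sheet.splitlines()[0])  # → ## Instructions for use
```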
Article 14 (Human Oversight): Design the system so that natural persons can oversee it effectively.
| Requirement | Description |
|---|---|
| Understanding | Humans can understand system behavior |
| Awareness | Humans aware of automation bias risk |
| Interpretation | Humans can correctly interpret output |
| Override | Humans can decide not to use or override |
| Intervention | Humans can stop the system |
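Oversight like this has to be designed in, not bolted on. The sketch below routes low-confidence outputs to a reviewer who can accept, override, or withhold the decision; the confidence threshold and the `reviewer` callback are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class OversightDecision:
    model_output: str
    confidence: float
    final: str
    overridden: bool

def with_human_oversight(
    model_output: str,
    confidence: float,
    reviewer: Callable[[str, float], Optional[str]],
    threshold: float = 0.85,
) -> OversightDecision:
    """Route low-confidence outputs to a human, who may accept, override,
    or stop the decision. Illustrative only; the threshold is an assumption."""
    if confidence >= threshold:
        return OversightDecision(model_output, confidence, model_output, False)
    human = reviewer(model_output, confidence)   # None means "do not use"
    if human is None:
        return OversightDecision(model_output, confidence, "withheld", True)
    return OversightDecision(model_output, confidence, human, human != model_output)

# A reviewer who always redirects "reject" to manual review:
decision = with_human_oversight("reject", 0.60, lambda out, conf: "manual review")
print(decision.final, decision.overridden)  # → manual review True
```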
Article 15 (Accuracy, Robustness and Cybersecurity): Ensure technical quality throughout the lifecycle.
| Requirement | Description |
|---|---|
| Accuracy | Appropriate for intended purpose |
| Robustness | Resilient to errors and attacks |
| Redundancy | Fail-safe measures where appropriate |
| Cybersecurity | Protected against threats |
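Accuracy and robustness can both be measured, even crudely. The sketch below checks plain accuracy, then how stable predictions stay under small random input perturbations, a toy stand-in for the adversarial and stress testing a provider would actually run:

```python
import random

def accuracy(model, cases: list) -> float:
    """Fraction of test cases predicted correctly."""
    return sum(model(x) == y for x, y in cases) / len(cases)

def robustness(model, cases: list, noise: float = 0.1,
               trials: int = 50, seed: int = 0) -> float:
    """Fraction of predictions unchanged under small input perturbations."""
    rng = random.Random(seed)
    stable = total = 0
    for x, _ in cases:
        base = model(x)
        for _ in range(trials):
            perturbed = [v + rng.uniform(-noise, noise) for v in x]
            stable += model(perturbed) == base
            total += 1
    return stable / total

# Toy threshold model: class 1 if the mean feature exceeds 0.5.
model = lambda x: int(sum(x) / len(x) > 0.5)
cases = [([0.9, 0.8], 1), ([0.1, 0.2], 0), ([0.52, 0.49], 1)]
print(f"accuracy={accuracy(model, cases):.2f}")  # → accuracy=1.00
print(f"robustness={robustness(model, cases):.2f}")
```

The third case sits near the decision boundary, so its prediction flips under noise; that is exactly the kind of fragility robustness testing is meant to surface.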
Before placing a high-risk AI system on the EU market, you must demonstrate compliance through conformity assessment.
| Pathway | When It Applies | Process |
|---|---|---|
| Internal control | Most Annex III systems | Self-assessment + documentation |
| Third-party assessment | Biometric identification, critical infrastructure | Notified body involvement |
| Phase | Timeline | Activities |
|---|---|---|
| Assessment | Now | Classify systems, identify gaps |
| Planning | Q1 2026 | Resource allocation, project planning |
| Implementation | Q2-Q3 2026 | Build compliance, create documentation |
| Testing | Q3 2026 | Verify conformity, address gaps |
| Certification | Q3 2026 | Conformity assessment, registration |
| Compliance | Aug 2, 2026 | Full compliance required |
"We'll deal with it when enforcement starts."
Reality: Implementation takes 6+ months. Starting in 2026 is too late.
"We only have a few AI features."
Reality: Every AI system needs classification. Many companies discover more high-risk systems than expected.
"We'll just create the documents."
Reality: Documentation must reflect real processes. Auditors will verify implementation.
"We're compliant now, we're done."
Reality: Risk management is continuous. Documentation needs updates. Logs need monitoring.
"We can figure this out ourselves."
Reality: High-risk compliance is complex. Professional tools and/or expert guidance often pay for themselves.
| Violation | Maximum Penalty |
|---|---|
| Non-compliance with high-risk requirements | €15 million or 3% of global annual turnover, whichever is higher |
| Failure to register in the EU database | €7.5 million or 1.5% of global annual turnover, whichever is higher |
| Providing incorrect information to authorities | €7.5 million or 1.5% of global annual turnover, whichever is higher |
Instantly determine if your AI is high-risk. Understand which Annex III category applies.
Track all 113 high-risk requirements. See progress across Articles 9-15.
Generate required documentation with AI. Technical documentation templates.
Automatic logging via SDK. LangChain, CrewAI integration.
Centralized evidence repository. Link evidence to requirements.
Real-time compliance score. Progress dashboards.
High-risk compliance is complex, but it's manageable with the right tools.
Classify your AI systems, track requirements, and generate documentation—all in one platform.
Questions about high-risk compliance? Contact us for a personalized assessment.