The Complete Guide to EU AI Act Logging Requirements
Article 12 of the EU AI Act establishes record-keeping requirements for high-risk AI systems. It mandates automatic logging capabilities that enable traceability throughout the AI system's lifecycle.
Key Principle
High-risk AI systems must be designed to automatically record events relevant to identifying risks and enabling post-market monitoring.
Article 12 isn't optional for high-risk AI systems: failure to implement adequate logging exposes providers to enforcement action and fines under the Act.

Beyond compliance, good logging practices provide real business value:

- **Debugging**: Understand what went wrong when issues arise
- **Improvement**: Analyze patterns to improve AI performance
- **Trust**: Demonstrate to customers how your AI makes decisions
- **Liability protection**: Provide evidence in case of disputes
"High-risk AI systems shall technically allow for the automatic recording of events ('logs') over the duration of the lifetime of the system."
Article 12 specifies that logging capabilities must enable:
Logs must capture when the AI system is being used—the start and end of each use period.
```python
# Log session start
protectron.log_event("session_start", {
    "session_id": "sess_123",
    "user_id": "usr_456",
    "timestamp": "2025-12-16T10:00:00Z"
})

# Log session end
protectron.log_event("session_end", {
    "session_id": "sess_123",
    "duration_seconds": 3600
})
```

If the AI system uses reference databases (for matching or comparison), access to these databases must be logged.
Examples: Biometric systems, hiring AI, credit scoring systems
protectron.log_event("database_access", {
"database": "candidate_profiles",
"query_type": "match",
"records_accessed": 150,
"purpose": "candidate_screening"
})The input data that led to a match or decision must be logged—or at minimum, the ability to reconstruct this data must exist.
protectron.log_event("llm_call", {
"input_hash": "sha256:abc123...", # Hash if PII concerns
"input_summary": "Customer inquiry about refund",
"input_tokens": 145
})Logs must enable tracing the AI system's operation for:
The key word in Article 12 is automatic: manual logging is not sufficient. The system must record these events itself, without relying on operators to write anything down.
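One common way to make logging automatic is to attach it to the code path itself, for example with a decorator, so every call is recorded whether or not anyone remembers to log it. The sketch below assumes a generic `log_event(event_type, payload)` helper; the names are illustrative and not tied to any particular SDK.

```python
# Minimal sketch of automatic event capture via a decorator. `log_event` is a
# hypothetical sink; in practice it would write to your logging backend.
import functools
import time
from datetime import datetime, timezone

def log_event(event_type: str, payload: dict) -> None:
    print(event_type, payload)  # placeholder sink

def logged(event_type: str):
    """Decorator that records every call to the wrapped function."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            started = time.perf_counter()
            result = func(*args, **kwargs)
            log_event(event_type, {
                "function": func.__name__,
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "latency_ms": round((time.perf_counter() - started) * 1000, 2),
            })
            return result
        return wrapper
    return decorator

@logged("llm_call")
def answer_customer(question: str) -> str:
    return "..."  # call your model here

answer_customer("Can I get a refund?")
```

The same effect can be achieved with framework callbacks or middleware; the point is that recording happens in code, not by convention.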
For each logged event, capture:
| Element | Description | Example |
|---|---|---|
| Timestamp | When the event occurred | 2025-12-16T10:23:45.123Z |
| Event Type | What type of event | llm_call, tool_use, decision |
| System ID | Which AI system | customer-support-agent |
| Session/Trace | Grouping identifier | trc_abc123 |
| Input Reference | What triggered the event | Hash, summary, or full data |
| Output Reference | What the system produced | Decision, response, action |
| Metadata | Additional context | User ID, environment, version |
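Put together, a single record covering each element in the table might look like the sketch below; the field names are an example schema, not one mandated by the regulation.

```python
# Illustrative event record containing each element from the table above.
event = {
    "timestamp": "2025-12-16T10:23:45.123Z",   # Timestamp
    "event_type": "llm_call",                  # Event Type
    "system_id": "customer-support-agent",     # System ID
    "trace_id": "trc_abc123",                  # Session/Trace
    "input_ref": "sha256:abc123...",           # Input Reference (hash, summary, or full data)
    "output_ref": "Refund request escalated",  # Output Reference
    "metadata": {"user_id": "usr_456", "env": "prod", "version": "1.4.2"},
}
```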
The regulation requires logs to be maintained for a period "appropriate to the intended purpose" of the AI system, and at minimum for six months, unless applicable Union or national law requires a longer period.
Recommendation: Retain logs for at least 7 years, or as long as the AI system is deployed plus 3 years.
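To make that recommendation concrete, the small sketch below computes a retention deadline as the later of the two rules; the dates are placeholders and the calculation ignores leap-day edge cases.

```python
# Retention deadline as the later of: log date + 7 years, or end of
# deployment + 3 years. Example dates only.
from datetime import date

def retention_deadline(log_date: date, deployment_end: date) -> date:
    seven_years = log_date.replace(year=log_date.year + 7)
    deployment_plus_three = deployment_end.replace(year=deployment_end.year + 3)
    return max(seven_years, deployment_plus_three)

print(retention_deadline(date(2025, 12, 16), date(2028, 6, 30)))  # 2032-12-16
```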
Logs must be:

- **Tamper-evident**: Changes should be detectable
- **Immutable**: Historical logs cannot be modified
- **Secure**: Protected from unauthorized access
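One widely used way to make a log tamper-evident is hash chaining, where each entry commits to the hash of the previous one, so any later modification breaks the chain. The sketch below is a generic illustration of that idea, not a description of any specific product's storage layer.

```python
# Generic hash-chain sketch for tamper-evident logging.
import hashlib
import json

def entry_hash(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append_entry(log: list[dict], event: dict) -> None:
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {"event": event, "prev_hash": prev_hash}
    entry["entry_hash"] = entry_hash(entry)  # hash over event + prev_hash
    log.append(entry)

def verify_chain(log: list[dict]) -> bool:
    prev_hash = "0" * 64
    for entry in log:
        expected = entry_hash({"event": entry["event"], "prev_hash": entry["prev_hash"]})
        if entry["prev_hash"] != prev_hash or entry["entry_hash"] != expected:
            return False
        prev_hash = entry["entry_hash"]
    return True
```

With the storage properties in place, the remaining question is which events to write. Representative examples follow.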
```python
# Every LLM call
{
    "event_type": "llm_call",
    "model": "gpt-5.2",
    "provider": "openai",
    "input_messages": [...],  # Or hash/summary
    "output_message": "...",
    "tokens_input": 150,
    "tokens_output": 89,
    "latency_ms": 1234,
    "temperature": 0.7,
    "timestamp": "2025-12-16T10:23:45.123Z"
}
```

```python
# Agent decisions
{
    "event_type": "agent_action",
    "agent_id": "support-agent",
    "action": "escalate_to_human",
    "reasoning": "Customer expressed frustration...",
    "confidence": 0.85,
    "alternatives_considered": ["continue_conversation", "offer_refund"]
}

# Tool usage
{
    "event_type": "tool_call",
    "tool_name": "search_knowledge_base",
    "input": {"query": "refund policy"},
    "output": {"results": 3, "top_result": "..."},
    "latency_ms": 234
}
```

```python
# Human approval
{
    "event_type": "human_approval",
    "action_type": "high_value_refund",
    "ai_recommendation": "approve_refund",
    "human_decision": "approved",
    "reviewer_id": "supervisor_123",
    "review_time_seconds": 45
}

# Human override
{
    "event_type": "human_override",
    "ai_recommendation": "deny_claim",
    "human_decision": "approve_claim",
    "override_reason": "Extenuating circumstances",
    "reviewer_id": "supervisor_123"
}
```

Article 12 must be implemented alongside the GDPR: the same logs that provide traceability may contain personal data, so that data must be minimized, secured, and retained in line with data protection requirements.
Options for handling personal data in logs:
| Approach | Description | Trade-offs |
|---|---|---|
| Full logging | Log complete data | Maximum traceability, highest privacy risk |
| Pseudonymization | Replace identifiers with tokens | Good traceability, reduced risk |
| Hashing | One-way hash of sensitive data | Verification possible, not reversible |
| Redaction | Remove sensitive data | Lowest risk, reduced traceability |
| Summarization | Log summaries instead of raw data | Balance of utility and privacy |
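As a rough illustration of the hashing and pseudonymization rows above, the sketch below hashes raw input and replaces a user identifier with a stable token before the event is logged; the salt handling and in-memory token store are simplifications, not a prescribed design.

```python
# Sketch of hashing and pseudonymizing fields before they reach the log.
import hashlib
import uuid

SALT = b"rotate-and-store-this-securely"   # illustrative; manage salts properly
_pseudonyms: dict[str, str] = {}           # identifier -> stable token (store securely)

def hash_value(value: str) -> str:
    """One-way hash: allows later verification, but is not reversible."""
    return "sha256:" + hashlib.sha256(SALT + value.encode()).hexdigest()

def pseudonymize(identifier: str) -> str:
    """Replace an identifier with a stable token, reversible only via the lookup table."""
    if identifier not in _pseudonyms:
        _pseudonyms[identifier] = "usr_" + uuid.uuid4().hex[:12]
    return _pseudonyms[identifier]

event = {
    "event_type": "llm_call",
    "user_ref": pseudonymize("jane.doe@example.com"),
    "input_hash": hash_value("Customer inquiry about refund for order 10234"),
    "input_summary": "Customer inquiry about a refund",
}
```

If you use the Protectron SDK, equivalent redaction and hashing can be configured on the callback instead: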
```python
callback = ProtectronCallback(
    system_id="my-agent",
    pii_redaction=True,   # Auto-redact emails, phones, etc.
    pii_types=["email", "phone", "ssn", "credit_card"],
    hash_user_ids=True    # Hash user identifiers
)
```

Use this checklist to verify your Article 12 implementation:
The Protectron SDK automatically captures all required events:
```python
from protectron.langchain import ProtectronCallback

# One line to enable Article 12 compliant logging
callback = ProtectronCallback(system_id="my-agent")
```

Every event type required by Article 12 (LLM calls, tool usage, agent decisions, and human oversight actions) is captured automatically.
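For LangChain specifically, the callback is passed through the standard `callbacks` entry of the run config. In the sketch below, `RunnableLambda` stands in for your real chain or agent, and the refund question is example input.

```python
# Illustrative LangChain wiring: the callback rides along via the run config.
from langchain_core.runnables import RunnableLambda
from protectron.langchain import ProtectronCallback

callback = ProtectronCallback(system_id="my-agent")
my_agent = RunnableLambda(lambda x: {"output": "Refund initiated for order 10234"})

result = my_agent.invoke(
    {"input": "Can I get a refund for order 10234?"},
    config={"callbacks": [callback]},  # standard LangChain callbacks hook
)
print(result["output"])
```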
Does Article 12 apply to my AI system?
Article 12 applies to high-risk AI systems as defined in Annex III. If your AI is used in areas like employment, education, healthcare, or law enforcement, it likely applies. Use our Risk Classification tool to check.
What's the minimum I need to log?
At minimum: timestamps, what inputs were received, what outputs were produced, and any decisions made. More comprehensive logging is recommended for full traceability.
Can I store logs outside the EU?
For EU AI Act compliance, storing logs in the EU is strongly recommended. If you must store elsewhere, ensure appropriate data transfer mechanisms are in place.
How long must I keep logs?
The regulation says "appropriate to the intended purpose" and throughout the system's lifetime. We recommend 7 years minimum, consistent with many regulatory expectations.
What if logging would reveal trade secrets?
You can summarize or hash proprietary information while maintaining traceability. The goal is being able to reconstruct decisions, not expose your algorithms.
Does logging affect performance?
With async logging (like Protectron's SDK), impact is negligible—typically less than 1ms per event.
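For teams building their own pipeline rather than using an SDK, the pattern behind that low overhead is a background queue: the request path only pays for an enqueue, and a worker ships events separately. The sketch below is a generic version of that pattern, not a description of any SDK's internals.

```python
# Non-blocking logging with a queue and a background worker thread.
import queue
import threading

_events: queue.Queue = queue.Queue()

def _worker() -> None:
    while True:
        event = _events.get()
        if event is None:
            break
        # Ship the event to your log store here (HTTP, file, etc.).
        print("shipped:", event["event_type"])
        _events.task_done()

threading.Thread(target=_worker, daemon=True).start()

def log_event_async(event_type: str, payload: dict) -> None:
    """Enqueue and return immediately; the worker ships events in the background."""
    _events.put({"event_type": event_type, **payload})

log_event_async("llm_call", {"latency_ms": 1234})
_events.join()  # demo only: wait for the queue to drain before exiting
```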
Article 12 requires high-risk AI systems to have automatic logging that enables traceability. This means capturing:
- **When**: Operational periods and timestamps
- **What**: Inputs, outputs, and decisions
- **How**: The reasoning and process
- **Who**: Human oversight and interventions
Implement logging from day one, and you'll not only comply with Article 12—you'll have better visibility into your AI systems.