Add EU AI Act compliance to your AI applications in minutes.
```python
# Three lines to compliance logging
from protectron.langchain import ProtectronCallback

callback = ProtectronCallback(system_id="my-ai-agent")
executor = AgentExecutor(agent=agent, tools=tools, callbacks=[callback])
```

The Protectron SDK is a lightweight library that automatically captures compliance-relevant events from your AI applications. It provides the logging and audit trail capabilities required by Article 12 of the EU AI Act without changing how your AI systems work.
Every LLM call, tool invocation, agent decision, and human intervention is automatically logged to your Protectron dashboard—creating a complete, tamper-evident audit trail.
The EU AI Act requires high-risk AI systems to have automatic logging capabilities that enable traceability.
The SDK is designed for production use with minimal overhead, and it works with how you already build AI applications.
```bash
pip install protectron
```

Requirements: Python 3.8+
Framework packages:

```bash
pip install protectron[langchain]   # LangChain
pip install protectron[crewai]      # CrewAI
pip install protectron[autogen]     # AutoGen
pip install protectron[all]         # All frameworks
```

For TypeScript:

```bash
npm install @protectron/sdk
```

Requirements: Node.js 18+
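After installing the Python package, a quick smoke test confirms the SDK is importable (a sketch; the `__version__` attribute is an assumption, not documented here):

```python
# Smoke test: confirm the SDK is importable after installation.
import protectron

# Assumption: the package exposes __version__; adjust if it does not.
print(protectron.__version__)
```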
A System represents an AI application you want to track. Each system has a unique ID, risk level, and applicable compliance requirements.
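Explicit (non-callback) logging uses a client bound to a single system, as the human oversight examples later in this guide show. A minimal sketch, assuming only the `system_id` argument documented here:

```python
from protectron import Protectron

# One client per AI application: every event logged through this
# client is attached to the "my-ai-agent" system's audit trail.
protectron = Protectron(system_id="my-ai-agent")
```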
A Trace groups related events into a logical unit—typically a conversation, session, or request.
```python
with protectron.trace("user-session-123") as trace:
    # All events within this block are grouped into one trace
    result = agent.invoke({"input": user_message})
```

Events are individual actions captured by the SDK. Most are recorded automatically by the framework callbacks; a sketch of logging one explicitly follows the table:
| Event Type | Description |
|---|---|
| llm_call | LLM prompt and completion |
| tool_call | Tool/function invocation |
| agent_action | Agent decision or action |
| human_approval | Human approved an action |
| human_rejection | Human rejected an action |
| human_override | Human changed AI decision |
| risk_event | Anomaly or policy violation |
| error | Error or exception |
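The `human_approval`, `human_rejection`, and `human_override` events are logged explicitly through the client, as the human oversight section below shows. Logging other event types by hand is sketched here only by analogy; the `log_event` method and its parameters are assumptions, not a documented API:

```python
from protectron import Protectron

protectron = Protectron(system_id="my-ai-agent")

# Hypothetical explicit logging, modeled on the log_human_* methods
# documented later in this guide. `log_event` is an assumed method name.
protectron.log_event(
    event_type="risk_event",
    details={"policy": "max_refund", "observed_value": 5000.00},
)
```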
Callbacks are the primary integration mechanism. They hook into your AI framework and automatically capture events.
```python
# LangChain callback
from protectron.langchain import ProtectronCallback
callback = ProtectronCallback(system_id="my-agent")

# CrewAI callback
from protectron.crewai import ProtectronCallback
callback = ProtectronCallback(system_id="my-crew")
```

A complete LangChain example:

```python
from langchain.agents import create_react_agent, AgentExecutor
from langchain_openai import ChatOpenAI
from langchain.tools import Tool
from langchain import hub
from protectron.langchain import ProtectronCallback

# 1. Create the Protectron callback
callback = ProtectronCallback(
    system_id="support-agent",   # Your system ID
    environment="production",    # production, staging, development
    pii_redaction=True           # Auto-redact PII
)

# 2. Set up your agent as normal
llm = ChatOpenAI(model="gpt-5.2")
tools = [
    Tool(name="search", func=search_fn, description="Search knowledge base"),
    Tool(name="ticket", func=create_ticket, description="Create support ticket"),
]
prompt = hub.pull("hwchase17/react")
agent = create_react_agent(llm, tools, prompt)

# 3. Add the callback to the executor
executor = AgentExecutor(
    agent=agent,
    tools=tools,
    callbacks=[callback],   # Add Protectron here
    verbose=True
)

# 4. Run - all events automatically logged
result = executor.invoke({"input": "I need help with my order"})
```

The resulting trace in your dashboard:

```text
Trace: trc_abc123
├── 10:23:01.123 llm_call
│ ├── model: gpt-5.2
│ ├── input: "I need help with my order..."
│ ├── output: "I'll search for your order..."
│ └── tokens: 142 in, 89 out
│
├── 10:23:02.456 tool_call
│ ├── tool: search
│ ├── input: {"query": "order status"}
│ └── output: "Order #12345: Shipped..."
│
├── 10:23:03.789 llm_call
│ ├── model: gpt-5.2
│ ├── input: [conversation context]
│ └── output: "Your order #12345 has shipped..."
│
└── 10:23:04.012 trace_end
    └── duration: 2.89s
```

Configure the SDK via environment variables:

```bash
# Required
export PROTECTRON_API_KEY=pk_live_xxxxxxxxxxxxxxxx
# Optional
export PROTECTRON_ENVIRONMENT=production
export PROTECTRON_DEBUG=false
```
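If shell exports are not an option, you can set the same variables from Python before creating any clients (a sketch; it assumes the SDK reads these environment variables at client construction time):

```python
import os

# Same settings as the shell exports above; must run before the
# first Protectron client or callback is created.
os.environ["PROTECTRON_API_KEY"] = "pk_live_xxxxxxxxxxxxxxxx"
os.environ["PROTECTRON_ENVIRONMENT"] = "production"
```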
The full set of callback options:

```python
callback = ProtectronCallback(
    # Required
    system_id="my-agent",

    # Environment
    environment="production",      # production, staging, development

    # Content logging
    log_llm_content=True,          # Log prompts and completions
    log_tool_inputs=True,          # Log tool parameters
    log_tool_outputs=True,         # Log tool results

    # Privacy
    pii_redaction=True,            # Auto-redact emails, phones, etc.
    pii_types=["email", "phone", "ssn", "credit_card"],
    hash_user_ids=True,            # Hash user identifiers

    # Performance
    async_mode=True,               # Non-blocking (recommended)
    buffer_size=1000,              # Events buffered before flush
    flush_interval=5.0,            # Seconds between flushes

    # Sampling (for high volume)
    sample_rate=1.0,               # 1.0 = 100%, 0.1 = 10%

    # Metadata
    include_metadata={
        "team": "customer-success",
        "version": "1.2.0"
    }
)
```

Framework integrations:

- LangChain: Full support for LangChain's callback system (LLM calls, agents, chains, tools, memory)
- CrewAI: Complete multi-agent logging (per-agent tracking, task delegation, inter-agent communication)
- AutoGen: Microsoft AutoGen support (conversation logging, agent messages, code execution, human-in-the-loop)
- Vercel AI SDK: Edge-ready TypeScript integration for Next.js and Vercel deployments

Each framework has a dedicated integration guide.

Log human interventions for Article 14 compliance:

```python
from protectron import Protectron
protectron = Protectron(system_id="supervised-agent")

# When a human approves an AI action
protectron.log_human_approval(
    action_type="refund_request",
    action_details={"amount": 150.00, "order_id": "ORD-123"},
    approved_by="supervisor@company.com"
)

# When a human rejects an AI action
protectron.log_human_rejection(
    action_type="refund_request",
    action_details={"amount": 5000.00, "order_id": "ORD-456"},
    rejected_by="supervisor@company.com",
    rejection_reason="Amount exceeds auto-approval limit"
)

# When a human overrides an AI decision
protectron.log_human_override(
    original_decision="route_to_tier1",
    override_decision="route_to_tier2",
    overridden_by="supervisor@company.com",
    override_reason="Complex technical issue"
)
```
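To group these interventions with the automatically captured events from the same session, one approach is to log them inside a trace (a sketch; it assumes explicit calls made inside the `protectron.trace` block are attached to that trace, which this guide implies but does not state):

```python
with protectron.trace("user-session-123"):
    # `executor` is the AgentExecutor from the earlier example.
    result = executor.invoke({"input": "Refund order ORD-123"})

    # Assumption: explicit oversight events inherit the surrounding trace.
    protectron.log_human_approval(
        action_type="refund_request",
        action_details={"amount": 150.00, "order_id": "ORD-123"},
        approved_by="supervisor@company.com",
    )
```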
| Package | Language | Install |
|---|---|---|
| protectron | Python | pip install protectron |
| protectron[langchain] | Python | pip install protectron[langchain] |
| protectron[crewai] | Python | pip install protectron[crewai] |
| protectron[autogen] | Python | pip install protectron[autogen] |
| protectron[all] | Python | pip install protectron[all] |
| @protectron/sdk | TypeScript | npm install @protectron/sdk |
| @protectron/vercel-ai | TypeScript | npm install @protectron/vercel-ai |
Follow the Quick Start Guide to add compliance logging in 5 minutes.