Protectron SDK

Add EU AI Act compliance to your AI applications in minutes.

# Three lines to compliance logging
from protectron.langchain import ProtectronCallback

callback = ProtectronCallback(system_id="my-ai-agent")
executor = AgentExecutor(agent=agent, tools=tools, callbacks=[callback])

What is the Protectron SDK?

The Protectron SDK is a lightweight library that automatically captures compliance-relevant events from your AI applications. It provides the logging and audit trail capabilities required by Article 12 of the EU AI Act, without changing how your AI systems work.

Every LLM call, tool invocation, agent decision, and human intervention is automatically logged to your Protectron dashboard—creating a complete, tamper-evident audit trail.

Why Use the SDK?

Article 12 Compliance

The EU AI Act requires high-risk AI systems to have automatic logging capabilities that enable traceability.

  • Automatic event logging — No manual instrumentation needed
  • Immutable audit trail — Cryptographically chained records
  • Complete traceability — Reconstruct any AI decision
  • Evidence generation — Export compliance reports
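"Cryptographically chained records" usually means each log entry carries a hash linking it to its predecessor, so editing any past record breaks every hash after it. The SDK's internal storage format is not documented on this page; the following standard-library sketch only illustrates the technique behind tamper evidence:

```python
# Hash-chained audit records: a conceptual sketch, not the SDK's format.
import hashlib
import json

def chain_records(records):
    """Link each record to its predecessor via a SHA-256 hash."""
    chained = []
    prev_hash = "0" * 64  # genesis value for the first record
    for record in records:
        payload = json.dumps(record, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        chained.append({"record": record, "prev_hash": prev_hash, "hash": entry_hash})
        prev_hash = entry_hash
    return chained

def verify_chain(chained):
    """Recompute every hash; any edited record breaks the chain."""
    prev_hash = "0" * 64
    for entry in chained:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True
```

Verifying the chain after the fact is what makes the trail evidence-grade: an auditor can recompute every hash and detect any retroactive edit.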

Minimal Performance Impact

The SDK is designed for production use with minimal overhead.

  • Async by default — Non-blocking event delivery
  • <1ms latency — No perceptible slowdown
  • Resilient — Graceful degradation if network fails
  • Lightweight — Minimal memory footprint
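The pattern behind these bullets is a bounded in-memory buffer drained by a background worker, so the calling thread never blocks on network I/O and transport failures never propagate to your application. The SDK's internals may differ; in this sketch, `deliver` is a stand-in for the real transport:

```python
# Buffered, non-blocking event delivery with graceful degradation.
# A conceptual sketch only; the SDK's actual implementation is not shown here.
import queue
import threading

class BufferedDelivery:
    def __init__(self, deliver, buffer_size=1000, flush_interval=5.0):
        self._deliver = deliver            # callable that ships a batch
        self._queue = queue.Queue()
        self._buffer_size = buffer_size
        self._flush_interval = flush_interval
        self._stop = threading.Event()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def log(self, event):
        """Non-blocking: enqueue and return immediately."""
        self._queue.put(event)

    def _run(self):
        batch = []
        while not self._stop.is_set() or not self._queue.empty():
            try:
                batch.append(self._queue.get(timeout=self._flush_interval))
            except queue.Empty:
                pass
            if batch and (len(batch) >= self._buffer_size or self._queue.empty()):
                try:
                    self._deliver(batch)   # graceful degradation: a failed
                except Exception:          # delivery never crashes the app
                    pass
                batch = []

    def close(self):
        """Flush remaining events and stop the worker."""
        self._stop.set()
        self._worker.join()
```

The `buffer_size` and `flush_interval` names mirror the callback options documented under Configuration below.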

Framework Native

Works with how you already build AI applications.

  • LangChain — Native callback integration
  • CrewAI — Multi-agent crew logging
  • AutoGen — Conversation tracking
  • Vercel AI SDK — Edge-ready TypeScript
  • Custom — Simple API for any framework

Supported Languages

Python

pip install protectron

Requirements: Python 3.8+

Framework packages:

pip install protectron[langchain]   # LangChain
pip install protectron[crewai]      # CrewAI
pip install protectron[autogen]     # AutoGen
pip install protectron[all]         # All frameworks

TypeScript / JavaScript

npm install @protectron/sdk

Requirements: Node.js 18+

Core Concepts

Systems

A System represents an AI application you want to track. Each system has a unique ID, risk level, and applicable compliance requirements.
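Conceptually, a system record carries the three attributes named above. This is a sketch only; field names are illustrative, and the real SDK manages systems through the dashboard:

```python
# Illustrative shape of a system record (not the SDK's actual model).
from dataclasses import dataclass

@dataclass(frozen=True)
class System:
    system_id: str          # unique ID, e.g. "my-ai-agent"
    risk_level: str         # e.g. "high" under the EU AI Act
    requirements: tuple = ()  # applicable obligations, e.g. ("Article 12",)
```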

Traces

A Trace groups related events into a logical unit—typically a conversation, session, or request.

with protectron.trace("user-session-123") as trace:
    # All events within this block are grouped
    result = agent.invoke({"input": user_message})

Events

Events are individual actions captured by the SDK:

Event Type        Description
llm_call          LLM prompt and completion
tool_call         Tool/function invocation
agent_action      Agent decision or action
human_approval    Human approved an action
human_rejection   Human rejected an action
human_override    Human changed AI decision
risk_event        Anomaly or policy violation
error             Error or exception
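An individual event pairs one of these types with a payload and a timestamp. The wire format is not specified on this page; this sketch only illustrates the shape, with illustrative field names:

```python
# Illustrative event record, validated against the table above.
from dataclasses import dataclass, field
from datetime import datetime, timezone

EVENT_TYPES = {
    "llm_call", "tool_call", "agent_action", "human_approval",
    "human_rejection", "human_override", "risk_event", "error",
}

@dataclass
class Event:
    type: str
    payload: dict
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def __post_init__(self):
        if self.type not in EVENT_TYPES:
            raise ValueError(f"unknown event type: {self.type}")
```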

Callbacks

Callbacks are the primary integration mechanism. They hook into your AI framework and automatically capture events.

# LangChain callback
from protectron.langchain import ProtectronCallback
callback = ProtectronCallback(system_id="my-agent")

# CrewAI callback
from protectron.crewai import ProtectronCallback
callback = ProtectronCallback(system_id="my-crew")

Quick Example

LangChain Agent

from langchain.agents import create_react_agent, AgentExecutor
from langchain_openai import ChatOpenAI
from langchain.tools import Tool
from langchain import hub
from protectron.langchain import ProtectronCallback

# 1. Create the Protectron callback
callback = ProtectronCallback(
    system_id="support-agent",      # Your system ID
    environment="production",        # production, staging, development
    pii_redaction=True              # Auto-redact PII
)

# 2. Set up your agent as normal
llm = ChatOpenAI(model="gpt-5.2")
tools = [
    Tool(name="search", func=search_fn, description="Search knowledge base"),
    Tool(name="ticket", func=create_ticket, description="Create support ticket"),
]
prompt = hub.pull("hwchase17/react")
agent = create_react_agent(llm, tools, prompt)

# 3. Add callback to executor
executor = AgentExecutor(
    agent=agent,
    tools=tools,
    callbacks=[callback],  # Add Protectron here
    verbose=True
)

# 4. Run - all events automatically logged
result = executor.invoke({"input": "I need help with my order"})

What Gets Logged

Trace: trc_abc123
├── 10:23:01.123 llm_call
│   ├── model: gpt-5.2
│   ├── input: "I need help with my order..."
│   ├── output: "I'll search for your order..."
│   └── tokens: 142 in, 89 out
│
├── 10:23:02.456 tool_call
│   ├── tool: search
│   ├── input: {"query": "order status"}
│   └── output: "Order #12345: Shipped..."
│
├── 10:23:03.789 llm_call
│   ├── model: gpt-5.2
│   ├── input: [conversation context]
│   └── output: "Your order #12345 has shipped..."
│
└── 10:23:04.012 trace_end
    └── duration: 2.89s

Configuration

Environment Variables

# Required
export PROTECTRON_API_KEY=pk_live_xxxxxxxxxxxxxxxx

# Optional
export PROTECTRON_ENVIRONMENT=production
export PROTECTRON_DEBUG=false

Callback Options

callback = ProtectronCallback(
    # Required
    system_id="my-agent",
    
    # Environment
    environment="production",       # production, staging, development
    
    # Content logging
    log_llm_content=True,          # Log prompts and completions
    log_tool_inputs=True,          # Log tool parameters
    log_tool_outputs=True,         # Log tool results
    
    # Privacy
    pii_redaction=True,            # Auto-redact emails, phones, etc.
    pii_types=["email", "phone", "ssn", "credit_card"],
    hash_user_ids=True,            # Hash user identifiers
    
    # Performance
    async_mode=True,               # Non-blocking (recommended)
    buffer_size=1000,              # Events before flush
    flush_interval=5.0,            # Seconds between flushes
    
    # Sampling (for high volume)
    sample_rate=1.0,               # 1.0 = 100%, 0.1 = 10%
    
    # Metadata
    include_metadata={
        "team": "customer-success",
        "version": "1.2.0"
    }
)
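For intuition, `pii_redaction` amounts to redact-before-log: detected PII is replaced with placeholders before an event leaves your process. The SDK's detection is presumably more sophisticated; this rough regex sketch covers only two of the `pii_types` listed above:

```python
# Rough sketch of redact-before-log for email and phone PII types.
# Illustrative only; not the SDK's actual detection logic.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text, pii_types=("email", "phone")):
    """Replace each detected PII span with a typed placeholder."""
    for pii_type in pii_types:
        text = PII_PATTERNS[pii_type].sub(f"[REDACTED_{pii_type.upper()}]", text)
    return text
```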

Human Oversight (Article 14)

Log human interventions for Article 14 compliance:

from protectron import Protectron

protectron = Protectron(system_id="supervised-agent")

# When human approves an AI action
protectron.log_human_approval(
    action_type="refund_request",
    action_details={"amount": 150.00, "order_id": "ORD-123"},
    approved_by="supervisor@company.com"
)

# When human rejects an AI action
protectron.log_human_rejection(
    action_type="refund_request",
    action_details={"amount": 5000.00, "order_id": "ORD-456"},
    rejected_by="supervisor@company.com",
    rejection_reason="Amount exceeds auto-approval limit"
)

# When human overrides an AI decision
protectron.log_human_override(
    original_decision="route_to_tier1",
    override_decision="route_to_tier2",
    overridden_by="supervisor@company.com",
    override_reason="Complex technical issue"
)

SDK Packages

Package                  Language      Install
protectron               Python        pip install protectron
protectron[langchain]    Python        pip install protectron[langchain]
protectron[crewai]       Python        pip install protectron[crewai]
protectron[autogen]      Python        pip install protectron[autogen]
protectron[all]          Python        pip install protectron[all]
@protectron/sdk          TypeScript    npm install @protectron/sdk
@protectron/vercel-ai    TypeScript    npm install @protectron/vercel-ai

Ready to Get Started?

Follow the Quick Start Guide to add compliance logging in 5 minutes.