Add EU AI Act compliance to your AI applications with a few lines of code. Automatic audit trails, human oversight tracking, and compliance evidence—without changing how you build.
EU AI Act compliance isn't just documentation—it requires operational changes to how your AI systems work.
Instead of bolting compliance onto finished systems, integrate it during development. Every AI action is logged from day one.
Our SDK hooks into your existing frameworks. Add a callback or middleware—your business logic stays unchanged.
Logs become compliance evidence automatically. No manual documentation of "what the AI did."
Every production run generates compliance data. You're not compliant once—you're compliant continuously.
# Install the core SDK
pip install protectron
# With framework integrations
pip install protectron[langchain]
pip install protectron[crewai]
pip install protectron[autogen]
# Or install everything
pip install protectron[all]

Python 3.8+ • No native dependencies • Linux, macOS, Windows
# npm
npm install @protectron/sdk
# yarn
yarn add @protectron/sdk
# pnpm
pnpm add @protectron/sdk
# Framework packages
npm install @protectron/langchain
npm install @protectron/vercel-ai

Node.js 18+ • TypeScript 4.7+ • ESM and CommonJS
Sign up at dashboard.protectron.ai and create an API key.
export PROTECTRON_API_KEY=pk_live_xxxxxxxxxxxxx

Configure with your system ID and environment.
from protectron import Protectron

protectron = Protectron(
    system_id="my-ai-system",
    environment="production",
)

Integrate with LangChain, CrewAI, or any framework.
from protectron.langchain import ProtectronCallback

agent = create_react_agent(
    llm=ChatOpenAI(model="gpt-5.2"),
    tools=my_tools,
    callbacks=[ProtectronCallback(system_id="support-agent")],
)

# Use normally - all actions are logged
result = agent.invoke({"input": "Help me with my order"})

Open dashboard.protectron.ai to see logged events in real-time.
Log any event with structured data. Custom event types, metadata, and context.
Group related events into traces. Nested spans for sub-operations.
Structured logging for LLM interactions. Model, input, output, tokens, latency.
Capture tool/function invocations with parameters and results.
Document when your AI makes choices. Options, selection, confidence, reasoning.
Track human interventions for Article 14 compliance. Approvals, rejections, overrides.
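The event categories above can be illustrated with a self-contained sketch. Note the `Event` and `EventLog` names here are hypothetical stand-ins for illustration, not the SDK's actual API:

```python
import time
import uuid
from dataclasses import dataclass, field

# Hypothetical event record mirroring the categories described above:
# custom events, traces, LLM calls, tool calls, decisions, and
# human-oversight actions.
@dataclass
class Event:
    event_type: str   # e.g. "llm_call", "tool_call", "human_override"
    data: dict        # structured payload (model, tokens, reasoning, ...)
    trace_id: str     # groups related events into one trace
    span_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    timestamp: float = field(default_factory=time.time)

class EventLog:
    def __init__(self):
        self.events = []

    def log(self, event_type, data, trace_id):
        self.events.append(Event(event_type, data, trace_id))

    def trace(self, trace_id):
        """Return all events in a trace, oldest first."""
        return [e for e in self.events if e.trace_id == trace_id]

log = EventLog()
trace_id = uuid.uuid4().hex
log.log("llm_call", {"model": "gpt-4o", "tokens": 512}, trace_id)
log.log("tool_call", {"name": "lookup_order", "result": "shipped"}, trace_id)
log.log("human_override", {"action": "approved", "user": "agent-7"}, trace_id)
print(len(log.trace(trace_id)))  # 3 events grouped under one trace
```

The key idea is that every event type shares one structured schema, so traces and oversight actions can be queried uniformly later as compliance evidence.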
Fine-tune logging behavior, privacy settings, and performance options.
from protectron import Protectron

protectron = Protectron(
    # Required
    api_key="pk_live_xxx",
    system_id="my-system",
    # Environment
    environment="production",
    version="1.2.0",
    # Content options
    log_llm_content=True,
    log_tool_inputs=True,
    log_tool_outputs=True,
    # Privacy
    pii_redaction=True,
    hash_user_ids=True,
    # Performance
    async_mode=True,
    buffer_size=1000,
    flush_interval=5.0,
)

Non-blocking, resilient, and minimal overhead.
Async mode latency: < 1 ms per event
Throughput: 10,000+ events per second per SDK instance
Memory usage: ~2 MB default buffer
Upload throughput: 1,000+ events/second, batched
Yes. Protectron complements tools like LangSmith, Datadog, and others. Use them for debugging and performance monitoring, use Protectron for compliance. Both can run simultaneously.
Enable persist_on_failure to write events to disk if upload fails. Events are recovered and sent when your application restarts.
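Conceptually, persist-on-failure behaves like this sketch: failed uploads are appended to a local spool file and replayed later. This is a hypothetical stand-in, not the SDK's implementation; the spool path and function names are made up:

```python
import json
import os

def upload_with_fallback(events, upload, spool_path="protectron_spool.jsonl"):
    """If upload raises, append events to a local JSONL spool on disk."""
    try:
        upload(events)
    except Exception:
        with open(spool_path, "a") as f:
            for e in events:
                f.write(json.dumps(e) + "\n")

def recover(upload, spool_path="protectron_spool.jsonl"):
    """On restart: replay spooled events, then clear the spool on success."""
    if not os.path.exists(spool_path):
        return 0
    with open(spool_path) as f:
        events = [json.loads(line) for line in f]
    upload(events)  # if this raises, the spool is kept for the next attempt
    os.remove(spool_path)
    return len(events)
```

JSON Lines is a natural fit here because each failed batch can be appended atomically without rewriting the file.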
Yes. Set enabled=False to disable logging entirely, or use environment='development' to separate dev data from production.
Use pii_redaction=True for automatic PII detection, log_llm_content=False to skip prompt/completion content, or exclude_tools to skip specific tool outputs.
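As an illustration of what `pii_redaction=True` implies, here is a toy regex-based redactor. The SDK's actual detection is presumably more sophisticated; the two patterns below are illustrative only:

```python
import re

# Toy PII patterns: email addresses and phone numbers. Real detection
# would cover many more categories (names, addresses, IDs, ...).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text):
    # Replace each match with a typed placeholder so logs stay auditable
    # without retaining the underlying personal data.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com or +1 (555) 010-7788"))
# → "Contact [EMAIL] or [PHONE]"
```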
The SDK is open source and available on GitHub. The cloud dashboard and storage are hosted services included in your subscription.
Our roadmap includes Haystack, DSPy, and Semantic Kernel. Request integrations at feedback@protectron.ai or via the dashboard.
Add EU AI Act compliance to your AI applications today. Install the SDK, add a callback, and you're logging.