Complete guide to integrating Protectron with CrewAI for EU AI Act compliance logging of multi-agent systems.
Multi-agent systems introduce unique compliance challenges. The EU AI Act's Article 12 requires logging of "events relevant to identifying risks." For multi-agent systems, that means tracking every agent's contribution.
| Challenge | Risk | Protectron Solution |
|---|---|---|
| Agent attribution | "Which agent made this decision?" | Per-agent event tagging |
| Delegation chains | "Why did Agent B get this task?" | Delegation logging |
| Emergent behavior | "We didn't expect this outcome" | Full execution trace |
| Accountability | "Who's responsible?" | Clear agent → action mapping |
| Reproducibility | "Can we explain what happened?" | Complete audit trail |
```shell
# Install with CrewAI support
pip install protectron[crewai]

# Or install separately
pip install protectron crewai crewai-tools
```
```python
from protectron.crewai import ProtectronCallback

# Basic initialization
callback = ProtectronCallback(
    system_id="my-crew"
)

# Full configuration
callback = ProtectronCallback(
    system_id="my-crew",
    environment="production",
    # CrewAI-specific options
    log_agent_thoughts=True,   # Log agent reasoning
    log_delegation=True,       # Log task delegation
    log_collaboration=True,    # Log inter-agent communication
    # Content options
    log_llm_content=True,      # Log prompts/completions
    log_tool_outputs=True,     # Log tool results
    # Privacy
    pii_redaction=True,        # Auto-redact PII
)
```

```python
from crewai import Agent, Task, Crew, Process

# Define agents
researcher = Agent(
    role="Research Analyst",
    goal="Find accurate and relevant information",
    backstory="You are an expert researcher with attention to detail.",
    verbose=True
)

writer = Agent(
    role="Content Writer",
    goal="Create engaging and informative content",
    backstory="You are a skilled writer who makes complex topics accessible.",
    verbose=True
)

# Define tasks
research_task = Task(
    description="Research the EU AI Act requirements for high-risk AI systems",
    agent=researcher,
    expected_output="A detailed summary of key requirements"
)

writing_task = Task(
    description="Write a compliance guide based on the research",
    agent=writer,
    expected_output="A clear, actionable compliance guide"
)

# Create crew with Protectron callback
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    process=Process.sequential,
    callbacks=[callback]  # Add Protectron here
)

# Run crew - all agent actions logged
result = crew.kickoff()
print(result)
```

Agents work one after another, each building on the previous agent's work.
Each agent's actions are logged in order, preserving the handoff from one agent to the next.
A manager agent delegates tasks to worker agents, which suits complex projects. Each delegation is logged with the delegating manager, the assigned worker, and the task context.
Agents can request human input at critical decision points, supporting Article 14 compliance. Each request and the human's response are logged.
Track tool usage across agents with full attribution: every tool call is logged with the calling agent, its inputs, and its results.
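One way to picture per-agent tool attribution is a decorator that stamps every call with the calling agent's identity before it reaches the log. A sketch in plain Python (the `attributed` decorator is illustrative, not Protectron's API):

```python
import functools
import time

tool_calls: list[dict] = []

def attributed(agent_role: str):
    """Wrap a tool so every call is logged with the calling agent's identity."""
    def decorator(tool_fn):
        @functools.wraps(tool_fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            result = tool_fn(*args, **kwargs)
            tool_calls.append({
                "agent": agent_role,
                "tool": tool_fn.__name__,
                "duration_s": time.monotonic() - start,
            })
            return result
        return wrapper
    return decorator

@attributed("Research Analyst")
def search(query: str) -> str:
    return f"results for {query!r}"

search("EU AI Act Article 12")
```

After the call, `tool_calls` holds one record naming both the tool and the agent that used it, which is the attribution a callback-based integration captures automatically.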
For Article 14 compliance, add human oversight to critical crew actions:
```python
from crewai import Agent, Task, Crew
from protectron.crewai import ProtectronCallback
from protectron import Protectron

callback = ProtectronCallback(system_id="supervised-crew")
protectron = Protectron(system_id="supervised-crew")


class SupervisedCrew:
    """Crew wrapper with human oversight capabilities."""

    def __init__(self, crew: Crew, require_approval_for: list[str] | None = None):
        self.crew = crew
        self.require_approval_for = require_approval_for or []

    def _get_approval(self, action_type: str, approver_email: str) -> bool:
        """Ask the approver for a decision. Simplified to console input here;
        in production this might be an email or ticketing workflow."""
        answer = input(f"Approve {action_type}? (approver: {approver_email}) [y/N]: ")
        return answer.strip().lower() == "y"

    def kickoff(self, approver_email: str):
        """Run crew with human oversight checkpoints."""
        # Pre-execution approval for sensitive crews
        if "crew_start" in self.require_approval_for:
            if not self._get_approval("crew_start", approver_email):
                return {"status": "rejected", "reason": "Crew execution not approved"}

        # Run the crew
        result = self.crew.kickoff()

        # Post-execution approval for critical outputs
        if "crew_output" in self.require_approval_for:
            if self._get_approval("crew_output", approver_email):
                protectron.log_human_approval(
                    action_type="crew_output",
                    action_details={"output": str(result)[:500]},
                    approved_by=approver_email,
                )
            else:
                protectron.log_human_rejection(
                    action_type="crew_output",
                    action_details={"output": str(result)[:500]},
                    rejected_by=approver_email,
                    rejection_reason="Output not approved for use",
                )
                return {"status": "rejected", "result": result}

        return {"status": "approved", "result": result}


# Usage
supervised = SupervisedCrew(
    crew=crew,
    require_approval_for=["crew_output"],
)
result = supervised.kickoff(approver_email="supervisor@company.com")
```

Per-agent logging preferences can be set through `agent_config`:

```python
callback = ProtectronCallback(
    system_id="granular-logging",
    # Agent-specific settings
    agent_config={
        "Research Analyst": {
            "log_thoughts": True,
            "log_tool_outputs": True
        },
        "Content Writer": {
            "log_thoughts": False,  # Don't log creative process
            "log_tool_outputs": True
        }
    }
)
```

For crews that handle sensitive data, exclude task content and redact known patterns:

```python
callback = ProtectronCallback(
    system_id="sensitive-crew",
    # Don't log content for specific tasks
    exclude_task_content=[
        "Process customer PII",
        "Handle payment information"
    ],
    # Redact specific patterns
    custom_redaction_patterns={
        "credit_card": r"\d{4}-\d{4}-\d{4}-\d{4}",
        "api_key": r"sk-[a-zA-Z0-9]{32,}"
    }
)
```

For high-volume crews, trade verbosity for throughput:

```python
callback = ProtectronCallback(
    system_id="high-volume-crew",
    # Reduce logging verbosity
    log_agent_thoughts=False,
    log_llm_content=False,  # Metadata only
    # Sample events
    sample_rate=0.5,        # Log 50% of events
    # Batch settings
    buffer_size=2000,
    flush_interval=15.0
)
```

Your Protectron dashboard shows real-time crew execution status:
```
┌─────────────────────────────────────────────────────────────┐
│ Crew: content-creation-crew                                 │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│ Status: Running          Elapsed: 00:03:45                  │
│                                                             │
│ Agents:                                                     │
│ ┌──────────────────┬────────────┬───────────┬─────────────┐ │
│ │ Agent            │ Status     │ Actions   │ Tools Used  │ │
│ ├──────────────────┼────────────┼───────────┼─────────────┤ │
│ │ Research Analyst │ ✓ Complete │ 12        │ 5           │ │
│ │ Content Writer   │ ● Active   │ 3         │ 0           │ │
│ │ Editor           │ ○ Pending  │ 0         │ 0           │ │
│ └──────────────────┴────────────┴───────────┴─────────────┘ │
│                                                             │
│ Tasks:                                                      │
│ [████████████████████░░░░░░░░░░] 2/3 Complete               │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```

```python
from protectron import Protectron

protectron = Protectron(system_id="my-crew")

# Get crew execution summary
summary = protectron.get_crew_summary(
    crew_run_id="run_abc123",
    include_agent_breakdown=True
)
# Returns:
# {
#     "total_duration": 245.3,
#     "tasks_completed": 3,
#     "total_agent_actions": 28,
#     "total_tool_calls": 12,
#     "agents": {
#         "Research Analyst": {"actions": 15, "tool_calls": 10},
#         "Content Writer": {"actions": 13, "tool_calls": 2}
#     },
#     "errors": 0,
#     "human_interventions": 1
# }
```

**How do I attribute an action to a specific agent?**
Every event is tagged with the agent's role and ID. The audit trail shows a clear mapping from agent to action to outcome, making it easy to attribute decisions.
**Can I log task delegation between agents?**
Yes, set `log_delegation=True` to capture when a manager agent delegates tasks to workers, including the task description, assigned agent, and context.

**What is the logging overhead?**
With `async_mode=True` (the default), logging is non-blocking. For complex crews, typical overhead is less than 1% of total execution time.

**How do I log human approvals?**
Use the Protectron client directly: call `protectron.log_human_approval()` when humans approve crew outputs or intervene in crew execution.

**Can I configure logging per agent?**
Yes, use `agent_config` to set per-agent logging preferences, or `exclude_agents` to skip specific agents entirely.

**What happens if a crew run fails?**
All events up to the failure are logged, including the error details, which agent was active, and the crew state at failure time.
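One last practical note: the custom redaction patterns shown in the sensitive-crew configuration are plain regular expressions, so they can be sanity-checked locally before being handed to the callback. The pattern strings below repeat that example; the `redact` helper is illustrative, not part of Protectron:

```python
import re

patterns = {
    "credit_card": r"\d{4}-\d{4}-\d{4}-\d{4}",
    "api_key": r"sk-[a-zA-Z0-9]{32,}",
}

def redact(text: str) -> str:
    """Replace every pattern match with a [REDACTED:<label>] placeholder."""
    for label, pattern in patterns.items():
        text = re.sub(pattern, f"[REDACTED:{label}]", text)
    return text

print(redact("Card 4111-1111-1111-1111 on file"))
# -> Card [REDACTED:credit_card] on file
```

Testing patterns this way before deployment avoids discovering a non-matching regex in a compliance audit.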
Add EU AI Act compliance to your multi-agent systems in minutes.