Distributed multi-agent framework for Pipecat. Each agent runs its own Pipecat pipeline and communicates with other agents through a shared message bus.
```bash
uv sync --group dev   # Install dependencies
uv run pytest         # Run tests
uv run ruff check .   # Lint
uv run ruff format    # Format
```

Agents communicate through a shared `AgentBus`. A typical voice-first system has:
- Main agent (`BaseAgent`): owns the transport (STT/TTS) with a `BusBridgeProcessor` where an LLM would normally go.
- Voice/LLM agents (`LLMAgent(bridged=())`): run their own LLM pipeline, receive frames from the bridge, and transfer between each other.
- Worker agents (`BaseAgent`): receive tasks, process them, and return results.
- `BaseAgent` -- pipeline lifecycle, parent-child relationships, tasks, activation
- `BaseAgent(bridged=())` -- adds edge processors for bus frame routing (all bridges)
- `BaseAgent(bridged=("voice",))` -- edge processors filtered to named bridges
- `LLMAgent` -- `build_llm()`, `@tool` registration, message injection on activation
- `LLMContextAgent` -- `LLMAgent` with built-in `LLMContext` + `LLMContextAggregatorPair`
- `FlowsAgent` -- Pipecat Flows integration (node-based conversation, always bridged)
- `src/pipecat_subagents/agents/base_agent.py` -- BaseAgent, _BusEdgeProcessor, AgentActivationArgs, AgentReadyData, AgentErrorData
- `src/pipecat_subagents/agents/llm/llm_agent.py` -- LLMAgent, LLMAgentActivationArgs
- `src/pipecat_subagents/agents/llm/llm_context_agent.py` -- LLMContextAgent
- `src/pipecat_subagents/agents/llm/tool_decorator.py` -- @tool decorator
- `src/pipecat_subagents/agents/flows/flows_agent.py` -- FlowsAgent
- `src/pipecat_subagents/agents/task_context.py` -- TaskContext, TaskGroup, TaskGroupContext, TaskGroupEvent, TaskGroupResponse, TaskGroupError, TaskStatus
- `src/pipecat_subagents/bus/bus.py` -- AgentBus abstract base
- `src/pipecat_subagents/bus/bridge_processor.py` -- BusBridgeProcessor (supports named bridges)
- `src/pipecat_subagents/bus/messages.py` -- all bus message types (BusFrameMessage has a `bridge` field)
- `src/pipecat_subagents/registry/registry.py` -- AgentRegistry (async watch with immediate fire)
- `src/pipecat_subagents/runner/runner.py` -- AgentRunner
- The `active` flag lives on `BaseAgent` (defaults to `False`)
- `activate_agent(name)` / `deactivate_agent(name)` send bus messages, handled by `BaseAgent`
- `on_activated(args)` / `on_deactivated()` hooks fire on the target agent
- `handoff_to(name)` on `BaseAgent` is a convenience: it deactivates self locally, then activates the target
- `BaseAgent.handoff_to` takes `activation_args=`. `LLMAgent.handoff_to` adds `messages=` (spoken before transfer) and `result_callback=`
- `LLMAgent.end` takes `messages=` (spoken before ending) and `result_callback=`
- `activate_agent` accepts `Optional[AgentActivationArgs]` (a dataclass, not Pydantic)
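The handoff call shape can be sketched with a stand-in class. `_StubAgent` below is hypothetical and only records the call; in real code the receiver would be an `LLMAgent` and the bus would carry the activation to the target:

```python
import asyncio


class _StubAgent:
    """Hypothetical stand-in for LLMAgent, recording handoff calls."""

    def __init__(self):
        self.calls = []
        self.active = True

    async def handoff_to(self, name, messages=None, activation_args=None, result_callback=None):
        self.active = False                   # deactivates self locally...
        self.calls.append((name, messages))   # ...then activates the target


async def transfer_to_billing(agent):
    # messages= is spoken before the transfer (LLMAgent.handoff_to only)
    await agent.handoff_to(
        "billing",
        messages=[{"role": "system", "content": "Say you are transferring the caller."}],
    )


agent = _StubAgent()
asyncio.run(transfer_to_billing(agent))
```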
- Only root agents (added via `AgentRunner.add_agent()`) are announced to remote runners via the registry
- Child agents (added via `BaseAgent.add_agent()`) are private to the parent; lifecycle (end, cancel) is propagated automatically
- `add_agent()` does NOT auto-watch. To be notified when a child is ready, use `watch_agent(name)` or `@agent_ready(name="name")`
- `on_agent_ready` only fires for agents explicitly watched via `watch_agent(name)` or the `@agent_ready` decorator
- `@agent_ready(name="name")` is declarative sugar for `watch_agent`. Watches are usually registered after the agent is ready and activated, so handlers won't fire prematurely
- `watch_agent()` / `registry.watch()` fires immediately if the agent is already registered
- Runner names must be unique across distributed setups (auto-generated with a UUID by default)
- `BusBridgeProcessor(bridge="voice")` tags outgoing frames and filters incoming frames by bridge name
- `BaseAgent(bridged=("voice",))` only accepts frames from the "voice" bridge
- `BaseAgent(bridged=())` accepts frames from all bridges (the default when bridged)
- The `BusFrameMessage.bridge` field carries the bridge name (`None` for unnamed)
- Enables parallel pipelines (voice + video) or multiple agents on the same bridge
Two patterns for sending work to agents:
Context managers (structured, recommended):
- `task(agent_name, payload=, timeout=)` returns a `TaskContext` for a single agent
- `task_group(*agent_names, payload=, timeout=)` returns a `TaskGroupContext` for parallel dispatch
- Both are async context managers that wait for responses on exit
- Both support `async for event in ctx` to receive intermediate events (updates, streaming)
- `TaskContext` yields `TaskEvent`; `TaskGroupContext` yields `TaskGroupEvent` (includes `agent_name`)
- On error inside the `async with` block (including `CancelledError`), the task is automatically cancelled
- Cleanup is shielded so it completes even during tool interruption
Fire-and-forget:
- `request_task(agent_name, payload=, timeout=)` sends work and returns a `task_id`
- `request_task_group(*agent_names, payload=, timeout=)` sends to multiple agents and returns a `task_id`
- Use callbacks (`on_task_response`, `on_task_completed`) to handle results
- The caller must cancel manually if needed (e.g. on tool interruption)
Both patterns wait for agents to be ready (via registry) before sending requests. Task completion does NOT end the agent's pipeline; agents stay alive for reuse.
- Workers receive `on_task_request(message)` or use `@task` decorated handlers
- All `send_task_*` methods require an explicit `task_id` argument (from `message.task_id`)
- `send_task_response(task_id, response, status=)` sends the result and removes the task from `active_tasks`
- `send_task_update(task_id, update)` sends progress without completing
- `send_task_stream_start/data/end(task_id, data)` for streaming results
- The `active_tasks` property returns a `dict[str, BusTaskRequestMessage]` of in-flight tasks
- Task handlers always run in their own asyncio task so the bus message loop is never blocked
- Multiple tasks can be in flight simultaneously
- When the agent stops, any still-active tasks are automatically reported as `CANCELLED`
- `cancel_task(task_id, reason=)` sends a `BusTaskCancelMessage` to all agents in the group
- The worker receives `on_task_cancelled(message)`, then auto-sends a `CANCELLED` response
- Context managers (`task()`, `task_group()`) cancel automatically on exception or `CancelledError`
- For fire-and-forget tasks, the caller must cancel manually. Pattern for tool interruption:
```python
task_ids = []
try:
    task_ids.append(await self.request_task("w1", payload=data))
    task_ids.append(await self.request_task("w2", payload=data))
    # ... do work ...
except asyncio.CancelledError:
    for tid in task_ids:
        await self.cancel_task(tid, reason="tool cancelled")
    raise
```

All task hooks receive the bus message directly (not individual arguments):
- `on_task_request(message: BusTaskRequestMessage)`
- `on_task_response(message: BusTaskResponseMessage)`
- `on_task_error(message: BusTaskResponseMessage)`
- `on_task_update(message: BusTaskUpdateMessage)`
- `on_task_update_requested(message: BusTaskUpdateRequestMessage)`
- `on_task_completed(result: TaskGroupResponse)`
- `on_task_stream_start/data/end(message: BusTaskStream*Message)`
- `on_task_cancelled(message: BusTaskCancelMessage)`
- Google-style docstrings
- Docstrings explain purpose, not implementation. Don't describe which internal methods are called or how data flows internally. Do explain what developers need to know to use or extend the API.
- Don't enumerate specific message fields in hook docstrings; the type signature is sufficient
- No em dashes in docstrings or documentation. Use periods, colons, semicolons, or commas instead.
- Public methods: document with Args/Returns/Raises as needed
- Private methods (starting with `_`): don't add docstrings unless the logic is non-obvious
- Use backticks for code references in docstrings
- Lifecycle hooks should always call `super()` (e.g. `await super().on_activated(args)`)
- No Pydantic in the agent layer; use dataclasses with `from_dict()`/`to_dict()` for serialization
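The dataclass serialization convention can be sketched as follows. `ExampleActivationArgs` and its fields are hypothetical; the real `AgentActivationArgs` lives in `src/pipecat_subagents/agents/base_agent.py` and may differ:

```python
from dataclasses import asdict, dataclass
from typing import Optional


@dataclass
class ExampleActivationArgs:
    """Hypothetical activation-args dataclass illustrating the convention."""

    reason: str = ""
    metadata: Optional[dict] = None

    def to_dict(self) -> dict:
        # asdict() recursively converts nested dataclasses too.
        return asdict(self)

    @classmethod
    def from_dict(cls, data: dict) -> "ExampleActivationArgs":
        return cls(**data)


# Round-trip: serialize for the bus, then reconstruct on the other side.
args = ExampleActivationArgs(reason="handoff", metadata={"from_agent": "main"})
restored = ExampleActivationArgs.from_dict(args.to_dict())
```

Plain dataclasses keep the agent layer dependency-free; `to_dict()` output is JSON-serializable as long as field values are.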