README.md — 11 additions, 3 deletions
@@ -40,6 +40,7 @@ ElectriPy Studio is a curated collection of production-ready Python components a
 - 🤖 **AI building blocks**: Provider-agnostic LLM Gateway with sync/async clients and structured-output helpers, plus a RAG Evaluation Runner for retrieval benchmarking.
 - 📊 **AI Telemetry**: Provider-agnostic telemetry primitives and adapters (JSONL, optional OpenTelemetry) for HTTP resilience, LLM gateway, policy decisions, and RAG evaluation runs.
 - 🧠 **AI product engineering utilities**: Streaming chat primitives, deterministic agent runtime helpers, RAG quality/drift metrics, grounding checks for hallucination reduction, response robustness helpers for structured outputs, prompt templating and composition, token budget tracking and truncation, priority-based context window assembly, rule-based model routing, sliding-window conversation memory, and a declarative tool registry with JSON schema generation.
+- 🛡️ **AI policy and collaboration runtime**: Deterministic policy gateway checks for preflight/postflight/stream/tool flows, plus bounded agent-to-agent collaboration runtime for specialist orchestration patterns.

 ## Quick Start
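One of the utilities listed in the feature bullet above is sliding-window conversation memory. As a rough illustration of that pattern — hypothetical names only, not ElectriPy Studio's actual API — a minimal sketch could look like:

```python
from collections import deque


class SlidingWindowMemory:
    """Keeps only the most recent N conversation turns (illustrative sketch)."""

    def __init__(self, max_turns: int = 4):
        # deque with maxlen silently discards the oldest turn on overflow
        self.turns = deque(maxlen=max_turns)

    def add(self, role: str, content: str) -> None:
        self.turns.append({"role": role, "content": content})

    def context(self) -> list[dict]:
        # Snapshot of the turns that still fit in the window
        return list(self.turns)


memory = SlidingWindowMemory(max_turns=2)
for i in range(3):
    memory.add("user", f"message {i}")
print([t["content"] for t in memory.context()])  # → ['message 1', 'message 2']
```

The window bound is what keeps prompt size predictable: older turns are dropped rather than truncated mid-message.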
@@ -123,8 +124,11 @@ Full documentation is available in the [docs/](docs/) directory:
docs/index.md — 5 additions, 0 deletions
@@ -20,6 +20,8 @@ ElectriPy Studio is a curated collection of production-ready Python components a
 - **AI & LLM Gateway**: Provider-agnostic LLM clients with structured output and safety seams, plus a RAG Evaluation Runner for benchmarking retrieval quality.
 - **AI Telemetry**: Provider-agnostic telemetry primitives and adapters for HTTP resilience, LLM gateway, policy decisions, and RAG evaluation, with a safe-by-default posture.
New documentation file (filename not shown in the source) — additions

+The Agent Collaboration Runtime orchestrates bounded, deterministic handoffs between specialist agents.
+
+## Why it exists
+
+As AI systems move from single-agent flows to specialist-agent teams, reliability depends on explicit message contracts and hop limits. This runtime coordinates those handoffs in-process and works with the Policy Gateway for safety.
+
+## Core concepts
+
+- `CollaborationTask`: top-level objective and metadata.
+- `AgentMessage`: typed message envelope between agents.
+- `CollaborationAgentPort`: handler protocol each agent implements.
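The three core concepts above can be sketched as a minimal in-process handoff loop. This is a hypothetical illustration built from the concept names in the list — the field names, `handle` signature, and `run_collaboration` helper are assumptions, not ElectriPy Studio's actual implementation:

```python
from dataclasses import dataclass, field
from typing import Protocol


@dataclass
class CollaborationTask:
    # Top-level objective plus free-form metadata.
    objective: str
    metadata: dict = field(default_factory=dict)


@dataclass
class AgentMessage:
    # Typed envelope exchanged between agents.
    sender: str
    recipient: str
    content: str


class CollaborationAgentPort(Protocol):
    # Handler protocol each specialist agent implements.
    name: str

    def handle(self, task: CollaborationTask, message: AgentMessage) -> AgentMessage: ...


def run_collaboration(
    task: CollaborationTask,
    agents: dict[str, CollaborationAgentPort],
    first: AgentMessage,
    max_hops: int = 4,
) -> list[AgentMessage]:
    """Route messages between agents until an unknown recipient (e.g. 'done')
    ends the run or the hop limit is reached."""
    trace, message = [first], first
    for _ in range(max_hops):  # bounded: never more than max_hops handoffs
        agent = agents.get(message.recipient)
        if agent is None:
            break
        message = agent.handle(task, message)
        trace.append(message)
    return trace


# Two toy specialist agents wired together:
class Researcher:
    name = "researcher"

    def handle(self, task, message):
        return AgentMessage(self.name, "writer", f"notes on {task.objective}")


class Writer:
    name = "writer"

    def handle(self, task, message):
        return AgentMessage(self.name, "done", f"draft from {message.content}")


task = CollaborationTask(objective="release summary")
trace = run_collaboration(
    task,
    {"researcher": Researcher(), "writer": Writer()},
    AgentMessage("user", "researcher", "start"),
)
print(trace[-1].content)  # → draft from notes on release summary
```

The hop limit is the determinism guarantee the doc describes: even if agents keep addressing each other, the loop terminates after `max_hops` handoffs, and the full message trace is returned for inspection.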