Hi @MervinPraison,
I built a runtime to sandbox AI agent tasks. I think it could be a great lightweight alternative to run untrusted code locally for PraisonAI.
Here's an example using the LangChain integration (langchain-capsule):
```python
from langchain_capsule import CapsulePythonTool

code = """
def factorial(n):
    if n <= 1:
        return 1
    return n * factorial(n - 1)

factorial(6)
"""

python_tool = CapsulePythonTool()
result = python_tool.run(code)
print(result)  # "720"
```
Only the first run takes about a second (cold start); every subsequent run starts in ~10ms.
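If you want to check the cold-start vs. warm-start numbers in your own environment, a small timing helper like this works (plain Python, no Capsule-specific assumptions; the trivial `sum` call below is just a placeholder for `python_tool.run(code)`):

```python
import time

def time_call(fn, *args):
    """Run fn(*args) and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

# Placeholder call; in practice, wrap python_tool.run(code) and
# compare the first (cold) call against later (warm) calls.
result, elapsed = time_call(sum, [1, 2, 3])
print(result, elapsed)
```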
Why this could be useful for PraisonAI:
- Simple to adopt: no setup, no cloud dependency, and it behaves the same everywhere (dev, CI, prod).
- No Docker complexity: when you already run containers in production (e.g., via Kubernetes), adding another container layer just to isolate untrusted code means Docker-in-Docker; Capsule avoids that entirely.
- Strong isolation for untrusted code: each execution runs in its own WebAssembly sandbox, with isolated memory and no host access.
I see PraisonAI is compatible with LangChain, but I'd be happy to build a custom integration specifically for PraisonAI if that's more suitable.
Here are the relevant links:
LangChain integration: github.com/mavdol/langchain-capsule
Main Capsule repo: github.com/mavdol/capsule
Hope this helps!