Citation infrastructure for AI agents — open source, MCP-native, self-hostable. Powers Signoff for AI-drafted regulated documents.
Behavioral Trust Clustering: a thermodynamic governance layer that reduces LLM hallucination by 52% on HumanEval. Drop-in wrapper for any decoder. MIT.
Agility infrastructure for regulated AI. Trace, Policy, Evidence under EU jurisdiction. Apache 2.0, on-premise capable.
Interpretability-first ML pipeline wrapper for regulated industries
Secure CI/CD and infrastructure patterns for regulated AI/ML deployments
A modular architecture for AI-powered regulated investigation. 8 universal base components · 7 plug-and-play clusters · 6 deployment profiles.
Governance patterns for autonomous AI agents in regulated financial services — DEFCON state machine, Sovereign Veto, Audit Chain, EU AI Act mapping
Policy enforcement for AI agents in regulated environments (FERPA, HIPAA, GLBA, GDPR): framework adapters for CrewAI, AutoGen, LangChain, Semantic Kernel, Haystack
Regulated multi-agent operations platform with governance, review, and audit-friendly workflows.
Cryptographic receipts for agent traces: hash chain, Ed25519, JCS canonical.
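The entry above names a concrete pattern: agent-trace receipts linked by a hash chain, canonicalized with JCS (RFC 8785), and signed with Ed25519. A minimal stdlib sketch of the hash-chain and canonicalization part follows; the `append_receipt`/`verify_chain` names are hypothetical, the canonicalization only approximates full RFC 8785 (number formatting is omitted), and Ed25519 signing is left as a comment rather than implemented.

```python
import hashlib
import json

def canonicalize(obj) -> bytes:
    # JCS-style canonical JSON: sorted keys, no whitespace.
    # Full RFC 8785 also prescribes exact number serialization, omitted here.
    return json.dumps(obj, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=False).encode()

def append_receipt(chain: list, payload: dict) -> list:
    # Each receipt commits to the previous receipt's hash, forming a chain.
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev": prev, "payload": payload}
    digest = hashlib.sha256(canonicalize(body)).hexdigest()
    # A real system would also Ed25519-sign `digest` here and store the signature.
    chain.append({"prev": prev, "payload": payload, "hash": digest})
    return chain

def verify_chain(chain: list) -> bool:
    # Recompute every hash and check each link points at its predecessor.
    prev = "0" * 64
    for rec in chain:
        if rec["prev"] != prev:
            return False
        body = {"prev": rec["prev"], "payload": rec["payload"]}
        if hashlib.sha256(canonicalize(body)).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True
```

Tampering with any earlier payload breaks every downstream link, which is what makes the trace tamper-evident.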
ComplianceOS — AI-powered on-premise compliance-audit platform (ISO 27001, NIS2, BSI & more)
Proof of Insight — specification, schemas, and governance
FERPA-compliant document filter for Haystack RAG pipelines — identity-scoped pre-filtering before LLM context
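The entry above describes identity-scoped pre-filtering: documents the requester is not authorized to see are dropped before retrieval results ever reach the LLM context. A minimal sketch, assuming a hypothetical `authorized` field on each document record (the real project's schema and Haystack integration are not shown here):

```python
def prefilter(docs: list[dict], requester_id: str) -> list[dict]:
    # Identity-scoped pre-filter: remove unauthorized documents BEFORE
    # the retriever's results are assembled into the LLM prompt, so
    # protected records can never leak via generated text.
    return [d for d in docs if requester_id in d.get("authorized", [])]
```

Filtering before prompt assembly, rather than post-processing the model output, is what makes the guarantee enforceable.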
Public, governed documentation for Keon Systems. Defines claims, architecture, and verification paths for governing AI and automated decisions.
Deny-by-default policy decisions with symmetric receipts at the tool boundary.
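The entry above combines two ideas: deny-by-default (only explicit allow rules permit a tool call) and symmetric receipts (a receipt is issued for denials as well as approvals). A minimal sketch under those assumptions; the `decide` function and the allow-rule shape are hypothetical, not the project's actual API:

```python
def decide(tool: str, action: str, allow_rules: set[tuple[str, str]]):
    # Deny-by-default: anything not explicitly allowed is denied.
    decision = "allow" if (tool, action) in allow_rules else "deny"
    # Symmetric receipt: every decision, including a deny, produces a record
    # that can be logged and audited at the tool boundary.
    receipt = {"tool": tool, "action": action, "decision": decision}
    return decision, receipt
```

Issuing receipts for denials matters for audit: the absence of an action is itself evidence that policy was enforced.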
kolm — the AI compiler. Compile any task into a signed .kolm artifact that runs locally. RS-1 open spec, MIT.
Building SIR: deterministic pre-inference governance gate. Blocks policy-breaking requests. Signed, offline-verifiable audits for regulated/insurable AI. MIT.
Policy-first, OpenAI-compatible governance gateway for regulated AI workloads. Enforces runtime policy evaluation, PHI/PII redaction, retrieval authorization, and tamper-evident audit trails in the critical path of every LLM and RAG request. Built for healthcare, financial services, and sovereign AI deployments.
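The entry above places PHI/PII redaction in the critical path of every LLM request. A minimal sketch of that step, assuming hypothetical patterns and names; a production redactor would cover far more identifier classes and typically use NER alongside regexes:

```python
import re

# Hypothetical illustrative patterns, not an exhaustive PHI/PII inventory.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    # Replace each match with a typed placeholder BEFORE the text
    # enters the outbound LLM or RAG request.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text
```

Typed placeholders (rather than blanks) preserve enough structure for the model to reason about the document while keeping identifiers out of the request.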