KafGraph is the distributed shared brain of collaborating agents — a graph database and reflection engine written in Go. It ingests agent conversation data from Apache Kafka topics, structures it as a property graph, and provides temporal reflection services that allow agents and teams to learn from past interactions.
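The ingestion path can be pictured as mapping each Kafka record onto a property-graph element. A minimal sketch, assuming a JSON wire format; the `Node`/`Edge` types and field names here are illustrative stand-ins, not the actual `internal/graph` API:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Node and Edge sketch the property-graph shapes KafGraph builds from
// agent conversation records. Hypothetical types for illustration only.
type Node struct {
	ID    string                 `json:"id"`
	Label string                 `json:"label"`
	Props map[string]interface{} `json:"props"`
}

type Edge struct {
	From, To, Type string
}

// nodeFromRecord maps one conversation record (the JSON value of a
// Kafka message) onto a graph node. The wire format is a placeholder.
func nodeFromRecord(value []byte) (Node, error) {
	var msg struct {
		MessageID string `json:"message_id"`
		Agent     string `json:"agent"`
		Text      string `json:"text"`
	}
	if err := json.Unmarshal(value, &msg); err != nil {
		return Node{}, err
	}
	return Node{
		ID:    msg.MessageID,
		Label: "Message",
		Props: map[string]interface{}{"agent": msg.Agent, "text": msg.Text},
	}, nil
}

func main() {
	record := []byte(`{"message_id":"m-1","agent":"planner","text":"split the task"}`)
	n, err := nodeFromRecord(record)
	if err != nil {
		panic(err)
	}
	fmt.Println(n.ID, n.Label, n.Props["agent"])
}
```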
```sh
make build            # Build the kafgraph binary
make build-linux      # Cross-compile for linux/amd64
make install          # Install to $GOPATH/bin
make clean            # Remove build artifacts

make test             # Run all unit + E2E tests
make test-unit        # Unit tests only
make test-e2e         # End-to-end tests (in-process, uses temp BadgerDB)
make test-integration # Integration tests (requires docker-compose)
make test-fuzz        # Fuzz tests (Go native fuzzing)
make test-race        # Unit + E2E with -race detector

make lint             # Run golangci-lint
make lint-fix         # Auto-fix lint issues
make vet              # go vet
make fmt              # gofmt -w
make fmt-check        # Check formatting (CI gate)

make cover            # Generate coverage report
make cover-html       # Open HTML coverage report
make cover-check      # Fail if coverage < 80%

make docker-build     # Build multi-stage Docker image
make docker-push      # Push to registry
make docker-up        # Start dev environment (MinIO + Kafka + KafGraph)
make docker-down      # Stop dev environment
make docker-logs      # Tail dev environment logs

make docs-serve       # Serve Jekyll docs locally (http://localhost:4000)
make docs-build       # Build static Jekyll site
make docs-sync-check  # Verify docs match code (CI gate)

make req-index        # Regenerate SPEC/FR/INDEX.md
make spec-check       # Validate requirement files

make commit-check     # Pre-push gate: fmt + vet + race + coverage
make release-check    # Full pre-release gate: lint + test + cover + docs + fmt
make ci               # Simulate CI pipeline locally
```

Never commit directly to main after the initial scaffold. Configure GitHub branch protection rules: require a PR (0 approvals minimum) and merge only after all CI checks pass. A single broken commit can block the workflow for hours — clean code in main is worth the discipline.
Every push must pass `go fmt`, `go vet`, `go test -race`, the 80% coverage minimum, and `govulncheck`. Run `make commit-check` before every push. The pre-commit hook (`hack/pre-commit.sh`) enforces formatting and linting automatically.
- `golangci-lint` — static analysis with strict config
- `go vet` — correctness checks
- `go test -race` — race condition detection on all tests
- Coverage gate — 80% minimum, 100% on critical paths (storage, graph, server)
- `gofmt` check — consistent formatting
- License header check — Apache 2.0 on all `.go` files
- `gosec` — security-focused static analysis (blocks merge on findings)
- `govulncheck` — known vulnerability scanning in dependencies
- Docs sync check — documentation matches code
Instead of CodeQL ($30/month), we use two free tools that cover the same ground:
- `gosec` — Go-specific security linter (SQL injection, command injection, hardcoded credentials, weak crypto, etc.). Runs via golangci-lint on every PR.
- `govulncheck` — Go's official vulnerability checker. Scans dependencies against the Go vulnerability database. Runs on every PR and locally via `make vulncheck`.

Both block merge on findings. Run `make sec` locally to check both.
- Require pull request before merging (0 reviews minimum)
- Require status checks to pass: `ci`, `security`
- No direct pushes to main after initial setup
- Go version: 1.25+ with `CGO_ENABLED=0` (static linking)
- Linter: golangci-lint with config from `.golangci.yml`
- Coverage: 80% minimum enforced by `hack/coverage.sh`; 100% on critical packages
- Race detection: all tests run with `-race` in CI
- Security: gosec + govulncheck on all PRs (free, blocks merge on findings)
- License: Apache 2.0 header on all `.go` files (enforced by `hack/license-header.sh`)
- Sync gate: every public API change must update `docs/` and tests (release gate)
- Requirements: tracked as `SPEC/FR/req-NNN.md` with monotonic numbering
- Phases: tracked in `SPEC/PLAN.md`
- Commit messages: imperative mood, reference `req-NNN` where applicable
- No direct commits to main: always use PRs after initial scaffold
- SPEC/initial-idea.md — project vision and motivation
- SPEC/requirements.md — full functional, non-functional, and integration requirements
- SPEC/solution-design.md — technology selection, layer architecture, data model
- SPEC/about-agent-brains.md — agent brain concept and design rationale
- SPEC/kafclaw-topic-reference.md — KafClaw topic hierarchy and wire format
- SPEC/PLAN.md — phase tracker (0: Foundation → 8: Hardening)
| Package | Purpose |
|---|---|
| `cmd/kafgraph` | Entry point, CLI setup |
| `internal/config` | Viper-based configuration loader |
| `internal/graph` | Core graph API (CRUD nodes/edges, property graph model) |
| `internal/storage` | Storage engines (BadgerDB default) |
| `internal/reflect` | Reflection Engine (scheduler, cycle runner, scorer, feedback checker) |
| `internal/server` | Bolt v4 protocol, HTTP API, Brain Tool API |
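To make the `internal/graph` layer concrete, here is a toy in-memory stand-in for the CRUD surface the table describes. The method names (`AddNode`, `AddEdge`, `Neighbors`) are assumptions about the API shape, not the real package, which is backed by BadgerDB:

```go
package main

import (
	"errors"
	"fmt"
)

// memGraph is a toy in-memory graph illustrating the CRUD layer.
// Hypothetical API; the real internal/graph package may differ.
type memGraph struct {
	nodes map[string]map[string]string // node ID -> properties
	edges map[string][]string          // from-node -> adjacent node IDs
}

func newMemGraph() *memGraph {
	return &memGraph{
		nodes: map[string]map[string]string{},
		edges: map[string][]string{},
	}
}

func (g *memGraph) AddNode(id string, props map[string]string) {
	g.nodes[id] = props
}

// AddEdge rejects edges to unknown nodes, keeping the graph consistent.
func (g *memGraph) AddEdge(from, to string) error {
	if _, ok := g.nodes[from]; !ok {
		return errors.New("unknown from-node: " + from)
	}
	if _, ok := g.nodes[to]; !ok {
		return errors.New("unknown to-node: " + to)
	}
	g.edges[from] = append(g.edges[from], to)
	return nil
}

func (g *memGraph) Neighbors(id string) []string { return g.edges[id] }

func main() {
	g := newMemGraph()
	g.AddNode("agent:planner", map[string]string{"role": "planning"})
	g.AddNode("msg:m-1", map[string]string{"text": "split the task"})
	if err := g.AddEdge("agent:planner", "msg:m-1"); err != nil {
		panic(err)
	}
	fmt.Println(g.Neighbors("agent:planner")) // [msg:m-1]
}
```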
- KafScale (`github.com/scalytics/platform`) — Go infrastructure patterns, Makefile, Docker, CI/CD
- KafClaw (`github.com/kamir/KafClaw`) — Agent skills system, SKILL.md manifests, Jekyll docs
Each brain tool is defined as a skill in `skills/brain_*/SKILL.md`:

- `brain_search` — Semantic search across the knowledge graph
- `brain_recall` — Load accumulated agent context
- `brain_capture` — Write insights/decisions into the brain
- `brain_recent` — Browse recent activity
- `brain_patterns` — Surface recurring themes and connections
- `brain_reflect` — Trigger on-demand reflection cycle
- `brain_feedback` — Submit human feedback on reflection cycles