Copyright (c) 2026 Cisuregen. All rights reserved.
| Version | Supported |
|---|---|
| 0.5.x | ✅ |
| 0.4.x | ✅ |
| < 0.4 | ❌ |
We take security vulnerabilities seriously. If you discover a security issue in CARF, please report it responsibly through the channels below.
- Email: [email protected]
- Response SLA: Acknowledgment within 48 hours
- Subject line: `[CARF Security] <brief description>`
Your report should include:
- Description of the vulnerability
- Steps to reproduce (proof of concept if possible)
- Affected component(s) and version(s)
- Potential impact assessment
- Any suggested fixes (optional)
- Do NOT create a public GitHub issue for security vulnerabilities
- Do NOT exploit the vulnerability beyond proof of concept
- Do NOT access or modify other users' data
- Do NOT perform denial-of-service testing on shared infrastructure
| Phase | Timeline |
|---|---|
| Acknowledgment | Within 48 hours |
| Initial assessment | Within 7 business days |
| Fix development | Within 30 days (critical), 90 days (other) |
| Coordinated disclosure | After fix is released or 90 days, whichever comes first |
We follow coordinated disclosure practices and will credit reporters in our security changelog unless anonymity is requested.
The following components are in scope for security reports:
- CARF backend (`src/`): API endpoints, middleware, authentication
- Guardian policy engine (`src/workflows/guardian.py`): Policy bypass, escalation bypass
- CSL policy service (`src/services/csl_policy_service.py`): Rule evaluation, constraint bypass
- Data handling (`src/services/data_loader.py`, `src/api/routers/datasets.py`): Data exfiltration, injection
- Causal/Bayesian engines (`src/services/causal.py`, `src/services/bayesian.py`): Model poisoning, adversarial inputs
- Authentication & authorization (`src/api/middleware.py`): Auth bypass, privilege escalation
- CYNEPIC Cockpit (`carf-cockpit/src/`): XSS, CSRF, client-side injection
- Configuration files (`config/`): Policy tampering, unsafe defaults
- Third-party services (Neo4j, Kafka, OPA) — report to their maintainers
- LLM provider APIs (OpenAI, Anthropic, etc.) — report to the provider
- Demo/test data (`demo/data/`) — synthetic data, no real PII
- Development tooling (linters, test runners)
| Mode | Authentication | Rate Limiting | CORS |
|---|---|---|---|
| RESEARCH | None | Disabled | * (open) |
| STAGING | API Key (Bearer) | 300 req/min | * |
| PRODUCTION | API Key (Bearer) | 120 req/min | Restricted origins |
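The STAGING and PRODUCTION rows above rely on a bearer API key. A minimal sketch of such a check (the function name is illustrative, not CARF's actual middleware; it assumes the key configured via `CARF_API_KEY` is available as a string):

```python
import hmac
from typing import Optional

def is_authorized(authorization_header: Optional[str], expected_key: str) -> bool:
    """Validate an 'Authorization: Bearer <key>' header against the configured key.

    hmac.compare_digest performs a constant-time comparison, which avoids
    leaking information about the shared key through response timing.
    """
    if not authorization_header or not authorization_header.startswith("Bearer "):
        return False
    presented = authorization_header[len("Bearer "):]
    return hmac.compare_digest(presented.encode(), expected_key.encode())
```

Constant-time comparison matters here because a single shared key (see the known limitations below) is an attractive target for timing attacks.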
- Guardian Policy Engine: All actions evaluated against safety policies before execution
- CSL-Core Verification: Formal constraint verification with fail-closed safety
- Rate Limiting: Per-IP sliding window in STAGING/PRODUCTION
- Input Validation: Pydantic schema enforcement on all API inputs
- Audit Trail: Optional Kafka-based decision logging
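The per-IP sliding-window rate limiting mentioned above can be sketched as follows (an illustrative implementation, not CARF's actual middleware; class and parameter names are hypothetical):

```python
import time
from collections import defaultdict, deque
from typing import Optional

class SlidingWindowLimiter:
    """Per-client sliding-window rate limiter: at most `limit` requests
    per `window` seconds for each client key (e.g. an IP address)."""

    def __init__(self, limit: int, window: float = 60.0):
        self.limit = limit
        self.window = window
        self._hits = defaultdict(deque)  # client -> timestamps of recent requests

    def allow(self, client: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self._hits[client]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit -> caller returns HTTP 429
        q.append(now)
        return True
```

A sliding window avoids the burst-at-boundary problem of fixed-window counters, which is one plausible reason the staging limit needed raising to 300 req/min.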
- RESEARCH mode has no authentication — intended for local development only
- API key authentication uses a single shared key, not per-user tokens
- No role-based access control — all authenticated users have equal access
- Model artifacts (`models/*.pkl`) are stored as Python pickles — deserialize only from trusted sources
- Uploaded datasets are stored locally without encryption at rest
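Because pickle deserialization can execute arbitrary code, one defense-in-depth mitigation is an allowlist-restricted unpickler. This is a sketch (the allowlist contents are hypothetical; pickle remains fundamentally unsafe for untrusted input, so this narrows the attack surface rather than eliminating it):

```python
import io
import pickle

class AllowlistUnpickler(pickle.Unpickler):
    """Refuse to resolve any global outside an explicit allowlist."""

    ALLOWED = {
        ("builtins", "dict"),
        ("builtins", "list"),
        # Extend with the exact (module, name) pairs your model
        # artifacts legitimately need, and nothing more.
    }

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def safe_loads(data: bytes):
    """Unpickle bytes, blocking any global not on the allowlist."""
    return AllowlistUnpickler(io.BytesIO(data)).load()
```

For genuinely untrusted artifacts, a format without code execution (e.g. plain JSON or a weights-only serialization) is the safer choice.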
- Never commit API keys or credentials to version control
- Use `.env` files locally (gitignored) and secure secret management in production
- Rotate `CARF_API_KEY` regularly
- Set unique keys per environment (dev, staging, production)
```
# Required for STAGING/PRODUCTION
CARF_API_KEY=<strong-random-key>
CARF_CORS_ORIGINS=https://your-domain.com
```
- Run behind a reverse proxy (nginx, Traefik) in production
- Enable HTTPS/TLS for all connections
- Restrict CORS to your specific domain(s)
- Consider WAF (Web Application Firewall) for public deployments
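A `<strong-random-key>` should come from a cryptographic source rather than being hand-chosen. One standard way to generate one (illustrative; the function name is not part of CARF):

```python
import secrets

def generate_api_key(nbytes: int = 32) -> str:
    """Return a URL-safe random key carrying `nbytes` of entropy
    (32 bytes yields a 43-character string)."""
    return secrets.token_urlsafe(nbytes)
```

`secrets` (unlike `random`) is designed for security-sensitive tokens; 32 bytes (256 bits) is comfortably beyond brute-force range.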
- Do not upload sensitive personal data (PII) to shared instances
- Consider encryption at rest for datasets in production
- Implement access controls for multi-tenant deployments
- Audit data access through Kafka audit trail
- CARF uses LLMs for routing and context assembly only
- Deterministic engines (causal, Bayesian) do not call LLMs
- Review LLM outputs before acting on high-stakes recommendations
- Monitor for prompt injection attempts in user queries
- Consider input sanitization for user-facing deployments
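Monitoring for prompt injection can start with a simple phrase-based screen. The sketch below is a naive heuristic useful for logging and alerting only, not a security boundary; the patterns are illustrative and real injection attempts are far more varied:

```python
import re

# Illustrative phrases only; extend from observed attack traffic.
SUSPICIOUS_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"ignore (all )?(previous|prior) instructions",
        r"you are now",
        r"system prompt",
        r"disregard .* (rules|policies)",
    )
]

def flag_possible_injection(user_query: str) -> bool:
    """Return True if the query matches a known injection phrase."""
    return any(p.search(user_query) for p in SUSPICIOUS_PATTERNS)
```

Flagged queries are candidates for audit logging or human review rather than automatic rejection, since false positives on legitimate analytical questions are likely.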
- Review Guardian policies before deployment (`config/policies.yaml`)
- Use OPA (Open Policy Agent) for complex policy evaluation in production
- Enable Kafka audit trail for compliance tracking
- Implement human approval workflows for high-risk actions via HumanLayer
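The fail-closed behavior mentioned under CSL-Core Verification can be sketched as follows (a minimal illustration of the pattern; the callable shape is hypothetical and CARF's Guardian/CSL interfaces may differ):

```python
def evaluate_fail_closed(policy_checks, action: dict) -> bool:
    """Allow an action only if every policy check explicitly passes.

    Fail-closed means any non-True result *or any evaluation error*
    denies the action: a broken policy backend must never silently allow.
    `policy_checks` is a list of callables taking the action dict and
    returning True to pass.
    """
    for check in policy_checks:
        try:
            if check(action) is not True:
                return False
        except Exception:
            # An erroring policy denies rather than being skipped.
            return False
    return True
```

The key design choice is the `except` branch: fail-open systems skip checks they cannot evaluate, which turns an outage in the policy service into a policy bypass.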
- Added context-aware risk level checks in Guardian
- Added budget limit enforcement in proposed actions
- Added `budget_transfer`, `contract_sign`, `data_export`, `data_transfer`, `data_anonymize` to mandatory escalation list
- Improved financial risk scoring with critical severity detection
- Enhanced CORS middleware ordering for consistent header delivery
- Increased staging rate limit to 300 req/min to prevent false 429s
- Added deployment profile system (RESEARCH / STAGING / PRODUCTION)
- Added API key authentication middleware
- Added rate limiting middleware
- Added CSL-Core formal policy verification
- Added federated governance policy service
- Initial security documentation
- Environment variable pattern for secrets
- Gitignore patterns for sensitive files
If you discover what appears to be proprietary information (trained model weights, calibration data, production threshold values, internal scoring matrices) that has been inadvertently committed to this repository, please report it to [email protected] rather than discussing it publicly.
Cisuregen treats certain implementation details as trade secrets under EU Directive 2016/943. Responsible reporting of inadvertent disclosures is appreciated and will be acknowledged.
Thank you for helping keep CARF secure.