Kentra supports centralized logging with Fluent Bit and Loki.

In a standard Kentra installation, logs are written directly to the container's default output (stdout/stderr). To centralize the logs from all commands, Kentra uses Fluent Bit + Loki. This setup aggregates all logs in a single location, making them much easier to monitor and visualize through the dashboard.
When a CustomResource is created:

- If `debug: false` (default):
  - The enumeration job redirects its output to `/logs/job.log`
  - A Fluent Bit sidecar monitors the file `/logs/job.log`
  - Fluent Bit sends the logs to Loki with the following labels:
    - `job`: name of the Enumeration
    - `namespace`: namespace where the pod is running
    - `tool`: type of tool used (nmap, nikto, etc.)
    - `cluster`: name of the cluster (configured in the Secret)
- If `debug: true`:
  - The enumeration job writes directly to stdout
  - No sidecar is added
  - Logs are available via `kubectl logs`
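Conceptually, in the `debug: false` case the operator injects the sidecar into the job's pod and shares the log file through a volume. A minimal sketch of what the generated pod might look like (image names and the volume layout are illustrative assumptions, not the operator's actual output; the sidecar container name `fluent-bit-sidecar` matches the one used in the troubleshooting commands below):

```yaml
# Illustrative sketch only: the real pod spec is generated by the Kentra operator.
apiVersion: v1
kind: Pod
metadata:
  name: nmap-example-job
spec:
  containers:
    - name: enumeration              # runs the tool; output redirected to /logs/job.log
      image: kentra/nmap:latest      # assumed image name
      volumeMounts:
        - name: logs
          mountPath: /logs
    - name: fluent-bit-sidecar       # tails /logs/*.log and ships to Loki
      image: fluent/fluent-bit:latest
      volumeMounts:
        - name: logs
          mountPath: /logs
          readOnly: true
        - name: fluent-bit-config
          mountPath: /fluent-bit/etc
  volumes:
    - name: logs
      emptyDir: {}                   # shared between the job and the sidecar
    - name: fluent-bit-config
      configMap:
        name: fluent-bit-config
```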
The `loki-credentials` Secret contains the credentials and configuration for Loki:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: loki-credentials
  namespace: kentra-system
type: Opaque
stringData:
  loki-host: "loki.k3s.chungo.home"
  loki-port: "443"
  loki-tls: "true"
  loki-tls-verify: "false"
  loki-tenant-id: "1"
  loki-user: "root-user"
  loki-password: "supersecretpassword"
  cluster-name: "k3s"
```

The `fluent-bit-config` ConfigMap contains the Fluent Bit configuration:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: kentra-system
data:
  fluent-bit.conf: |
    [SERVICE]
        Flush             5
        Daemon            Off
        Log_Level         info
        Parsers_File      parsers.conf

    [INPUT]
        Name              tail
        Path              /logs/*.log
        Read_from_Head    true
        Refresh_Interval  5
        Tag               kentra.job.*

    [FILTER]
        Name              modify
        Match             *
        Add               cluster ${CLUSTER_NAME}
        Add               component job
        Add               app kentra

    [OUTPUT]
        Name              loki
        Match             *
        host              ${LOKI_HOST}
        port              ${LOKI_PORT}
        tls               ${LOKI_TLS}
        tls.verify        ${LOKI_TLS_VERIFY}
        tenant_id         ${LOKI_TENANT_ID}
        http_user         ${LOKI_USER}
        http_passwd       ${LOKI_PASSWORD}
        labels            job=${JOB_NAME},namespace=${NAMESPACE},tool=${TOOL_TYPE},cluster=${CLUSTER_NAME}
        label_keys        job,namespace,tool,cluster
```

Apply the configuration files:
```shell
kubectl apply -f config/default/loki-secret.yaml
kubectl apply -f config/default/fluent-bit-config.yaml
```

Then create an `Enumeration` resource, for example:

```yaml
apiVersion: kentra.sh/v1alpha1
kind: Enumeration
metadata:
  name: nmap-example
  namespace: default
spec:
  target: "192.168.1.0/24"
  tool: nmap
  debug: false # Enables the Fluent Bit sidecar
  periodic: false
```

After execution, you can search logs with queries like:
```logql
{job="nmap-example", tool="nmap", namespace="default"}
```

Or filter for errors:

```logql
{cluster="k3s", app="kentra"} |= "error"
```
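If you prefer the command line over the dashboard, the same queries can be run with Grafana's `logcli`. The endpoint and credentials below are examples taken from the Secret above; adjust them to your environment:

```shell
# Point logcli at Loki; values must match the loki-credentials Secret.
export LOKI_ADDR=https://loki.k3s.chungo.home
export LOKI_USERNAME=root-user
export LOKI_PASSWORD=supersecretpassword

# Fetch the last hour of logs for the example Enumeration.
logcli query --limit=100 --since=1h '{job="nmap-example", tool="nmap", namespace="default"}'
```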
If logs do not appear in Loki, check the following:

- Verify that the Secret `loki-credentials` exists and has the correct values:

  ```shell
  kubectl describe secret loki-credentials -n kentra-system
  ```

- Verify that the ConfigMap `fluent-bit-config` exists:

  ```shell
  kubectl describe configmap fluent-bit-config -n kentra-system
  ```

- Check the logs of the Fluent Bit sidecar:

  ```shell
  kubectl logs <pod> -c fluent-bit-sidecar
  ```

- Verify that `loki-host` is reachable from the cluster
- Verify that `loki-port` is correct
- If `loki-tls` is `true`, make sure the certificates are valid
- If you use a self-signed certificate, leave `loki-tls-verify` set to `false`
- Make sure that `debug: false` is set in your Enumeration
- Verify that the enumeration job is creating the file `/logs/job.log`
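The last two points can be confirmed by inspecting the running pod directly. These commands are a sketch; the container name `enumeration` is an assumption, so substitute the name from your pod spec:

```shell
# List the containers in the pod; the Fluent Bit sidecar should appear here.
kubectl get pod <pod> -o jsonpath='{.spec.containers[*].name}'

# Check that the job is actually writing the log file
# ("enumeration" is an assumed container name).
kubectl exec <pod> -c enumeration -- ls -l /logs/job.log
```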
The Fluent Bit sidecar receives the following environment variables:
- `LOKI_HOST`: Loki server host
- `LOKI_PORT`: Loki server port
- `LOKI_TLS`: whether to use TLS (`"true"` or `"false"`)
- `LOKI_TLS_VERIFY`: verify the TLS certificate (`"true"` or `"false"`)
- `LOKI_TENANT_ID`: tenant ID in Loki
- `LOKI_USER`: username for Loki
- `LOKI_PASSWORD`: password for Loki
- `CLUSTER_NAME`: name of the cluster (from the Secret)
- `NAMESPACE`: namespace of the pod
- `JOB_NAME`: name of the Enumeration
- `TOOL_TYPE`: type of tool used
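For reference, variables like these can be wired into a sidecar from the Secret and from the Kubernetes downward API. The following is a hedged sketch of a container `env` section, not the operator's actual generated spec:

```yaml
# Sketch only: the Kentra operator generates the real container spec.
env:
  - name: LOKI_HOST
    valueFrom:
      secretKeyRef:
        name: loki-credentials
        key: loki-host
  - name: CLUSTER_NAME
    valueFrom:
      secretKeyRef:
        name: loki-credentials
        key: cluster-name
  - name: NAMESPACE
    valueFrom:
      fieldRef:                      # downward API: the pod's own namespace
        fieldPath: metadata.namespace
  # JOB_NAME and TOOL_TYPE would be set by the operator from the
  # Enumeration spec, e.g.:
  - name: JOB_NAME
    value: nmap-example
  - name: TOOL_TYPE
    value: nmap
```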
Use the following values in the Helm installation:
```yaml
loki:
  enabled: true
  minio:
    enabled: true
    persistence:
      size: 10Gi
  singleBinary:
    persistence:
      enabled: true
      size: 5Gi
```
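How these values are applied depends on your setup. A hedged sketch, assuming the values above are saved to a file named `loki-values.yaml` (the release name and chart reference are placeholders to substitute):

```shell
# Example only: substitute your release name and chart reference.
helm upgrade --install <release> <chart> \
  --namespace kentra-system \
  -f loki-values.yaml
```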