A Kubernetes sidecar that watches HTTPRoute and/or Service resources and dynamically generates a Pangolin blueprint YAML for the newt tunnel daemon.
The sidecar runs alongside newt in the same pod, sharing a volume. It watches HTTPRoutes referencing a configured gateway and/or Services with the appropriate annotations, and writes /etc/newt/blueprint.yaml whenever resources change. newt detects the file change and updates the tunnel accordingly.
```
┌─────────────────────────────────────────┐
│                Newt Pod                 │
│  ┌────────────────┐  ┌───────────────┐  │
│  │  newt-sidecar  │  │     newt      │  │
│  │  (watches      │  │  (reads       │  │
│  │   HTTPRoutes + │  │   blueprint)  │  │
│  │   Services)    │  │               │  │
│  └───────┬────────┘  └──────┬────────┘  │
│          │   emptyDir vol   │           │
│          └──► blueprint ◄───┘           │
│                 .yaml                   │
└─────────────────────────────────────────┘
```
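For orientation, a generated blueprint entry might look roughly like the sketch below. This is illustrative only: the exact blueprint schema is defined by Pangolin/newt, and the field names here simply mirror the `PublicResource` examples later in this README.

```yaml
# Rough sketch of a generated /etc/newt/blueprint.yaml entry (hypothetical shape)
public-resources:
  my-app:
    full-domain: app.example.com
    ssl: true
    targets:
      - hostname: envoy-gateway.network.svc.cluster.local
        method: https
        port: 443
```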
Every flag can also be set via an environment variable with the `NEWTSC_` prefix. For example, `--site-id` becomes `NEWTSC_SITE_ID`. CLI flags take precedence over environment variables.
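In a pod spec, the two forms look like this (the site ID value is illustrative):

```yaml
# Two equivalent ways to set the site ID
args:
  - --site-id=my-site        # CLI flag; wins if both are set
env:
  - name: NEWTSC_SITE_ID     # environment-variable form of --site-id
    value: my-site
```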
| Flag | Default | Description |
|---|---|---|
| `--gateway-name` | `""` | Gateway name to filter HTTPRoutes. When omitted the HTTPRoute controller is disabled |
| `--gateway-namespace` | `""` | Gateway namespace (empty = any) |
| `--namespace` | `""` | Watch namespace (empty = all) |
| `--output` | `/etc/newt/blueprint.yaml` | Output blueprint file path |
| `--site-id` | `""` | Pangolin site nice ID (required) |
| `--target-hostname` | `""` | Backend gateway hostname (required for HTTPRoute mode) |
| `--target-port` | `443` | Backend gateway port |
| `--target-method` | `https` | Backend method (`http`/`https`/`h2c`) |
| `--deny-countries` | `""` | Comma-separated country codes to deny |
| `--ssl` | `true` | Default SSL setting for http/https resources |
| `--annotation-prefix` | `newt-sidecar` | Annotation prefix for per-resource overrides |
| `--enable-service` | `false` | Enable Service discovery (annotation-mode: opt-in via `newt-sidecar/enabled: "true"`) |
| `--auto-service` | `false` | Enable Service discovery (auto-mode: opt-out via `newt-sidecar/enabled: "false"`) |
| `--all-ports` | `false` | Expose all TCP/UDP ports of a Service as individual blueprint entries (global default, overridable per Service via the `newt-sidecar/all-ports` annotation) |
| `--auth-sso-roles` | `""` | Default comma-separated Pangolin roles for SSO-enabled resources (empty = none) |
| `--auth-sso-users` | `""` | Default comma-separated user e-mails for SSO-enabled resources (empty = none) |
| `--auth-sso-idp` | `0` | Default Pangolin IdP ID for auto-login-idp (0 = not set) |
| `--auth-whitelist-users` | `""` | Default comma-separated user e-mails for whitelist-users (empty = none) |
Both `--enable-service` and `--auto-service` activate the Service controller. The difference is the default behaviour: in annotation-mode a Service must explicitly opt in; in auto-mode every Service is processed unless explicitly excluded.

There is deliberately no `--auth-sso` global flag. SSO must be enabled explicitly per resource via the `newt-sidecar/auth-sso` annotation so that resources remain public unless opted in.

There are deliberately no global flags for `--auth-pincode`, `--auth-password`, or `--auth-basic-auth-*`. Sensitive auth values must be stored in a Kubernetes Secret and referenced via the `newt-sidecar/auth-secret` annotation (see Auth via Kubernetes Secret).
Add these to an HTTPRoute to override per-resource behaviour:
| Annotation | Description |
|---|---|
| `newt-sidecar/enabled: "false"` | Skip this HTTPRoute entirely |
| `newt-sidecar/name: "Custom Name"` | Override the resource display name |
| `newt-sidecar/ssl: "false"` | Disable SSL for this resource |
| `newt-sidecar/host-header: "custom.internal"` | Set the host-header field on the Pangolin resource |
| `newt-sidecar/headers: '[{"name":"X-Foo","value":"bar"}]'` | JSON array of extra headers to pass to Pangolin |
| `newt-sidecar/auth-sso: "true"` | Enable SSO authentication |
| `newt-sidecar/auth-sso-roles: "Member,Developer"` | Comma-separated Pangolin roles allowed (overrides `--auth-sso-roles`) |
| `newt-sidecar/auth-sso-users: "[email protected]"` | Comma-separated user e-mails allowed (overrides `--auth-sso-users`) |
| `newt-sidecar/auth-sso-idp: "1"` | Pangolin IdP ID for auto-login-idp; skips the Pangolin login page and redirects directly to the IdP (overrides `--auth-sso-idp`) |
| `newt-sidecar/auth-whitelist-users: "[email protected]"` | Comma-separated user e-mails for whitelist-users (overrides `--auth-whitelist-users`) |
| `newt-sidecar/auth-secret: "my-secret"` | Name of a Kubernetes Secret in the same namespace containing sensitive auth values (see Auth via Kubernetes Secret) |
| `newt-sidecar/tls-server-name: "backend.internal"` | Override the SNI name for the backend TLS connection (defaults to the HTTPRoute hostname) |
| `newt-sidecar/maintenance-enabled: "true"` | Enable the Pangolin maintenance block |
| `newt-sidecar/maintenance-type: "forced"` | Maintenance type: `forced` or `automatic` |
| `newt-sidecar/maintenance-title: "Down for maintenance"` | Maintenance page title |
| `newt-sidecar/maintenance-message: "Back soon"` | Maintenance page message |
| `newt-sidecar/maintenance-estimated-time: "2h"` | Estimated maintenance duration |
| `newt-sidecar/target-path: "/api"` | Path prefix, exact path, or regex pattern for the target |
| `newt-sidecar/target-path-match: "prefix"` | Path matching type: `prefix`, `exact`, or `regex` |
| `newt-sidecar/target-rewrite-path: "/"` | Path to rewrite the request to |
| `newt-sidecar/target-rewrite-match: "stripPrefix"` | Rewrite matching type: `exact`, `prefix`, `regex`, or `stripPrefix` |
| `newt-sidecar/target-priority: "200"` | Target priority for load balancing (1–1000, default 100) |
| `newt-sidecar/target-internal-port: "8080"` | Internal port mapping on the target (1–65535) |
| `newt-sidecar/target-healthcheck: '{"hostname":"...","port":8080}'` | JSON health check configuration for the target (see Health check annotation) |
| `newt-sidecar/rules: '[{"action":"deny","match":"ip","value":"10.0.0.0/8"}]'` | JSON array of custom access control rules (see Custom rules) |
| `newt-sidecar/target-enabled: "true"` | Enable or disable the target: `"true"`/`"1"` or `"false"`/`"0"` |
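A minimal HTTPRoute carrying a few of these annotations might look like the sketch below. The route name, hostname, backend, and gateway reference are placeholders, not values from this project:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: myapp
  namespace: default
  annotations:
    newt-sidecar/name: "My App"
    newt-sidecar/auth-sso: "true"
    newt-sidecar/auth-sso-roles: "Member"
spec:
  parentRefs:
    - name: <your-gateway>   # must match --gateway-name
  hostnames:
    - myapp.example.com
  rules:
    - backendRefs:
        - name: myapp
          port: 8080
```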
The auto-login-idp value is the internal numeric ID Pangolin assigns to each configured Identity Provider. You can find it in two ways:

- Pangolin UI: navigate to Server Admin → Identity Providers, click an IdP to edit it, and read the number from the URL: `.../admin/idp/1/general`
- Pangolin API: `GET /api/v1/idp` returns `idpId`, `name`, and `type` for every configured IdP
Services can be exposed in two modes depending on whether `newt-sidecar/full-domain` is set.

Without `newt-sidecar/full-domain` (TCP/UDP mode), Pangolin opens a raw TCP or UDP port and tunnels directly to the cluster-internal Service DNS, with no Envoy Gateway hop.
| Annotation | Default | Description |
|---|---|---|
| `newt-sidecar/enabled` | — | `"true"` to opt in (annotation-mode); `"false"` to opt out (auto-mode) |
| `newt-sidecar/all-ports` | `--all-ports` flag | `"true"` to expose all ports as individual entries; `"false"` to force single-port mode. Overrides the global `--all-ports` flag |
| `newt-sidecar/port` | auto | Port number or name to expose (single-port mode only). Required when the Service has more than one port and none is named `http` |
| `newt-sidecar/protocol` | from spec | Tunnel protocol override: `tcp` or `udp` (single-port mode only). Defaults to the protocol defined in the ServicePort spec |
| `newt-sidecar/name` | `<svc> <port>` | Override the resource display name (single-port mode only) |
Set `newt-sidecar/full-domain` to switch to HTTP mode. Pangolin exposes the Service at the given public domain over HTTPS. The internal target is the cluster-internal Service DNS name, with no Envoy Gateway hop. HTTP mode is not supported in all-ports mode.
| Annotation | Default | Description |
|---|---|---|
| `newt-sidecar/enabled` | — | `"true"` to opt in (annotation-mode); `"false"` to opt out (auto-mode) |
| `newt-sidecar/full-domain` | — | Public domain to expose (e.g. `app.example.com`). Activates HTTP mode |
| `newt-sidecar/port` | auto | Port number or name to expose |
| `newt-sidecar/method` | `http` | Internal protocol to reach the Service: `http`, `https`, or `h2c` |
| `newt-sidecar/ssl` | `--ssl` flag | Enable SSL on the Pangolin resource |
| `newt-sidecar/name` | `<svc> <port>` | Override the resource display name |
| `newt-sidecar/host-header` | — | Set the host-header field on the Pangolin resource |
| `newt-sidecar/headers` | — | JSON array of extra headers: `[{"name":"X-Foo","value":"bar"}]` |
| `newt-sidecar/auth-sso` | — | `"true"` to enable SSO authentication |
| `newt-sidecar/auth-sso-roles` | `--auth-sso-roles` | Comma-separated Pangolin roles (overrides global default) |
| `newt-sidecar/auth-sso-users` | `--auth-sso-users` | Comma-separated user e-mails (overrides global default) |
| `newt-sidecar/auth-sso-idp` | `--auth-sso-idp` | Pangolin IdP ID for auto-login-idp (overrides global default) |
| `newt-sidecar/auth-whitelist-users` | `--auth-whitelist-users` | Comma-separated user e-mails for whitelist-users (overrides global default) |
| `newt-sidecar/auth-secret` | — | Name of a Kubernetes Secret containing sensitive auth values (see below) |
| `newt-sidecar/tls-server-name` | `FullDomain` | Override the SNI name for the backend TLS connection |
| `newt-sidecar/maintenance-enabled` | — | `"true"` to enable the Pangolin maintenance block |
| `newt-sidecar/maintenance-type` | — | `forced` or `automatic` |
| `newt-sidecar/maintenance-title` | — | Maintenance page title |
| `newt-sidecar/maintenance-message` | — | Maintenance page message |
| `newt-sidecar/maintenance-estimated-time` | — | Estimated maintenance duration |
| `newt-sidecar/target-path` | — | Path prefix, exact path, or regex pattern for the target |
| `newt-sidecar/target-path-match` | — | Path matching type: `prefix`, `exact`, or `regex` |
| `newt-sidecar/target-rewrite-path` | — | Path to rewrite the request to |
| `newt-sidecar/target-rewrite-match` | — | Rewrite matching type: `exact`, `prefix`, `regex`, or `stripPrefix` |
| `newt-sidecar/target-priority` | `100` | Target priority for load balancing (1–1000) |
| `newt-sidecar/target-internal-port` | — | Internal port mapping on the target (1–65535) |
| `newt-sidecar/target-healthcheck` | — | JSON health check config for the target (see Health check annotation) |
| `newt-sidecar/rules` | — | JSON array of custom access control rules (see Custom rules) |
| `newt-sidecar/target-enabled` | — | Enable or disable the target: `"true"`/`"1"` or `"false"`/`"0"` |
Single-port mode (default, or `newt-sidecar/all-ports: "false"`):

When `newt-sidecar/port` is not set the sidecar selects a port automatically:

- Service has exactly one port → use it
- Service has a port named `http` → use it
- Otherwise the Service is skipped with a warning

All-ports mode (`--all-ports` flag or `newt-sidecar/all-ports: "true"`):

Every port defined in the Service spec is exposed as a separate blueprint entry. The protocol is read from the ServicePort spec (TCP → `tcp`, UDP → `udp`). The `newt-sidecar/port`, `newt-sidecar/protocol`, and `newt-sidecar/name` annotations are ignored in this mode. HTTP mode (`newt-sidecar/full-domain`) is not supported in all-ports mode.
The per-Service annotation always takes precedence over the global flag, so you can opt individual Services in or out regardless of the global default.
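For example, with the global `--all-ports` flag set, a single Service can still opt back into single-port mode (the port value here is illustrative):

```yaml
metadata:
  annotations:
    newt-sidecar/enabled: "true"
    newt-sidecar/all-ports: "false"   # overrides the global --all-ports flag
    newt-sidecar/port: "5432"         # pick one port explicitly
```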
The `newt-sidecar/target-healthcheck` annotation accepts a JSON object matching the Pangolin healthcheck spec:

```yaml
annotations:
  newt-sidecar/full-domain: "app.example.com"
  newt-sidecar/target-healthcheck: |
    {
      "hostname": "app.default.svc.cluster.local",
      "port": 8080,
      "enabled": true,
      "path": "/health",
      "interval": 30,
      "timeout": 5,
      "method": "GET",
      "status": 200
    }
```

All fields are optional except `hostname` and `port`. The full schema:
| Field | Type | Description |
|---|---|---|
| `hostname` | string | Hostname to health-check |
| `port` | number | Port to health-check |
| `enabled` | boolean | Whether health checking is active (default `true`) |
| `path` | string | HTTP path to request |
| `scheme` | string | Protocol scheme |
| `mode` | string | Health check mode (default `http`) |
| `interval` | number | Seconds between checks (default 30) |
| `unhealthy-interval` | number | Seconds between checks when unhealthy (default 30) |
| `timeout` | number | Timeout in seconds (default 5) |
| `headers` | array | Extra headers: `[{"name":"…","value":"…"}]` |
| `follow-redirects` | boolean | Whether to follow redirects (default `true`) |
| `method` | string | HTTP method (default `GET`) |
| `status` | number | Expected HTTP status code |
The `{prefix}/rules` annotation accepts a JSON array of custom access control rules. Rules are evaluated in priority order (lower number = higher priority). Each rule has:

- `action`: `allow`, `deny`, or `pass`
- `match`: `cidr`, `ip`, `path`, or `country`
- `value`: the match value (CIDR, IP, path pattern, or country code)
- `priority` (optional): defaults to 100 if not specified
Example:
```yaml
annotations:
  newt-sidecar/rules: '[{"action":"deny","match":"ip","value":"10.0.0.0/8"},{"action":"allow","match":"path","value":"/admin","priority":10}]'
```

Valid combinations:
| `match` | value example | Description |
|---|---|---|
| `cidr` | `10.0.0.0/8` | CIDR block |
| `ip` | `192.168.1.1` | Single IP address |
| `path` | `/admin` | Path pattern |
| `country` | `RU` | Country code (2 letters) |
The `{prefix}/rules` annotation is merged with the `--deny-countries` flag rules: annotation rules come first, then country-deny rules are appended.
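As a sketch, running with `--deny-countries=RU,CN` alongside the rules annotation in the example above would produce a combined list in this order (the output shape is illustrative, not the literal blueprint format):

```yaml
# Annotation rules first, then country-deny rules appended
rules:
  - action: deny
    match: ip
    value: 10.0.0.0/8
  - action: allow
    match: path
    value: /admin
    priority: 10
  - action: deny
    match: country
    value: RU
  - action: deny
    match: country
    value: CN
```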
Both resource types — private (Pangolin client access: SSH, RDP, CIDR tunnels) and public (static HTTP tunnel entries) — can be defined as native Kubernetes resources using the CRDs shipped in the newt-sidecar Helm chart. The sidecar watches these cluster-wide and merges them into the generated blueprint.
The CRDs are installed via the newt-sidecar Helm chart (see Kubernetes deployment).
```yaml
apiVersion: newt-sidecar.home-operations.com/v1alpha1
kind: PrivateResource
metadata:
  name: cluster-pods
  namespace: network
spec:
  name: Cluster Pod Network
  mode: cidr
  destination: 10.42.0.0/16
```

Spec fields:
| Field | Type | Description |
|---|---|---|
| `name` | string | Display name in Pangolin |
| `mode` | string | Tunnel mode: `cidr`, `hostname`, etc. |
| `destination` | string | CIDR block or hostname to tunnel |
| `site` | string | Site ID override (defaults to `--site-id`) |
| `tcp-ports` | string | TCP port range to expose |
| `udp-ports` | string | UDP port range to expose |
| `disable-icmp` | bool | Disable ICMP forwarding |
| `alias` | string | Alias for the resource |
| `roles` | []string | Pangolin roles with access |
| `users` | []string | User e-mails with access |
| `machines` | []string | Machine IDs with access |
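A second sketch using `hostname` mode with port restrictions; all values are hypothetical and only the field names from the table above are assumed:

```yaml
apiVersion: newt-sidecar.home-operations.com/v1alpha1
kind: PrivateResource
metadata:
  name: nas-ssh
  namespace: network
spec:
  name: NAS SSH
  mode: hostname                # tunnel a single host instead of a CIDR
  destination: nas.internal.lan
  tcp-ports: "22"               # only expose TCP port 22
  disable-icmp: true
  roles:
    - Member
```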
PublicResource lets you define static public tunnel entries without an HTTPRoute or Service annotation. It maps directly to the Pangolin public-resources blueprint block and supports both HTTP and TCP/UDP resources — the same fields as annotation-driven resources.
TCP tunnel example (e.g. SSH):
```yaml
apiVersion: newt-sidecar.home-operations.com/v1alpha1
kind: PublicResource
metadata:
  name: forgejo-ssh
  namespace: selfhosted
spec:
  name: Forgejo SSH
  protocol: tcp
  proxyPort: 2222
  ssl: false
  targets:
    - hostname: envoy-external.network.svc.cluster.local
      port: 2222
```

HTTP tunnel example:
```yaml
apiVersion: newt-sidecar.home-operations.com/v1alpha1
kind: PublicResource
metadata:
  name: my-static-app
  namespace: network
spec:
  name: My Static App
  full-domain: app.example.com
  ssl: true
  targets:
    - hostname: app.default.svc.cluster.local
      method: http
      port: 8080
```

This is useful for resources with no gateway HTTPRoute: for example raw TCP/UDP ports, services in namespaces you do not watch for HTTPRoutes, or entries you want to manage independently of the HTTPRoute lifecycle.
Sensitive auth values — pincode, password, and basic-auth credentials — are never read from annotations. Instead, create a Kubernetes Secret in the same namespace as the resource and reference it with the newt-sidecar/auth-secret annotation.
Well-known Secret keys:
| Key | Auth field |
|---|---|
| `pincode` | `auth.pincode` (parsed as integer) |
| `password` | `auth.password` |
| `basic-auth-user` | `auth.basic-auth.user` |
| `basic-auth-password` | `auth.basic-auth.password` |
Example Secret:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: myapp-auth
  namespace: default
stringData:
  password: "s3cr3t"
```

Reference it from an HTTPRoute or Service:

```yaml
annotations:
  newt-sidecar/auth-secret: "myapp-auth"
```

The Secret may contain any subset of the well-known keys. Keys that are absent or empty are ignored.
RBAC: the sidecar's ServiceAccount needs `get` on `secrets` in each watched namespace when `auth-secret` is used. See auth-secret RBAC below.
Deploy using the dedicated newt Helm chart. The sidecar runs as a native Kubernetes sidecar (initContainer with restartPolicy: Always, requires K8s 1.29+). An emptyDir volume is shared between the sidecar and newt at /etc/newt. A wait-blueprint init container blocks newt from starting until the sidecar has written the blueprint.
The PrivateResource and PublicResource CRDs are shipped in a separate newt-sidecar chart. Install it before the main deployment.
```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: OCIRepository
metadata:
  name: newt-sidecar
spec:
  interval: 15m
  layerSelector:
    mediaType: application/vnd.cncf.helm.chart.content.v1.tar+gzip
    operation: copy
  ref:
    tag: 0.2.1
  url: oci://ghcr.io/home-operations/charts/newt-sidecar
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: newt-sidecar-crds
spec:
  chartRef:
    kind: OCIRepository
    name: newt-sidecar
  interval: 1h
```

The newt chart does not yet support custom RBAC rules, so create a ClusterRole manually. Grant access to all resource types the sidecar needs to watch:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: newt-httproute-reader
rules:
  - apiGroups:
      - gateway.networking.k8s.io
    resources:
      - httproutes
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - newt-sidecar.home-operations.com
    resources:
      - privateresources
      - publicresources
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: newt-httproute-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: newt-httproute-reader
subjects:
  - kind: ServiceAccount
    name: newt
    namespace: <your-namespace>
```

Remove `httproutes` or `services` from the rules if you are not using HTTPRoute or Service discovery respectively.
```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: OCIRepository
metadata:
  name: newt
spec:
  interval: 15m
  layerSelector:
    mediaType: application/vnd.cncf.helm.chart.content.v1.tar+gzip
    operation: copy
  ref:
    tag: 1.2.0
  url: oci://ghcr.io/home-operations/charts-mirror/newt
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: newt
spec:
  chartRef:
    kind: OCIRepository
    name: newt
  interval: 1h
  values:
    global:
      image:
        # renovate: datasource=github-releases depName=fosrl/newt
        tag: "1.10.3"
      rbac:
        clusterRole: true
      serviceAccount:
        automountServiceAccountToken: true
        name: newt
    newtInstances:
      - name: main-tunnel
        enabled: true
        replicas: 1
        auth:
          existingSecretName: newt-secret
        extraEnv:
          BLUEPRINT_FILE: /etc/newt/blueprint.yaml
        extraVolumeMounts:
          - name: blueprint
            mountPath: /etc/newt
        extraVolumes:
          - name: blueprint
            emptyDir: {}
        initContainers:
          - name: newt-sidecar
            image: ghcr.io/home-operations/newt-sidecar:latest
            args:
              - --gateway-name=<your-gateway>
              - --target-hostname=<gateway-svc>.<namespace>.svc.cluster.local
              - --deny-countries=RU,CN,KP,IR,BY,IL
              - --enable-service
            env:
              - name: NEWTSC_SITE_ID
                valueFrom:
                  secretKeyRef:
                    name: newt-secret
                    key: NEWTSC_SITE_ID
            restartPolicy: Always
            resources:
              limits:
                memory: 128Mi
            volumeMounts:
              - name: blueprint
                mountPath: /etc/newt
          - name: wait-blueprint
            image: busybox
            command:
              - /bin/sh
              - -c
              - until test -f /etc/newt/blueprint.yaml; do sleep 1; done
            resources:
              requests:
                cpu: 10m
              limits:
                memory: 16Mi
            volumeMounts:
              - name: blueprint
                mountPath: /etc/newt
```

The Pangolin site ID is read from the `NEWTSC_SITE_ID` environment variable (sourced from a Secret) and passed to the sidecar as `--site-id`.
When any resource uses newt-sidecar/auth-secret, add a Role in each watched namespace that grants get on Secrets, and bind it to the ServiceAccount:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: newt-secret-reader
  namespace: <your-namespace>
rules:
  - apiGroups:
      - ""
    resources:
      - secrets
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: newt-secret-reader
  namespace: <your-namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: newt-secret-reader
subjects:
  - kind: ServiceAccount
    name: newt
    namespace: <your-namespace>
```

TCP tunnel (e.g. PostgreSQL):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: default
  annotations:
    newt-sidecar/enabled: "true"
spec:
  ports:
    - name: postgres
      port: 5432
```

HTTP tunnel (direct Service, no gateway hop):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
  annotations:
    newt-sidecar/enabled: "true"
    newt-sidecar/full-domain: "myapp.example.com"
    newt-sidecar/name: "My App"
spec:
  ports:
    - name: http
      port: 8080
```

HTTP tunnel with SSO (auto-login to IdP 1, role Member required):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
  annotations:
    newt-sidecar/enabled: "true"
    newt-sidecar/full-domain: "myapp.example.com"
    newt-sidecar/auth-sso: "true"
    newt-sidecar/auth-sso-roles: "Member"
    newt-sidecar/auth-sso-idp: "1"
spec:
  ports:
    - name: http
      port: 8080
```

HTTP tunnel with password auth (from a Secret):
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: myapp-auth
  namespace: default
stringData:
  password: "s3cr3t"
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
  annotations:
    newt-sidecar/enabled: "true"
    newt-sidecar/full-domain: "myapp.example.com"
    newt-sidecar/auth-secret: "myapp-auth"
spec:
  ports:
    - name: http
      port: 8080
```

All-ports TCP/UDP tunnel (e.g. expose every port of a multi-port Service):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: gameserver
  namespace: default
  annotations:
    newt-sidecar/enabled: "true"
    newt-sidecar/all-ports: "true"
spec:
  ports:
    - name: tcp-game
      port: 7777
      protocol: TCP
    - name: udp-game
      port: 7778
      protocol: UDP
```