diff --git a/docs/oss/building-features/node-renderer/container-deployment.md b/docs/oss/building-features/node-renderer/container-deployment.md index 2afe3b2529..c761d9d3f4 100644 --- a/docs/oss/building-features/node-renderer/container-deployment.md +++ b/docs/oss/building-features/node-renderer/container-deployment.md @@ -129,6 +129,106 @@ end > **Recommendation:** Start with a single container. Move to sidecar containers if you need per-process memory/CPU visibility (e.g., to diagnose OOM restarts). Separate workloads are rarely justified unless you have a specific need for independent scaling at high replica counts. +## Control Plane Deployment Shapes + +For Control Plane deployments, choose the probe target based on where the node renderer runs. Control Plane configures +probes per container. Renderer probe targets below mean `tcpSocket` or h2c-aware `exec` probes, not HTTP/1.1 `httpGet` +probes directly against the renderer. + +[Control Plane Flow](https://github.com/shakacode/control-plane-flow)'s default `rails` template models Rails as a +single-container standard workload. If you follow that template and run the renderer inside the Rails container, +configure the Rails workload's probes rather than looking for a separate node-renderer container. If you split the +renderer into its own container or workload, add renderer-specific probes there. + +### Same Rails Container Or Process Supervisor + +Set the Rails `renderer_url` to `http://localhost:3800`. The renderer can keep the default `localhost` host binding. +Probe the `rails` container's Rails health endpoint, such as `/up` on port `3000` in Rails 7.1+ or a custom endpoint in +earlier Rails versions. + +When Rails and the renderer share one container, use one combined Rails health endpoint if you need to check both +processes. For example, make the Rails readiness endpoint perform a short TCP connection check to `localhost:3800` and +return `503` if the renderer is unreachable. 
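With Rails terminating the probe, the probe block itself is a plain HTTP check against Rails, so the renderer's h2c restriction does not apply here. A sketch using the Kubernetes-style probe fields this guide reuses for Control Plane; `/up` and port `3000` are the Rails 7.1+ defaults mentioned above, so adjust both if yours differ:

```yaml
# Sketch: readiness probe for the combined Rails + renderer container.
# Probes target Rails (HTTP/1.1), not the renderer's h2c listener.
readinessProbe:
  httpGet:
    path: /up # Rails 7.1+ default health endpoint; use your custom path on earlier Rails
    port: 3000
  timeoutSeconds: 5
  periodSeconds: 5
  failureThreshold: 3
```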
+ +Because this guide covers React on Rails Pro's Node Renderer, the Rails endpoint below reads the same +`ReactOnRailsPro.configuration.renderer_url` value used for SSR requests rather than requiring a second port environment +variable. + +`config/routes.rb`: + +```ruby +# Override Rails 7.1+'s built-in /up route to add the renderer TCP check. +# If you already have custom /up logic, use a distinct path such as /healthz +# to avoid silently replacing existing health behavior. +get "up", to: "health#show" +``` + +`app/controllers/health_controller.rb`: + +```ruby +# Ruby stdlib; loaded explicitly for the URI/Socket readiness check. +require "socket" +require "uri" + +# Inherits from ActionController::Base (not ApplicationController) to avoid +# app-level authentication callbacks on unauthenticated probe requests. +class HealthController < ActionController::Base + def show + # Opens and immediately closes; raises if the renderer port is unreachable. + # A successful TCP connection means the h2c listener is bound, not that + # cluster workers are ready. Pair with the startup probe to shield liveness. + # In this same-container topology, Rails and the renderer share a network namespace. + # Probe localhost even if other deployment shapes use a service host. + # URI#port returns 80/443 defaults when omitted, so detect an explicit :port first. + # connect_timeout is supported by the Ruby versions in this guide's prerequisites. + renderer_url = ReactOnRailsPro.configuration.renderer_url + raise ArgumentError, "renderer_url not configured" if renderer_url.nil? || renderer_url.empty? + + # Missing or malformed renderer_url raises and surfaces as a 500 so configuration mistakes stay visible. + renderer_uri = URI.parse(renderer_url) + renderer_port = renderer_url.match?(/:\d+(?:[\/?#]|$)/) ? 
renderer_uri.port : 3800 + Socket.tcp("localhost", renderer_port, connect_timeout: 1) {} + head :ok + rescue SocketError, SystemCallError + head :service_unavailable + end +end +``` + +> **Topology-specific:** This same-container example always probes `localhost` and only borrows the port from +> `renderer_url`. Do not reuse it as-is for sidecar or separate-workload topologies where the renderer runs behind a +> different host. +> +> Configuration mistakes such as a missing or malformed `renderer_url` are allowed to surface as 500 errors so they are +> visible in logs and alerting. Only renderer reachability failures are converted to `503`. + +### Separate Container In The Same Workload + +Keep the Rails `renderer_url` as `http://localhost:3800`. Use `0.0.0.0` for the renderer `host` when you rely on +`tcpSocket` probes; `localhost` is fine for `exec`-only probes. + +Add h2c-aware `exec` probes against `localhost:3800` or `tcpSocket` probes on the renderer port. For `tcpSocket`, bind the +renderer to `0.0.0.0` because Kubernetes and platform TCP probes originate from outside the container and connect to the +pod or workload IP, not container-local loopback. `exec` probes run a command inside the container, so `localhost` works +there. + +> **Probe YAML:** For Control Plane readiness and liveness fields, reuse the individual `exec` or `tcpSocket` probe blocks +> from [Kubernetes Sidecar Manifest](#kubernetes-sidecar-manifest). Attach them to the node-renderer container in this +> workload instead of to a separate Kubernetes pod spec. + +### Separate Node-Renderer Workload + +Set the Rails `renderer_url` to `http://<workload-name>.<gvc-name>.cpln.local:3800`, use `0.0.0.0` for the renderer +`host`, and add `tcpSocket` or h2c-aware `exec` probes to the node-renderer workload container. Expose the renderer port +internally, not publicly, unless required. + +Use the same Control Plane probe fields as the same-workload case, but attach them to the separate node-renderer workload +container.
+ +Replace `<workload-name>` with the renderer workload name and `<gvc-name>` with your Control Plane Global Virtual Cloud +name. Use your actual renderer port if it is not `3800`; see Control Plane's +[service-to-service endpoint format](https://docs.controlplane.com/guides/service-to-service). + ## Dockerfile Example > **Why the renderer entry point lives in a dedicated `renderer/` directory:** Production Docker builds commonly strip JavaScript sources after the client bundles are built, since the Rails app no longer needs them at runtime. Keeping the renderer entry point in its own top-level directory (separate from `client/`) makes it trivial to exclude from that cleanup — the Node Renderer process still needs its entry file and dependencies at runtime. @@ -208,7 +308,9 @@ services: RENDERER_HOST: '0.0.0.0' NODE_OPTIONS: '--max-old-space-size=512' healthcheck: - test: ['CMD', 'curl', '-sf', '--http2-prior-knowledge', 'http://localhost:3800/info'] + # --max-time 2 leaves a 1 s buffer below the 3 s orchestrator timeout so curl exits + # cleanly with a non-zero code rather than being killed mid-request. + test: ['CMD', 'curl', '-sf', '--max-time', '2', '--http2-prior-knowledge', 'http://localhost:3800/info'] interval: 5s timeout: 3s retries: 5 @@ -216,6 +318,8 @@ services: ``` > **Note:** In Docker Compose, the containers do not share a network namespace (unlike Kubernetes sidecars), so the renderer must bind to `0.0.0.0` and Rails must connect via the service name (`renderer`). +> The Compose example uses `--max-time 2` with `timeout: 3s` for fast local feedback; the Kubernetes examples use +> `--max-time 3` with `timeoutSeconds: 5` to allow more scheduler and node-load jitter. ## Host Binding for Container Environments @@ -388,7 +492,7 @@ During container startup, you may see `ERR_STREAM_PREMATURE_CLOSE` errors from F **Mitigation:** -1. **Health check endpoint** — The Node Renderer exposes a built-in `/info` endpoint that returns the node version and renderer version.
Because the renderer uses cleartext HTTP/2, Kubernetes `httpGet` probes (HTTP/1.1) are incompatible with this listener. Use a TCP probe, an `exec` probe (for example with `curl --http2-prior-knowledge`, which requires curl with HTTP/2 support in your container image), or a dedicated HTTP/1.1 sidecar/port for probes. For a custom `/health` route with more granular checks, use the `configureFastify()` option (see [JS Configuration: Custom Fastify Configuration](./js-configuration.md#custom-fastify-configuration)). Configure your container orchestrator to wait for it before routing traffic. +1. **Health check endpoint** — The Node Renderer exposes a built-in `/info` endpoint that returns the node version and renderer version. Because the renderer uses cleartext HTTP/2, Kubernetes `httpGet` probes (HTTP/1.1) are incompatible with this listener. Use a TCP probe, an `exec` probe with an h2c-aware client such as `curl --http2-prior-knowledge`, or a dedicated HTTP/1.1 sidecar/port for probes. For a custom `/health` route with more granular checks, use the `configureFastify()` option (see [JS Configuration: Adding a Health Check Endpoint](./js-configuration.md#adding-a-health-check-endpoint)). Configure your container orchestrator to wait for it before routing traffic. 2. **Startup probe** — Configure a startup probe with a generous `initialDelaySeconds`: ```yaml startupProbe: @@ -397,30 +501,105 @@ During container startup, you may see `ERR_STREAM_PREMATURE_CLOSE` errors from F initialDelaySeconds: 10 periodSeconds: 5 failureThreshold: 6 + timeoutSeconds: 1 ``` 3. **Readiness probe** — Ensure traffic is only routed to the renderer when it's ready to accept requests. Prefer an `exec` probe with an h2c-aware client for application-level readiness. 
Use `tcpSocket` only as a minimal fallback that confirms the port is accepting connections: + ```yaml readinessProbe: exec: command: - curl - -sf + - --max-time + - '3' - --http2-prior-knowledge - http://localhost:3800/info timeoutSeconds: 5 periodSeconds: 5 failureThreshold: 3 ``` - > **Note:** The `exec` probe requires curl with HTTP/2 support in your image. Verify with `curl --version | grep HTTP2`. If curl is unavailable, use `tcpSocket` as a fallback. -4. **Liveness probe** — Ensure the renderer is restarted if it becomes unresponsive: + + > **Notes:** + > + > - The YAML uses `/info` so it works before custom Fastify routes exist. Replace `/info` with `/health` after + > registering that route via `configureFastify` if readiness should wait for renderer-specific warm-up checks. + > - Before upgrading an existing readiness probe, keep curl's `--max-time` lower than `timeoutSeconds`. If switching + > from `tcpSocket` to `exec`, verify curl HTTP/2 support in the image first. + > - See the probe command notes below for curl HTTP/2 support, `--max-time`, loaded-node buffers, and + > `initialDelaySeconds` guidance. + + > **Readiness fallback option:** If curl lacks HTTP/2 support in your image, replace that `readinessProbe` with this + > `tcpSocket` block. This checks port reachability, not application-level readiness: + > + > ```yaml + > readinessProbe: + > tcpSocket: + > port: 3800 + > # TCP handshakes should complete quickly; exec/H2 uses timeoutSeconds: 5. + > timeoutSeconds: 1 + > periodSeconds: 5 + > failureThreshold: 3 + > ``` + +4. **Liveness probe** — Ensure the renderer is restarted after hard listener or container failures. Prefer `tcpSocket` + for liveness so transient CPU or GC pauses do not trigger an HTTP round-trip failure and restart an otherwise + recoverable renderer: + ```yaml livenessProbe: + # Add initialDelaySeconds here if no startupProbe is configured. + # Kubernetes 1.20+ defers readiness/liveness until the startupProbe succeeds. 
tcpSocket: port: 3800 + # TCP handshakes should complete quickly; exec/H2 uses timeoutSeconds: 5. + timeoutSeconds: 1 periodSeconds: 10 failureThreshold: 3 ``` + > **Stricter liveness option:** If you need liveness to catch a blocked Node.js event loop, and you have verified curl + > HTTP/2 support in the image, you can use an h2c-aware `exec` probe with a short `--max-time`. Keep external + > dependency checks out of liveness; use readiness for dependency or warm-up gates. + > + > ```yaml + > livenessProbe: + > # Add initialDelaySeconds here if no startupProbe is configured. + > # Kubernetes 1.20+ defers readiness/liveness until the startupProbe succeeds. + > # Requires curl with HTTP/2 support (verify: curl --version | grep -i http2). + > exec: + > command: + > - curl + > - -sf + > - --max-time + > - '3' + > - --http2-prior-knowledge + > - http://localhost:3800/info + > timeoutSeconds: 5 + > periodSeconds: 10 + > failureThreshold: 3 + > ``` + + > **Notes:** + > + > - Keep `/info` as the optional `exec` liveness endpoint. Only substitute `/health` if that route avoids external + > dependency checks and readiness gates. + > - See the probe command notes below for curl HTTP/2 support, `--max-time`, loaded-node buffers, and + > `initialDelaySeconds` guidance. + +> [!NOTE] +> **Probe command notes:** `exec` probes require curl with HTTP/2 support in your image. Verify with +> `curl --version | grep -i http2`; if unavailable, use `tcpSocket` as a fallback. Set curl `--max-time` shorter than the +> orchestrator timeout so curl returns a clean non-zero exit code before Kubernetes terminates the probe process. These +> examples use `--max-time 3` with `timeoutSeconds: 5` for `exec` probes, leaving a 2-second buffer. Readiness and +> liveness omit `initialDelaySeconds` because Kubernetes 1.20+ (startup probe GA) defers them until the startup probe +> succeeds. 
If you skip the startup probe or run an older cluster without startup probe support, add an appropriate +> `initialDelaySeconds`. + +> **Security:** `/info` is unauthenticated even when `password` is configured. Keep the renderer on `localhost` or +> private networking if exposing node and renderer version details is a concern; see +> [Built-in Endpoints](./js-configuration.md#built-in-endpoints). + ### OOM Tracking Distinguish between Rails and Node Renderer OOM kills by checking container-level exit codes: @@ -451,6 +630,10 @@ In production, `logLevel: 'warn'` is sufficient unless actively debugging. A complete pod spec for the sidecar pattern: +> [!NOTE] +> The manifest uses an h2c-aware `exec` probe for readiness and a `tcpSocket` probe for liveness. Keep that split unless +> you intentionally need stricter liveness detection and have verified curl HTTP/2 support in the image. + ```yaml apiVersion: apps/v1 kind: Deployment @@ -513,23 +696,36 @@ spec: initialDelaySeconds: 10 periodSeconds: 5 failureThreshold: 6 + timeoutSeconds: 1 readinessProbe: + # Add initialDelaySeconds here if no startupProbe is configured. + # Kubernetes 1.20+ defers readiness/liveness until the startupProbe succeeds. exec: command: - curl - -sf + - --max-time + - '3' - --http2-prior-knowledge - http://localhost:3800/info timeoutSeconds: 5 periodSeconds: 5 failureThreshold: 3 livenessProbe: + # Add initialDelaySeconds here if no startupProbe is configured. + # Kubernetes 1.20+ defers readiness/liveness until the startupProbe succeeds. tcpSocket: port: 3800 + # TCP handshakes should complete quickly; exec/H2 uses timeoutSeconds: 5. + timeoutSeconds: 1 periodSeconds: 10 failureThreshold: 3 ``` +> **Readiness endpoint:** The manifest uses `/info` for copy-paste safety because that endpoint is built in. Replace +> `/info` with `/health` in the readiness probe after registering that route via `configureFastify` if readiness should +> wait for renderer-specific warm-up checks. 
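If you do switch the manifest's readiness probe from `/info` to a custom `/health` route, the resulting block looks like this (a sketch that keeps the manifest's timing values and assumes the route is already registered via `configureFastify`):

```yaml
readinessProbe:
  exec:
    command:
      - curl
      - -sf
      - --max-time
      - '3'
      - --http2-prior-knowledge
      - http://localhost:3800/health # custom route; /info is the built-in default
  timeoutSeconds: 5
  periodSeconds: 5
  failureThreshold: 3
```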
+ > **Note:** Both containers use the same Docker image, ensuring the React on Rails gem and Node Renderer package versions are always aligned. ## Troubleshooting diff --git a/docs/oss/building-features/node-renderer/js-configuration.md b/docs/oss/building-features/node-renderer/js-configuration.md index 2a3c079066..f5190a9a3b 100644 --- a/docs/oss/building-features/node-renderer/js-configuration.md +++ b/docs/oss/building-features/node-renderer/js-configuration.md @@ -102,22 +102,43 @@ And add a root-level script to the `scripts` section of your `package.json` Run the renderer with `pnpm run node-renderer` (or the equivalent `npm`/`yarn` command for your app). -## Custom Fastify Configuration +## Built-in Endpoints -For advanced use cases, you can customize the Fastify server instance by importing the `master` and `worker` modules directly. This is useful for: +The React on Rails Pro node renderer registers `/info` as a plain `GET` route that bypasses the render and asset +authentication prechecks, so it remains accessible without the renderer password even when `password` is +configured. The route returns `node_version` and `renderer_version`. Treat it as a +shallow process check and keep the renderer on `localhost` or private networking if those runtime version details should +not be exposed.
-- Adding custom routes (e.g., `/health` for container health checks) -- Registering Fastify plugins -- Adding custom hooks for logging or monitoring +Verify it locally: -### Adding a Health Check Endpoint +```bash +curl -s --http2-prior-knowledge http://localhost:3800/info +``` + +Example response: -When running the node-renderer in Docker or Kubernetes, you may need a `/health` endpoint for container health checks: +```json +{ + "node_version": "v20.17.0", + "renderer_version": "1.4.2" +} +``` + +## Custom Fastify Configuration + +For advanced use cases, such as adding custom routes, registering Fastify plugins, or hooking into the request lifecycle, +you can configure the Fastify server directly by importing the `master` and `worker` modules instead of using +`reactOnRailsProNodeRenderer`. The advanced examples below use ES modules for readability. If you want this file to keep running as `node renderer/node-renderer.js`, either keep using the CommonJS pattern shown in the simple example above or switch the file to `.mjs` or `"type": "module"`. +### Adding a Health Check Endpoint + +A common need is a `/health` endpoint for container health checks: + ```js import masterRun from 'react-on-rails-pro-node-renderer/master'; import run, { configureFastify } from 'react-on-rails-pro-node-renderer/worker'; @@ -129,8 +150,9 @@ const config = { // Register a custom health check route configureFastify((app) => { - app.get('/health', (request, reply) => { - reply.send({ status: 'ok' }); + app.get('/health', () => { + // Return a Promise or use async/await if warm-up checks involve async operations. + return { status: 'ok' }; }); }); @@ -143,6 +165,21 @@ if (cluster.isPrimary) { } ``` +The sample `/health` route is intentionally shallow and omits handler parameters because it does not need them. Fastify +also passes `request` and `reply` to handlers if you need to inspect headers, set status codes, or customize the +response. 
Add warm-up or readiness-gate logic inside this handler if readiness should wait for renderer-specific +initialization. To signal not-ready while keeping Fastify's return-value style, add `reply` to the handler parameters, +set the status with `reply.code(503)`, and return `{ status: 'warming_up' }` from that branch. Do not call `reply.send()` +and then return another response object. The `-f` flag in `curl -sf` causes curl to exit non-zero for HTTP 4xx/5xx +responses, so a `503` from this handler correctly fails the probe. Kubernetes exec probes treat any non-zero curl exit +code as a failure; the response body is irrelevant to probe semantics, so you can return whatever payload is useful for +debugging, such as `{ status: 'ok', workers: 4 }`. + +Routes registered with `configureFastify` do not automatically use the renderer's render and asset authentication +prechecks. A custom `/health` route like the one above is reachable without the renderer password unless you add your own +Fastify authentication. Keep probe routes shallow and non-sensitive, and keep the renderer on `localhost` or private +networking. + ### Registering Fastify Plugins You can also register Fastify plugins. This example assumes you're using the same cluster setup pattern shown above: @@ -177,3 +214,97 @@ configureFastify((app) => { ### API Stability The `./master` and `./worker` exports provide direct access to the node-renderer internals. While we strive to maintain backwards compatibility, these are considered advanced APIs. If you only need basic configuration, prefer using the standard `reactOnRailsProNodeRenderer` function with the configuration options documented above. + +## Configuring Startup, Readiness, and Liveness Probes + +Keep the three probe types distinct: + +- **Startup** answers whether the renderer has finished booting. Separate it from readiness and liveness so slow startup + does not cause premature restarts or block traffic. 
+- **Readiness** answers whether the renderer should receive new render requests. Use an application-level endpoint such + as the `/health` route in [Adding a Health Check Endpoint](#adding-a-health-check-endpoint), or the built-in `/info` + endpoint for a shallow process check. +- **Liveness** answers whether the renderer is stuck badly enough that restarting the container is safer. Prefer + `tcpSocket` as the default so transient CPU or GC pauses do not restart an otherwise recoverable renderer; use an + h2c-aware `exec` check only when you intentionally need stricter hung-process detection. + +Only the custom `/health` route requires `configureFastify`; `tcpSocket` probes and `/info` checks work without custom +Fastify setup. The health check route should return `200 OK` when the process can accept probe traffic. + +> **Security note:** See [Built-in Endpoints](#built-in-endpoints) for the note on `/info` exposing runtime version +> details. + +Do not put Rails, database, Redis, or other external dependency checks in the node-renderer's liveness probe. A +temporary dependency outage should not restart every renderer replica. If SSR must be available before Rails receives +traffic, make the Rails readiness endpoint perform a short renderer check. + +The renderer listens with cleartext HTTP/2 (h2c). Do not configure a Kubernetes `httpGet` probe, Control Plane HTTP +probe, or any other HTTP/1.1-only probe directly against the renderer port; those probes are rejected by the h2c +listener. Use one of these probe styles instead: + +| Probe style | When to use it | +| ------------ | ----------------------------------------------------------------------------------------------------------------------------------- | +| `tcpSocket` | Startup checks, default liveness checks, and fallback readiness when curl with HTTP/2 support is unavailable. 
| +| `exec` probe | Application-level readiness and optional stricter liveness checks with an h2c-aware client, such as `curl --http2-prior-knowledge`. | +| HTTP/1.1 | Only if you probe Rails, a separate HTTP/1.1 health sidecar/port, or another endpoint that is not the renderer h2c listener. | + +A passing `tcpSocket` probe means the h2c listener has bound to the port; cluster workers might still be warming up. +Keep an application-level readiness probe if traffic should wait for worker initialization. + +For Kubernetes and platform `tcpSocket` probes, set the renderer `host` to `0.0.0.0` because those probes connect to the +pod or workload IP, not container-local loopback. The default `localhost` binding is fine for `exec` probes that run +inside the renderer container. + +For liveness, start with `tcpSocket`. A fully blocked Node.js event loop may still accept TCP connections and pass that +check, so use an h2c-aware `exec` liveness probe with a short `--max-time` only if you explicitly need stricter +hung-process detection and have verified curl HTTP/2 support in the image. + +> **Note:** The `exec` probe requires curl with HTTP/2 support. Verify with `curl --version | grep -i http2`. If unavailable, +> use a `tcpSocket` probe as a fallback. + +Recommended starting values: + +- **Startup**: Use `tcpSocket` on the renderer port (`3800` by default; use your configured `RENDERER_PORT` value if + different). TCP is enough here because readiness below gates traffic; startup only shields liveness during boot. Start + with `initialDelaySeconds: 10` (first check fires at 10 s; the sixth and final failure fires at + `10 + ((6 - 1) * 5) = 35 s` after container start), `periodSeconds: 5`, `failureThreshold: 6`, and the Kubernetes + default `timeoutSeconds: 1` for a TCP connection check. 
+- **Readiness (custom route)**: Use `exec` with + `curl -sf --max-time 3 --http2-prior-knowledge http://localhost:3800/health` after registering the route with + [`configureFastify`](#adding-a-health-check-endpoint). Start with `timeoutSeconds: 5`, `periodSeconds: 5`, and + `failureThreshold: 3`. +- **Readiness (built-in info)**: Use `exec` with + `curl -sf --max-time 3 --http2-prior-knowledge http://localhost:3800/info`. Use the same timing settings as the + custom-route readiness probe. `/info` is unauthenticated and exposes runtime version details; see the + [security note](#built-in-endpoints) and keep the renderer on private networking. +- **Readiness fallback**: Use `tcpSocket` on the renderer port only if curl with HTTP/2 support is unavailable. This + checks port reachability, not application readiness. +- **Liveness**: Use `tcpSocket` on the renderer port as the default. Start with `timeoutSeconds: 1`, + `periodSeconds: 10`, and `failureThreshold: 3`, matching the Container Deployment examples. Raise + `failureThreshold`, and optionally `periodSeconds`, if hard listener checks restart the container too aggressively in + your environment. +- **Optional stricter liveness**: Use + `curl -sf --max-time 3 --http2-prior-knowledge http://localhost:3800/info` only when you need liveness to catch a + blocked event loop and have verified curl has HTTP/2 support in the image. Keep external dependency and warm-up checks + in readiness, not liveness. + +Substitute `3800` with your actual renderer port in Kubernetes YAML `exec` arrays; shell variable expansion +does not apply there. See the `port` option at the top of this page for Heroku or Control Plane. + +> **Note (startup window):** With these values, the first check fires at `initialDelaySeconds` (10 s), then every +> `periodSeconds` (5 s) thereafter, and the container restarts only if all six consecutive startup checks fail. Increase +> `failureThreshold` or `periodSeconds` if startup regularly takes longer. 
+> The 10-second initial delay only shifts when the first check fires. Omitting it starts checks immediately; the +> failure window still comes from `failureThreshold * periodSeconds`. Reduce `initialDelaySeconds` if your renderer +> reliably opens the port within 1-2 seconds, or keep it to avoid noisy early-failure log entries. + +Readiness and liveness omit `initialDelaySeconds` here because Kubernetes 1.20+ (startup probe GA) defers them until +the startup probe succeeds. If you skip the startup probe or run an older cluster without startup probe support, add an +appropriate `initialDelaySeconds` to each. + +See [Node Renderer: Container Deployment](./container-deployment.md#startup-errors-err_stream_premature_close) for full +Kubernetes YAML examples and the shared probe command notes for curl HTTP/2 support, `--max-time` buffers, and +`initialDelaySeconds` guidance. + +For Control Plane topology-specific `renderer_url`, host binding, and probe target guidance, see +[Control Plane Deployment Shapes](./container-deployment.md#control-plane-deployment-shapes).
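The warm-up gating described in [Adding a Health Check Endpoint](#adding-a-health-check-endpoint) (return `503` with `{ status: 'warming_up' }` until initialization completes) can be factored into a plain function so the gating is easy to unit test without Fastify. A minimal sketch; `healthHandler` and `state.ready` are hypothetical names, not part of the renderer API:

```javascript
// Sketch: readiness-gate logic for a custom /health route, extracted as a
// plain function. `state.ready` stands in for whatever flag your warm-up
// logic sets once renderer-specific initialization finishes.
function healthHandler(state, reply) {
  if (!state.ready) {
    // Fastify style: set the status code, then return the payload.
    reply.code(503);
    return { status: 'warming_up' };
  }
  return { status: 'ok' };
}

// Wire-up inside configureFastify (cluster setup as shown earlier on this page):
//   app.get('/health', (request, reply) => healthHandler(rendererState, reply));
```

Because `curl -sf` exits non-zero on a `503`, the warming-up branch fails the probe exactly as described above, while the payload stays available for debugging.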