2 changes: 1 addition & 1 deletion .env
@@ -17,7 +17,7 @@ JAEGERTRACING_IMAGE=jaegertracing/jaeger:2.14.1
OPENSEARCH_IMAGE=opensearchproject/opensearch:3.5.0
OPENSEARCH_DOCKERFILE=./src/opensearch/Dockerfile
POSTGRES_IMAGE=postgres:17.8
-PROMETHEUS_IMAGE=quay.io/prometheus/prometheus:v3.9.1
+PROMETHEUS_IMAGE=quay.io/prometheus/prometheus:v3.11.1
VALKEY_IMAGE=valkey/valkey:9.0.2-alpine3.23
TRACETEST_IMAGE=kubeshop/tracetest:${TRACETEST_IMAGE_VERSION}

8 changes: 8 additions & 0 deletions CHANGELOG.md
@@ -45,6 +45,14 @@ the release.
* [load-generator] Wait for Roof Binoculars image to load in web tasks, and fix
task failures due to missing `tracer` attribute
([#3171](https://github.com/open-telemetry/opentelemetry-demo/pull/3171))
* [prometheus] Use PromQL `info()` function instead of resource attribute
promotion. Resource attributes are now accessed via `target_info` at query
time, reducing metric cardinality. Requires Prometheus v3.10.0+ with
`--enable-feature=promql-experimental-functions`. Grafana dashboards (APM,
PostgreSQL, OpenTelemetry Collector) and the cart service alert are updated
to use `info()`. Adds Kubernetes deployment support with Helm values and
deploy scripts.
([#2869](https://github.com/open-telemetry/opentelemetry-demo/pull/2869))
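The cardinality win described above can be seen in a query sketch. The metric and label names below are illustrative, not taken from this PR: `info()` joins the selected series with matching `target_info` series at query time, so the resource attribute never has to be promoted onto the request metric itself.

```promql
# Hypothetical request-rate query: pull k8s_namespace_name from
# target_info via info() instead of promoting it onto every series.
# Requires --enable-feature=promql-experimental-functions.
sum by (k8s_namespace_name) (
  info(
    rate(http_server_request_duration_seconds_count[5m]),
    {k8s_namespace_name=~".+"}
  )
)
```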

## 2.2.0

7 changes: 4 additions & 3 deletions docker-compose.yml
@@ -147,7 +147,7 @@ services:
- GOMEMLIMIT=16MiB
- OTEL_EXPORTER_OTLP_ENDPOINT
- OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE
- OTEL_RESOURCE_ATTRIBUTES=${OTEL_RESOURCE_ATTRIBUTES},service.criticality=critical
- OTEL_RESOURCE_ATTRIBUTES=${OTEL_RESOURCE_ATTRIBUTES},service.criticality=critical,service.instance.id=checkout
- OTEL_SERVICE_NAME=checkout
depends_on:
cart:
@@ -500,7 +500,7 @@ services:
- GOMEMLIMIT=16MiB
- OTEL_EXPORTER_OTLP_ENDPOINT
- OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE
- OTEL_RESOURCE_ATTRIBUTES=${OTEL_RESOURCE_ATTRIBUTES},service.criticality=high
- OTEL_RESOURCE_ATTRIBUTES=${OTEL_RESOURCE_ATTRIBUTES},service.criticality=high,service.instance.id=product-catalog
- OTEL_SERVICE_NAME=product-catalog
- OTEL_CONFIG_FILE=/otel-config.yml
- OTEL_SEMCONV_STABILITY_OPT_IN=database
@@ -672,7 +672,7 @@ services:
- FLAGD_OTEL_COLLECTOR_URI=${OTEL_COLLECTOR_HOST}:${OTEL_COLLECTOR_PORT_GRPC}
- FLAGD_METRICS_EXPORTER=otel
- GOMEMLIMIT=60MiB
- OTEL_RESOURCE_ATTRIBUTES=${OTEL_RESOURCE_ATTRIBUTES},service.criticality=low
- OTEL_RESOURCE_ATTRIBUTES=${OTEL_RESOURCE_ATTRIBUTES},service.criticality=low,service.instance.id=flagd
- OTEL_SERVICE_NAME=flagd
command: [
"start",
@@ -911,6 +911,7 @@ services:
- --web.route-prefix=/
- --web.enable-otlp-receiver
- --enable-feature=exemplar-storage
- --enable-feature=promql-experimental-functions
volumes:
- ./src/prometheus/prometheus-config.yaml:/etc/prometheus/prometheus-config.yaml
deploy:
68 changes: 68 additions & 0 deletions kubernetes/deploy-kind.sh
Contributor:
All of these files in kubernetes/ should be removed. Ultimately we are moving all K8s support to be based on our Helm chart. The existing manifest file here also needs to be removed, since this folder just causes confusion for other contributors.

Author:
I see, thanks for making me aware.

Author (@aknuds1, Apr 10, 2026):
I thought about it. The kubernetes/deploy-kind.sh script is useful for me to install into a local k8s cluster and test my changes. It's already based around Helm; what's the argument against it? One would think it's generally useful for testing the OTel demo in k8s mode?

@@ -0,0 +1,68 @@
#!/bin/sh
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0

# Deploy OpenTelemetry Demo to a local Kind cluster
#
# This script creates a Kind cluster and delegates the actual deployment
# to deploy.sh with Kind-specific values.
#
# Prerequisites:
# - kind: https://kind.sigs.k8s.io/docs/user/quick-start/#installation
# - kubectl
# - helm

set -e

CLUSTER_NAME="${CLUSTER_NAME:-otel-demo}"
KUBE_CONTEXT="kind-${CLUSTER_NAME}"
export NAMESPACE="${NAMESPACE:-otel-demo}"
export RELEASE_NAME="${RELEASE_NAME:-opentelemetry-demo}"
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"

# Check prerequisites
command -v kind >/dev/null 2>&1 || { echo "Error: kind is not installed. See https://kind.sigs.k8s.io/docs/user/quick-start/#installation"; exit 1; }
command -v kubectl >/dev/null 2>&1 || { echo "Error: kubectl is not installed."; exit 1; }
command -v helm >/dev/null 2>&1 || { echo "Error: helm is not installed."; exit 1; }

echo "=== OpenTelemetry Demo on Kind ==="
echo "Cluster: $CLUSTER_NAME"
echo ""

# Create Kind cluster if it doesn't exist
if ! kind get clusters 2>/dev/null | grep -q "^${CLUSTER_NAME}$"; then
  echo "Creating Kind cluster '$CLUSTER_NAME'..."
  kind create cluster --config "$SCRIPT_DIR/kind-config.yaml" --name "$CLUSTER_NAME"
  echo ""
else
  echo "Kind cluster '$CLUSTER_NAME' already exists."
  echo ""
fi

# Deploy using the shared script with Kind-specific values
"$SCRIPT_DIR/deploy.sh" \
--context "$KUBE_CONTEXT" \
-f "$SCRIPT_DIR/values-kind.yaml" \
--timeout 10m

# Wait for pods
echo ""
echo "Waiting for pods to be ready..."
kubectl --context "$KUBE_CONTEXT" wait --for=condition=ready \
pod -l app.kubernetes.io/instance="$RELEASE_NAME" \
--namespace "$NAMESPACE" --timeout=5m 2>/dev/null || true

echo ""
echo "Access the demo:"
echo " Frontend: http://localhost:8080 (via Kind NodePort)"
echo ""
echo "For Grafana, Prometheus, Jaeger use port-forward:"
echo " kubectl --context $KUBE_CONTEXT port-forward svc/grafana 3000:80 -n $NAMESPACE"
echo " kubectl --context $KUBE_CONTEXT port-forward svc/prometheus 9090:9090 -n $NAMESPACE"
echo " kubectl --context $KUBE_CONTEXT port-forward svc/jaeger 16686:16686 -n $NAMESPACE"
echo ""
echo "View pods:"
echo " kubectl --context $KUBE_CONTEXT get pods -n $NAMESPACE"
echo ""
echo "Delete cluster when done:"
echo " kind delete cluster --name $CLUSTER_NAME"
124 changes: 124 additions & 0 deletions kubernetes/deploy.sh
@@ -0,0 +1,124 @@
#!/bin/sh
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0

# Deploy OpenTelemetry Demo to a Kubernetes cluster
#
# This script:
# 1. Installs/upgrades the Helm chart with info() function values
# 2. Deploys custom Grafana dashboards that use the info() function
#
# Usage:
# kubernetes/deploy.sh --context kind-otel-demo
# kubernetes/deploy.sh --context kind-otel-demo -f kubernetes/values-kind.yaml
#
# The --context argument is required and passed to both kubectl and helm.
# All other arguments are passed to helm upgrade.
#
# Environment variables:
# NAMESPACE - Kubernetes namespace (default: otel-demo)
# RELEASE_NAME - Helm release name (default: opentelemetry-demo)

set -e

NAMESPACE="${NAMESPACE:-otel-demo}"
RELEASE_NAME="${RELEASE_NAME:-opentelemetry-demo}"
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
REPO_ROOT="$(dirname "$SCRIPT_DIR")"

# Parse --context argument
KUBE_CONTEXT=""
HELM_ARGS=""
while [ $# -gt 0 ]; do
  case "$1" in
    --context)
      KUBE_CONTEXT="$2"
      shift 2
      ;;
    *)
      HELM_ARGS="$HELM_ARGS $1"
      shift
      ;;
  esac
done

if [ -z "$KUBE_CONTEXT" ]; then
  echo "Error: --context is required"
  echo "Usage: $0 --context <kube-context> [helm args...]"
  exit 1
fi

KUBECTL="kubectl --context $KUBE_CONTEXT"
HELM_CONTEXT="--kube-context $KUBE_CONTEXT"

echo "=== Deploying OpenTelemetry Demo ==="
echo "Namespace: $NAMESPACE"
echo "Release: $RELEASE_NAME"
echo "Context: $KUBE_CONTEXT"
echo ""

# Add Helm repo if not already added
echo "Adding Helm repository..."
helm repo add --force-update open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts

# Create namespace if it doesn't exist
$KUBECTL create namespace "$NAMESPACE" --dry-run=client -o yaml | $KUBECTL apply -f -

# Install/upgrade the Helm chart
echo ""
echo "Installing/upgrading Helm chart..."
# shellcheck disable=SC2086
helm upgrade --install "$RELEASE_NAME" open-telemetry/opentelemetry-demo \
  $HELM_CONTEXT \
  --namespace "$NAMESPACE" \
  -f "$SCRIPT_DIR/values.yaml" \
  $HELM_ARGS \
  --wait

# Deploy custom dashboards as ConfigMaps.
# Delete conflicting dashboards from Helm chart that don't use info() function.
echo ""
echo "Deploying custom Grafana dashboards..."
echo " - Removing default Helm chart dashboards..."
$KUBECTL delete configmap grafana-dashboard-apm-dashboard --namespace "$NAMESPACE" --ignore-not-found
$KUBECTL delete configmap grafana-dashboard-postgresql-dashboard --namespace "$NAMESPACE" --ignore-not-found
$KUBECTL delete configmap grafana-dashboard-opentelemetry-collector --namespace "$NAMESPACE" --ignore-not-found

echo " - APM Dashboard"
$KUBECTL create configmap apm-dashboard \
--from-file=apm-dashboard.json="$REPO_ROOT/src/grafana/provisioning/dashboards/demo/apm-dashboard.json" \
--namespace "$NAMESPACE" \
--dry-run=client -o yaml | $KUBECTL apply -f -
$KUBECTL label configmap apm-dashboard grafana_dashboard=1 --namespace "$NAMESPACE" --overwrite

echo " - PostgreSQL Dashboard"
$KUBECTL create configmap postgresql-dashboard \
--from-file=postgresql-dashboard.json="$REPO_ROOT/src/grafana/provisioning/dashboards/demo/postgresql-dashboard.json" \
--namespace "$NAMESPACE" \
--dry-run=client -o yaml | $KUBECTL apply -f -
$KUBECTL label configmap postgresql-dashboard grafana_dashboard=1 --namespace "$NAMESPACE" --overwrite

echo " - OpenTelemetry Collector Dashboard"
$KUBECTL create configmap otel-collector-dashboard \
--from-file=opentelemetry-collector.json="$REPO_ROOT/src/grafana/provisioning/dashboards/demo/opentelemetry-collector.json" \
--namespace "$NAMESPACE" \
--dry-run=client -o yaml | $KUBECTL apply -f -
$KUBECTL label configmap otel-collector-dashboard grafana_dashboard=1 --namespace "$NAMESPACE" --overwrite

# Restart Grafana to pick up the new dashboards
echo ""
echo "Restarting Grafana to load dashboards..."
$KUBECTL rollout restart deployment/grafana --namespace "$NAMESPACE" 2>/dev/null || \
  $KUBECTL rollout restart deployment/"$RELEASE_NAME"-grafana --namespace "$NAMESPACE" 2>/dev/null || \
  echo " (Could not restart Grafana - dashboards will load on next restart)"

echo ""
echo "=== Deployment complete ==="
echo ""
echo "Access the demo:"
echo " kubectl --context $KUBE_CONTEXT port-forward svc/frontend-proxy 8080:8080 -n $NAMESPACE"
echo " Open http://localhost:8080"
echo ""
echo "Access Grafana:"
echo " kubectl --context $KUBE_CONTEXT port-forward svc/grafana 3000:80 -n $NAMESPACE"
echo " Open http://localhost:3000 (admin/admin)"
18 changes: 18 additions & 0 deletions kubernetes/kind-config.yaml
@@ -0,0 +1,18 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0

# Kind cluster configuration for OpenTelemetry Demo
# Creates a cluster with port mapping for the frontend proxy
#
# Usage:
# kind create cluster --config kubernetes/kind-config.yaml --name otel-demo
#
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      # Frontend proxy (main entry point) - exposed via NodePort
      - containerPort: 30080
        hostPort: 8080
        protocol: TCP
25 changes: 25 additions & 0 deletions kubernetes/values-kind.yaml
@@ -0,0 +1,25 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0

# Additional Helm values for Kind deployment
# Use with: -f values.yaml -f values-kind.yaml
#
# This configures the frontend-proxy service as NodePort for Kind access
# and increases memory limits for services that need more than the defaults

components:
  frontend-proxy:
    service:
      type: NodePort
      nodePort: 30080
  # Increase memory limits for services that OOMKill with defaults
  product-catalog:
    resources:
      limits:
        memory: 100Mi
  flagd:
    resources:
      limits:
        memory: 500Mi
    # Disable flagd-ui sidecar - it OOMKills even with 1Gi limit
    sidecarContainers: []