
fix(hlf-ordnode): mount emptyDir at /etc/hyperledger/fabric for Fabric 3.1.2+ upgrade #317

Open

dviejokfs wants to merge 1 commit into main from fix/ordnode-fabric-3.1.2-emptydir-mount

Conversation

@dviejokfs
Contributor

Summary

Enables upgrading the orderer image to Hyperledger Fabric 3.1.2 and onwards by mounting an emptyDir over /etc/hyperledger/fabric in the hlf-ordnode chart, overriding the image's declared VOLUME and bypassing a container-runtime regression.

What happened

Fabric 3.1.2 (hyperledger/fabric#5236, commit 77db242a) restructured the contents of the image's declared VOLUME /etc/hyperledger/fabric. On several container runtimes — most notably containerd, which is the default on the majority of managed Kubernetes distributions — this triggers a "runtime copy of volume" step at pod start that fails against the new 3.1.2 layout. The result: orderer pods crashloop the moment you bump the image past 3.1.1.
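For context, a quick way to see the declared volume that triggers this, assuming the standard hyperledger/fabric-orderer image name (the command only reads image metadata):

```bash
# List the anonymous volumes the orderer image declares. Runtimes that
# materialize anonymous volumes copy the image's directory contents into
# them at container start; with the 3.1.2 layout, that copy fails.
docker image inspect hyperledger/fabric-orderer:3.1.2 \
  --format '{{json .Config.Volumes}}'
# Expect the output to include "/etc/hyperledger/fabric"
```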

Symptom seen in the field:

  • Orderer pods stuck in CrashLoopBackOff immediately after the image bump.
  • Errors during the runtime's pre-start volume materialization (before any orderer process logs).
  • Reverting the image tag to 3.1.1 resolves it, confirming the issue is the 3.1.2 image layout interacting with the runtime's anonymous-volume copy behavior.
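If you need to confirm you are hitting this case, a minimal triage sketch (namespace and label selector are hypothetical; adjust for your release):

```bash
# Pods crashloop with little or no orderer output; the real error surfaces
# in pod events during volume setup, before the orderer process starts.
kubectl -n fabric get pods -l app.kubernetes.io/name=hlf-ordnode
kubectl -n fabric describe pod <orderer-pod>      # look at Events
kubectl -n fabric logs <orderer-pod> --previous   # little or no output
```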

The fix

Mount an emptyDir over /etc/hyperledger/fabric in the orderer container. This overrides the image's declared VOLUME, so the runtime never performs the broken copy step. The orderer continues to read its real configuration from /var/hyperledger/fabric/config (unchanged), so behavior is identical for existing 1.4.x / 2.x / 3.0.x / 3.1.0 / 3.1.1 deployments; this is purely a forward-compatibility unblocker.
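Rendered, the change amounts to a Deployment fragment like the following (a sketch: the fabric-cfg volume and its mount path come from this PR; the container name and the config volume name are illustrative):

```yaml
spec:
  template:
    spec:
      containers:
        - name: ordnode                    # illustrative container name
          volumeMounts:
            # Shadows the image's declared VOLUME, so the runtime never
            # attempts its anonymous-volume copy at this path.
            - name: fabric-cfg
              mountPath: /etc/hyperledger/fabric
            # The real configuration mount is unchanged.
            - name: config                 # illustrative volume name
              mountPath: /var/hyperledger/fabric/config
      volumes:
        - name: fabric-cfg
          emptyDir: {}
```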

Changes

  • charts/hlf-ordnode/templates/deployment.yaml — add fabric-cfg emptyDir volume and mount it at /etc/hyperledger/fabric in the orderer container.
  • charts/hlf-ordnode/Chart.yaml — bump chart version 1.4.0 -> 1.4.1, bump appVersion 1.4.3 -> 3.1.2.
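The Chart.yaml delta is just the two version fields (sketch; all other fields unchanged):

```yaml
# charts/hlf-ordnode/Chart.yaml
version: 1.4.1        # was 1.4.0
appVersion: "3.1.2"   # was 1.4.3
```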

Why this is safe for existing deployments

  • The new mount targets /etc/hyperledger/fabric, which the chart never wrote to before — there is no overlap with the existing /var/hyperledger/fabric/config ConfigMap mount that the orderer actually reads.
  • An emptyDir starts empty for every new pod and lives only as long as the pod, which is exactly what we want for a path the orderer doesn't consume.
  • No values changes, no API surface changes — purely additive in the rendered Deployment.

Test plan

  • Render the chart locally (helm template) and confirm the fabric-cfg volume + mount appear on the orderer container.
  • Deploy on a cluster with image.tag=3.1.2 and confirm the orderer pod starts cleanly (no crashloop, /healthz returns 200).
  • Confirm an existing orderer at 3.1.1 upgrades to 3.1.2 in place via the operator without data loss.
  • Regression check: deploy image.tag=3.1.1 (and 2.5.x) with this chart and confirm no behavior change.
  • Verify kubectl exec deploy/<orderer> -- ls /etc/hyperledger/fabric returns no output (the mount is an empty emptyDir); see the sketch below.
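A sketch of the render and in-cluster checks from the list above (release and resource names are placeholders):

```bash
# 1. Confirm the fabric-cfg volume and mount appear in the rendered output.
helm template charts/hlf-ordnode --set image.tag=3.1.2 \
  | grep -n -A3 'fabric-cfg'

# 2. After rollout, the shadowed path should list nothing.
kubectl exec deploy/<orderer> -- ls -A /etc/hyperledger/fabric
```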

…c 3.1.2+ compatibility

Fabric 3.1.2 (hyperledger/fabric#5236, commit 77db242a) restructured the
contents of the image's declared `VOLUME /etc/hyperledger/fabric`. On many
container runtimes (notably containerd) this triggers a "runtime copy of
volume" step at pod start that fails against the new 3.1.2 layout, causing
orderer pods to crashloop after upgrading the orderer image past 3.1.1.

Mounting an empty `emptyDir` over `/etc/hyperledger/fabric` overrides the
image's declared VOLUME, so the runtime never performs the broken copy.
The orderer continues to read its real configuration from
`/var/hyperledger/fabric/config` (unchanged), so behavior is identical for
existing 1.4.x / 2.x / 3.0.x / 3.1.0 / 3.1.1 deployments.

Bump chart version 1.4.0 -> 1.4.1 and appVersion 1.4.3 -> 3.1.2.

This fix is required to upgrade orderers to Fabric 3.1.2 and onwards.
