Nodes need reboot after ipvs->iptables->ebpf to work properly #12476

@Whisper40

Description

We are trying to migrate from IPVS mode to BPF mode.
We are using Calico 3.31.4 and the Tigera Operator Helm chart 3.31.4.
We updated the values as shown below to perform the migration, as described here: https://docs.tigera.io/calico/latest/operations/ebpf/enabling-ebpf#enable-the-ebpf-data-plane-automatically-for-self-managed-clusters
We are running Ubuntu 24 and Kubernetes 1.34.5.

            installation:
              calicoNetwork:
                linuxDataplane: BPF
                bpfNetworkBootstrap: Enabled
                kubeProxyManagement: Enabled
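For reference, a sketch of how values like these are typically applied through the Tigera Operator chart. The release name, namespace, and values file name here are assumptions; adjust them to your own setup.

```shell
# Sketch: apply the updated values via the Tigera Operator Helm chart.
# Assumes the chart was installed as release "calico" in namespace
# "tigera-operator" and that the values above live in values.yaml.
helm upgrade calico projectcalico/tigera-operator \
  --version v3.31.4 \
  --namespace tigera-operator \
  -f values.yaml
```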

Expected Behavior

We expect the migration to complete without losing node connectivity, and the cluster to keep working as before.

Current Behavior

The Calico Operator triggers the calico-node rollout and then disables kube-proxy.
About one second after the log line reporting the kube-proxy update, we lose all connectivity on the nodes, including SSH.

We have to reboot every node in the cluster, after which everything comes back. As you can understand, this is not acceptable for a production cluster.

Possible Solution

Steps to Reproduce (for bugs)

Kube-proxy: "IPVS" mode
Calico: linuxDataplane: Iptables -> then we perform the migration described above
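To confirm the starting state before reproducing, these checks may help. This is a sketch assuming a kubeadm-style cluster (where the kube-proxy configuration lives in a ConfigMap in kube-system) and an operator-managed Calico install whose Installation resource is named "default".

```shell
# Check the kube-proxy mode (expects "ipvs" before the migration).
kubectl -n kube-system get configmap kube-proxy -o yaml | grep "mode:"

# Check which dataplane the Calico Installation resource requests
# ("Iptables" before the migration, "BPF" after).
kubectl get installation default \
  -o jsonpath='{.spec.calicoNetwork.linuxDataplane}'
```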

Context

Your Environment

  • Calico version: 3.31.4
  • Calico dataplane (bpf, nftables, iptables, windows etc.): Iptables
  • Orchestrator version (e.g. kubernetes, openshift, etc.): Kubernetes 1.34.5
  • Operating System and version: Ubuntu 24
  • Link to your project (optional):
