Description
I have various Schedule manifests with the backup config set to successfulJobsHistoryLimit: 0. This used to clean up all backup pods, pre-backup pods, etc. associated with a Backup created by a Schedule. However, since a recent update this behavior no longer works consistently: sometimes successful backup pods are deleted after completion, sometimes they are not.
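For example, after a Backup finishes I check for leftover pods like this (namespace taken from my manifest below; the field selector is standard kubectl):

kubectl get pods -n foo --field-selector=status.phase=Succeeded

With successfulJobsHistoryLimit: 0 I would expect this list to be empty shortly after the backup completes, but the pods sometimes remain.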
Additional Context
This feature was working in v2.13.1
Logs
"error": "Operation cannot be fulfilled on backups.k8up.io \"<REDACTED>\": the object has been modified; please apply your changes to the latest version and try again"}
github.com/k8up-io/k8up/v2/operator/job.(*Config).patchConditions
/home/runner/work/k8up/k8up/operator/job/status.go:57
github.com/k8up-io/k8up/v2/operator/job.(*Config).SetConditionTrueWithMessage
/home/runner/work/k8up/k8up/operator/job/status.go:30
github.com/k8up-io/k8up/v2/operator/executor.(*Generic).CleanupOldResources
/home/runner/work/k8up/k8up/operator/executor/generic.go:54
github.com/k8up-io/k8up/v2/operator/backupcontroller.(*BackupExecutor).cleanupOldBackups
/home/runner/work/k8up/k8up/operator/backupcontroller/executor.go:366
github.com/k8up-io/k8up/v2/operator/backupcontroller.(*BackupReconciler).Provision
/home/runner/work/k8up/k8up/operator/backupcontroller/controller.go:54
github.com/k8up-io/k8up/v2/operator/reconciler.(*controller[...]).Reconcile
/home/runner/work/k8up/k8up/operator/reconciler/reconciler.go:58
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.23.3/pkg/internal/controller/controller.go:222
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.23.3/pkg/internal/controller/controller.go:479
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.23.3/pkg/internal/controller/controller.go:438
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func1.1
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.23.3/pkg/internal/controller/controller.go:313
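The error in the stack trace looks like a standard optimistic-concurrency conflict on the status patch (the Backup's resourceVersion changed between read and write). For illustration only, here is a minimal sketch of a conflict-tolerant status update using client-go's retry helper; the function name and condition handling are hypothetical and not k8up's actual code:

package cleanup

import (
	"context"

	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/util/retry"
	"sigs.k8s.io/controller-runtime/pkg/client"

	k8upv1 "github.com/k8up-io/k8up/v2/api/v1"
)

// setCleanupCondition is a hypothetical helper: it re-reads the Backup on
// every attempt and retries the status update when the apiserver reports a
// resourceVersion conflict, instead of surfacing the error to the caller.
func setCleanupCondition(ctx context.Context, c client.Client, key types.NamespacedName) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		backup := &k8upv1.Backup{}
		if err := c.Get(ctx, key, backup); err != nil {
			return err
		}
		// ... set the cleanup condition on backup.Status.Conditions here ...
		return c.Status().Update(ctx, backup)
	})
}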
Expected Behavior
The k8up operator should clean up all successful backups when this is specified in the Schedule config.
Steps To Reproduce
apiVersion: k8up.io/v1
kind: Schedule
metadata:
  name: foo-backup-schedule
  namespace: foo
  annotations:
    prometheus.io/scrape: "false"
  labels:
    component: backup
spec:
  podConfigRef:
    name: backup-config
  backup:
    # Backup every day at a random start time
    schedule: "@daily-random"
    failedJobsHistoryLimit: 1
    successfulJobsHistoryLimit: 0
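To see what the Schedule produced, I list the generated Backup objects and their jobs (resource name as in the error above, namespace from the manifest):

kubectl get backups.k8up.io -n foo
kubectl get jobs,pods -n foo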
Version of K8up
v2.15.0
Version of Kubernetes
v1.32.10
Distribution of Kubernetes
K3s