What steps did you take:
The race condition occurs during a rolling upgrade, node drain, or eviction, but can be deterministically reproduced with the following steps:
- Deploy kapp-controller to a cluster.
- Verify the APIService routing works and capture the valid CA bundle:

  ```
  kubectl get apiservice v1alpha1.data.packaging.carvel.dev -o jsonpath='{.spec.caBundle}'
  ```

- Simulate a terminating pod corrupting the routing by patching the APIService with invalid dummy data:

  ```
  kubectl patch apiservice v1alpha1.data.packaging.carvel.dev --type=merge -p '{"spec":{"caBundle":"random-data"}}'
  ```
- Check the APIService again and observe that the invalid caBundle persists.
What happened:
The APIService caBundle remains broken indefinitely. kapp-controller currently executes a one-time patch of the v1alpha1.data.packaging.carvel.dev APIService object during pod initialization. It does not detect configuration drift.
What did you expect:
kapp-controller should continuously monitor its registered APIService and act as the active source of truth. If the caBundle drifts or is overwritten by an old pod, it should automatically reconcile and restore the correct active certificate without requiring a manual pod restart.
Anything else you would like to add:
[Additional information that will assist in solving the issue.]
Environment:
- kapp Controller version (execute `kubectl get deployment -n kapp-controller kapp-controller -o yaml`; the version is recorded in the `kbld.k14s.io/images` annotation):
- Kubernetes version - 1.32.3
Vote on this request
This is an invitation to the community to vote on issues, to help us prioritize our backlog. Use the "smiley face" up to the right of this comment to vote.
👍 "I would like to see this addressed as soon as possible"
👎 "There are other more important things to focus on right now"
We are also happy to receive and review Pull Requests if you want to help working on this issue.