Wait until ready returns success immediately #84
Comments
@mattdodge Hey! Thanks a lot for your detailed report. This does seem like a race issue. We currently rely solely on the ready status of the pods matching the given selector. A possible fix here would be to leverage the "revision" number of the deployment, which is inherited down to the ReplicaSet and pods.
But implementing this straightforwardly requires us to change the configuration syntax, maybe by adding a new parameter. @superbrothers WDYT? Did you have any reason to avoid relying on the "revision" number?
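For context, the deployment controller records this revision in the `deployment.kubernetes.io/revision` annotation and stamps the same annotation on the ReplicaSet it creates for each revision. A rough sketch of how it could be inspected, assuming the `my-app` deployment and an `app=my-app` label from the examples below:

```sh
# Revision currently recorded on the Deployment by the deployment controller
kubectl get deployment my-app \
  -o jsonpath='{.metadata.annotations.deployment\.kubernetes\.io/revision}'

# Each ReplicaSet carries the same annotation, so the ReplicaSet belonging
# to the new revision can be identified unambiguously
kubectl get rs -l app=my-app \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.deployment\.kubernetes\.io/revision}{"\n"}{end}'
```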
I would be in favor of a `kubectl rollout status` approach. If we're going that route we could probably make use of the existing `kubectl` param, like:

```yaml
- put: prod-kube
  params:
    kubectl: apply -f ymls/my-app-deployment.yml
- put: prod-kube
  params:
    kubectl: rollout status deployment/my-app --timeout 60s
```
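For what it's worth, `kubectl rollout status` avoids the race described above because it watches the Deployment object itself, comparing `.status.observedGeneration` and the updated/available replica counts against the spec, rather than sampling whichever pods currently match a label selector. It also exits non-zero if the rollout fails or the `--timeout` elapses, so the put step would fail accordingly.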
I'm sorry for the late reply.
Yes, I think adding a separate `rollout status` step like that is the right way. However, note that the put step waits by default, so you would need to disable the built-in wait with `wait_until_ready: 0` on both steps:

```yaml
- put: prod-kube
  params:
    kubectl: apply -f ymls/my-app-deployment.yml
    wait_until_ready: 0
- put: prod-kube
  params:
    kubectl: rollout status deployment/my-app --timeout 60s
    wait_until_ready: 0
```
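With `wait_until_ready: 0` the resource skips its own selector-based readiness check entirely, so the second `put` blocks only on `kubectl rollout status`, which is the command that actually tracks the new revision to completion.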
I will consider deleting the `wait_until_ready` feature.
Is this a BUG REPORT or FEATURE REQUEST?: BUG REPORT
What happened:
The `put` step with a `wait_until_ready_selector` is returning success immediately. It's almost too fast for its own good!
What you expected to happen:
I expect the wait step to wait until the deployment update is complete.
How to reproduce it (as minimally and precisely as possible):
Have a Kubernetes deployment with a normal RollingUpdate strategy. Use this resource to `put` changes to the deployment like so:
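A put step along these lines reproduces it; the label selector value `app=my-app` and the timeout are assumed here for illustration:

```yaml
- put: prod-kube
  params:
    kubectl: apply -f ymls/my-app-deployment.yml
    wait_until_ready: 60
    wait_until_ready_selector: app=my-app
```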
When this step runs, the output shows the wait completing immediately, despite the fact that the new pod/ReplicaSet hasn't actually spun up yet. It seems like the resource is checking the ready status before the new pod is even created; likely some kind of race condition with Kubernetes.
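To see why, consider what the cluster looks like in the instant after the apply (same assumed selector as above):

```sh
kubectl apply -f ymls/my-app-deployment.yml

# At this moment the new ReplicaSet may not have created any pods yet,
# so the only pods matching the selector are the OLD ones, and they are
# still Ready. A selector-based readiness check therefore passes
# immediately, before the rollout has made any progress.
kubectl get pods -l app=my-app
```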
Environment: