Checklist:

Describe the bug

Hi team, I have a problem when switching setCanaryScale between replicas and matchTrafficWeight: true. My steps are below.

The promote flow is as follows: when I promote the rollout to the step that switches to setCanaryScale: matchTrafficWeight: true, the old canary pod from step 1 is terminated immediately and one new pod is created. During that window there is no canary pod alive to serve canary traffic, so the canary flow has downtime (the stable flow is still normal).
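For illustration, a minimal sketch of the kind of canary steps I am describing (the rollout name, weights, replica counts, and pauses here are placeholders, not my exact manifest):

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: example-rollout        # placeholder name
spec:
  strategy:
    canary:
      steps:
      # step 1: pin the canary to a fixed replica count
      - setCanaryScale:
          replicas: 1
      - setWeight: 10
      - pause: {}
      # promoting past this point switches the canary scale mode;
      # this is where the old canary pod is terminated and a new one is created
      - setCanaryScale:
          matchTrafficWeight: true
      - setWeight: 50
      - pause: {}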
How can I prevent this case?
Thanks all
To Reproduce
Expected behavior
When I promote to the step that switches to setCanaryScale: matchTrafficWeight: true, the old canary pod should stay alive until the new canary pod is created.
Screenshots
Version
v1.6.6
Logs
# Paste the logs from the rollout controller
# Logs for the entire controller:
kubectl logs -n argo-rollouts deployment/argo-rollouts
# Logs for a specific rollout:
kubectl logs -n argo-rollouts deployment/argo-rollouts | grep rollout=<ROLLOUTNAME>
Message from the maintainers:
Impacted by this bug? Give it a 👍. We prioritize the issues with the most 👍.