Feature request: Support more Workload kinds: argo rollouts? #397
Comments
Thanks for the feature request. This seems like a high-value / low-cost feature. We'll work on getting it into our upcoming iterations.
@enocom Should I open a second issue for the
If argo rollout support is added, how will it interact with the Rollout Strategy? The API doc says it will follow the Strategy set on that workload, but when testing the operator I found out that only the Pods are modified, not the workload's pod template spec, so it seems hard to guarantee the rollout strategy of unknown workload controllers that way. (Side question: if we set rolloutStrategy to None, will only new pods (independently created) get the new proxy?)
Let me dig into some of the questions and get back to you. Big picture, Argo is widely used and should work seamlessly with the Operator.
Hi @thomas-riccardi, thank you for taking the time to write. You have raised some interesting questions. I hope these answers help:

If we set rolloutStrategy to None, will only new pods (independently created) get the new proxy?
Yes. I wrote a longer explanation of rolloutStrategy below.

Are any additional workloads supported?
Not yet. The operator only supports these built-in workload types: Pod, ReplicaSet, Deployment, DaemonSet, Job, and CronJob. We would consider adding support for additional, widely used workload types like ArgoCD Rollout or KNative Functions. Please let us know which additional workload types the operator should support.

Is there a bug in kind: Pod + selector.matchLabels?
It is important to remember that once a pod is created, you cannot modify its PodSpec. When you create an AuthProxyWorkload using kind: Pod + selector.matchLabels, the operator should add the proxy container to pods created after the creation of the AuthProxyWorkload. But the operator cannot add a proxy container to pods that existed before the creation of the AuthProxyWorkload; it will only add the annotation to those pre-existing pods. If you are seeing different behavior when you use Pod + selector.matchLabels, please file a bug.

Why modify the pod instead of the workload's PodSpec?
Thank you, I have been waiting for someone to ask this question. The operator adds the proxy container to Pods during pod creation, rather than modifying the PodSpec of a workload. We spent quite some time debating between these two ways of adding a container to a pod. Ultimately, we decided that the operator would add the container to the pod, not the PodSpec, for a few reasons:
How does rolloutStrategy work?
Kubernetes only permits modification of the PodSpec on some of the built-in workload types: Deployment, DaemonSet, and CronJob. Here's what happens when you modify an AuthProxyWorkload when rolloutStrategy is set to "Workload": the operator updates an annotation on the workload's PodSpec, which causes the workload controller to replace its pods following the rollout strategy configured on that workload, and the operator adds the updated proxy container to each new pod as it is created.
The other built-in workload types do not allow their PodSpec to be modified, so the operator cannot trigger a rollout when the AuthProxyWorkload resource changes. When rolloutStrategy is set to "None", the operator will not attempt to trigger a rollout on a Deployment or DaemonSet when the AuthProxyWorkload changes. This is a good choice when you need to carefully orchestrate the replacement of workload pods.
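For reference, a minimal sketch of where rolloutStrategy could sit in an AuthProxyWorkload manifest. The apiVersion, resource names, connection string, and exact field paths are assumptions based on the fields quoted in this thread; consult the operator's CRD reference for the authoritative schema.

```yaml
# Sketch only: field paths follow this thread; names and values are placeholders.
apiVersion: cloudsql.cloud.google.com/v1   # assumed from the annotation domain seen below
kind: AuthProxyWorkload
metadata:
  name: example-proxy                      # hypothetical name
spec:
  workload:
    kind: Deployment                       # one of the supported built-in kinds
    name: my-app                           # hypothetical workload name
  authProxyContainer:
    rolloutStrategy: None                  # "None": no rollout is triggered; only
                                           # newly created pods get the new proxy
  instances:
    - connectionString: "my-project:my-region:my-instance"  # placeholder
```

With the "Workload" setting described above, changing this resource would instead trigger a rollout on a Deployment, DaemonSet, or CronJob via the annotation mechanism the maintainer explains.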
Thank you @hessjcg for the detailed explanation, I was looking for such documentation and didn't find it.

Kind: Pod
OK, it makes total sense for pre-existing Pods. I confirm it works as expected: new Pods are patched.

Workload support
=> It would be useful to us to get support for argo rollouts workloads. In the meantime I think we will use Pod + selector +

Why modify the pod instead of the workload's PodSpec? & rolloutStrategy
For Jobs it makes sense: it's a workload kind that does not allow PodSpec edits; but for the other kinds it could make sense. I wrongly assumed that the operator would not use the workload PodSpec update features (since it didn't patch the containers), but in fact it does, minimally: via Pod annotations. => Everything makes sense to me now!
Expected Behavior
Support custom Workloads kinds: argo rollouts
Actual Behavior
Steps to Reproduce the Problem
1. Create an AuthProxyWorkload with spec.workload.kind: Rollout
2. kubectl apply
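A sketch of the manifest these steps imply; the apiVersion, names, and instance values are placeholder assumptions, and only spec.workload.kind: Rollout is taken from the steps above.

```yaml
# Sketch of the (currently unsupported) configuration described above.
apiVersion: cloudsql.cloud.google.com/v1   # assumed
kind: AuthProxyWorkload
metadata:
  name: rollout-proxy                      # hypothetical
spec:
  workload:
    kind: Rollout                          # the Argo Rollouts kind; not supported today
    name: my-rollout                       # hypothetical
  instances:
    - connectionString: "my-project:my-region:my-instance"  # placeholder
```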
Specifications
more
I also tried with kind: Pod & selector.matchLabels, but it doesn't seem to work: the Pod does get the cloudsql.cloud.google.com/vesta-web: 1,gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.3.0 annotation, but no extra container is added...
I also tried with kind: Deployment & name, because our argo Rollout uses a Deployment to define the pod spec, but I get the same result as with kind: Pod.
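For completeness, a sketch of the kind: Pod + selector.matchLabels variant tried above; the label and other values are hypothetical, and the resource name is inferred from the annotation key quoted above.

```yaml
apiVersion: cloudsql.cloud.google.com/v1   # assumed
kind: AuthProxyWorkload
metadata:
  name: vesta-web                          # inferred from the annotation key above
spec:
  workload:
    kind: Pod
    selector:
      matchLabels:
        app: vesta-web                     # hypothetical pod label
  instances:
    - connectionString: "my-project:my-region:my-instance"  # placeholder
```

Per the maintainer's explanation above, pods matched this way that already existed only receive the annotation; the proxy container is injected only into pods created after the AuthProxyWorkload exists.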
even more
When used on a standard Deployment, the pod spec is not changed except for the annotation.
I initially assumed the operator would patch the pod spec to inject the extra container there. But it seems to do it later, only on the Pod.
Is there a reason for that? It's not documented why; it was implemented in #41.
(It seems to theoretically increase the load on the operator, and raises High Availability questions.)
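Based on the annotation value quoted earlier in this thread, the only change visible on the workload itself appears to be an annotation along these lines (shown here on a Deployment's pod template; the key and value are copied from the value quoted above, and their exact semantics are not documented in this thread):

```yaml
# Excerpt of a Deployment after the operator processes it (sketch only).
spec:
  template:
    metadata:
      annotations:
        # appears to follow a "<generation>,<proxy image>" format, as quoted above
        cloudsql.cloud.google.com/vesta-web: "1,gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.3.0"
```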