WIP Issue

Description of problem

There are scenarios where a workload may need to be restarted in order for configuration changes to take effect.
For example, when a configuration change for the workload is applied to Spring Config Server, the app needs to restart to consume and apply the new config.
Currently, to get such config changes picked up, someone must delete and recreate the workload, or wait until the pod happens to be replaced or auto-scaled, to name a few options.
Since time is always of the essence, providing a straightforward way to restart a workload at will would be valuable.
Proposed solution (TBD)
Given <Some Condition>
When <Something Happens>
Then <This other thing should happen?>
Example
<Code snippets that illustrate the when/then blocks>
Describe alternatives to be considered
delete and recreate the workload
provide a restart command: tanzu apps workload restart workload-name (possibly with flags to control the rollout strategy: all at once, sequential, batch size, etc.)
kubectl rollout restart deploy/XYZ sets a kubectl.kubernetes.io/restartedAt date in spec.template.metadata.annotations to trigger a rolling restart. We could certainly enable tanzu apps to do the same across both Deployments and Knative Services (it would work the same for both).
have a local agent which detects the updated ConfigMap and then locally restarts the container. Unfortunately, that could lead to an outage, as the "restart for config update" would be spread across the ~60s window in which kubelets pick up ConfigMap updates, which might be a bit fast.
create a new ConfigMap and then explicitly update the Deployment to reference it, which would generate a new application rollout and leverage all of the existing "make a rollout safe" settings
We’d need to figure out how to represent the latter in a GitOps model, assuming that users are also interested in using GitOps to manage their higher-level application delivery.
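As a sketch of the annotation mechanism described above: the kubectl.kubernetes.io/restartedAt annotation is real kubectl behavior, but the helper below is purely illustrative of the strategic-merge patch a restart command could send.

```python
import json
from datetime import datetime, timezone

def restart_patch(now=None):
    """Build the strategic-merge patch that `kubectl rollout restart` applies.

    Bumping kubectl.kubernetes.io/restartedAt in the pod template metadata
    changes the template, which makes the controller roll out new pods; the
    same patch shape would apply to Deployments and Knative Services.
    """
    ts = (now or datetime.now(timezone.utc)).isoformat()
    return {
        "spec": {
            "template": {
                "metadata": {
                    "annotations": {"kubectl.kubernetes.io/restartedAt": ts}
                }
            }
        }
    }

# The JSON body a CLI could send via the Kubernetes API's PATCH verb:
print(json.dumps(restart_patch(datetime(2023, 1, 2, 3, 4, 5, tzinfo=timezone.utc))))
```

Because the patch only touches the pod template, the rollout respects the workload's existing rollout strategy and surge/unavailability settings.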
Additional context

Comment/concern to address from @paulcwarren: technically, though, this leans on Kubernetes quite a lot, and on the fact that it will restart things to get back to a "desired" state. Force-killing processes can also potentially cause bad things to happen. So we'd need to consider the implementation of that command carefully and make sure it is really what we want to do; for example, we could prompt the user for confirmation, with an info/warning about the potential negative outcomes of running it.
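A minimal sketch of that confirmation prompt; the function name, wording, and behavior here are hypothetical, not an existing tanzu apps feature.

```python
def confirm_restart(workload, reader=input):
    """Hypothetical confirmation gate for a `workload restart` command.

    Warns about the consequences of restarting before proceeding; returns
    True only on an explicit 'y'/'yes' answer, defaulting to no.
    """
    print(f"Warning: restarting workload '{workload}' will terminate its "
          "running pods; in-flight requests may be dropped.")
    answer = reader("Really continue? [y/N] ").strip().lower()
    return answer in ("y", "yes")
```

A `--yes`-style flag (also hypothetical) could skip the prompt for scripted or CI use, as many CLIs do for destructive operations.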