We are seeing an issue, possibly with the Nexus chart itself. We have an existing deployment running 3 replicas and attempt to perform an upgrade. The upgrade starts terminating one of the replicas, but as soon as that pod terminates it spins up a new pod on the latest version before the other two old pods have scaled down, and then fails with the following error:
nxrm-app java.lang.IllegalStateException: unable to perform clustered deployment for node 918cbdc0-4908-4064-8da9-ac480d579eb3, found inconsistencies:
nxrm-app There are other Nexus Repository instance(s) that are currently running in this HA cluster with a different version. Please stop these instances.
This forces us to add an extra kubectl scale --replicas 0 statefulset/nexus-nxrm-ha -n $NAMESPACE command and a wait period to our deploy job so that all of the old pods scale down before the upgrade starts creating new pods. Are we missing a values setting for HA that, on an upgrade, scales the old pods down properly before it commences creating new pods?
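For reference, the extra step in our deploy job looks roughly like the sketch below. The label selector, release name, and helm upgrade invocation are illustrative placeholders and need to match your own setup; only the scale command is taken verbatim from what we run today.

```sh
# Scale the existing StatefulSet to zero so no old-version Nexus pods remain.
kubectl scale --replicas 0 statefulset/nexus-nxrm-ha -n "$NAMESPACE"

# Wait until every pod from the old release has actually terminated.
# (The label selector here is an assumption; use whatever labels your release applies.)
kubectl wait --for=delete pod \
  --selector=app.kubernetes.io/instance=nexus \
  -n "$NAMESPACE" --timeout=300s

# Only then run the upgrade, so all pods come back up on the new version together.
# (Release and chart names are placeholders.)
helm upgrade nexus sonatype/nxrm-ha -n "$NAMESPACE" -f values.yaml
```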
Thanks.