Goldpinger is too sensitive to autoscaler activity #87

Open
dharmab opened this issue May 26, 2020 · 5 comments

Comments

dharmab commented May 26, 2020

Describe the bug

The order of operations for removing a node in Kubernetes is:

  1. The cluster autoscaler removes the VM from the underlying cloud provider
  2. The Node object enters the NotReady state, since the kubelet process stops reporting
  3. The cloud controller eventually notices that the VM is gone and removes the Node object

The time between steps 2 and 3 can be quite long (many minutes in some clouds). Goldpinger keeps trying to reach the node during this window, causing spikes in Goldpinger metrics.
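
For illustration, here is a minimal client-go sketch (a hypothetical standalone program, not Goldpinger code) that lists Node objects which still exist in the API but whose Ready condition is no longer True, and reports how long they have been in that state, which is roughly the window between steps 2 and 3 above:

```go
// Hypothetical standalone sketch (not part of Goldpinger): list Node objects
// that still exist in the API but are no longer Ready, and report how long
// they have been in that state.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	for _, node := range nodes.Items {
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status != corev1.ConditionTrue {
				// The Node object is still present (step 3 has not happened yet),
				// but the kubelet has stopped reporting Ready (step 2).
				fmt.Printf("%s has been Ready=%s for %s\n",
					node.Name, cond.Status,
					time.Since(cond.LastTransitionTime.Time).Round(time.Second))
			}
		}
	}
}
```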

To Reproduce
Steps to reproduce the behavior:

  1. Overscale a cluster in a cloud with long deletion times such as Azure
  2. Allow cluster to scale down
  3. Observe peer failures in Goldpinger metrics and logs

Expected behavior

Goldpinger should provide a mechanism to filter NotReady nodes out of its metric queries, so that reporting focuses on Nodes which are expected to be functioning normally.
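
A minimal sketch of the kind of filter this could use, assuming client-go types (the filterReadyNodes helper is hypothetical, not existing Goldpinger code):

```go
// Hypothetical helper sketching the requested filter; not existing Goldpinger code.
package nodefilter

import (
	corev1 "k8s.io/api/core/v1"
)

// filterReadyNodes keeps only the Nodes whose Ready condition is True, so that
// peers on Nodes which have stopped reporting (for example VMs already removed
// by the autoscaler) are excluded from checks and metrics.
func filterReadyNodes(nodes []corev1.Node) []corev1.Node {
	ready := make([]corev1.Node, 0, len(nodes))
	for _, node := range nodes {
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				ready = append(ready, node)
				break
			}
		}
	}
	return ready
}
```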

Screenshots

Here's an example showing Goldpinger error rates spiking as a cluster scaled down over a period of hours.

[Screenshot: Goldpinger error rate spiking as the cluster scaled down]

Environment (please complete the following information):

  • Operating System and Version: N/A
  • Browser [e.g. Firefox, Safari] (if applicable): N/A



dharmab commented May 26, 2020

Here's another case: a cluster had a mass autoscaler scale-in event, which caused Goldpinger to incorrectly report that 23% of the network was failing.

[Two screenshots: Goldpinger reporting widespread peer failures during the scale-in]

dntosas commented Jul 27, 2020

Hello people!

We have the same issue ^^ but filtering out a node just because its status is NotReady isn't quite right: a node might be NotReady not because of a scale-down, but because it is legitimately broken.

So we'd better filter nodes on a different label/attribute, such as SchedulingDisabled or something similar. What do you think?
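
For reference, the SchedulingDisabled status that kubectl shows corresponds to Spec.Unschedulable on the Node object, and the cluster autoscaler also taints nodes it is about to delete. A rough sketch of that alternative filter (hypothetical helper, assuming client-go types; not Goldpinger code):

```go
// Hypothetical helper sketching the alternative filter suggested above.
package nodefilter

import (
	corev1 "k8s.io/api/core/v1"
)

// isBeingRemoved reports whether a Node looks like it is being drained or
// scaled down rather than genuinely broken: it is cordoned (shown by kubectl
// as SchedulingDisabled) or carries the ToBeDeletedByClusterAutoscaler taint
// that the cluster autoscaler adds before removing the VM.
func isBeingRemoved(node corev1.Node) bool {
	if node.Spec.Unschedulable {
		return true
	}
	for _, taint := range node.Spec.Taints {
		if taint.Key == "ToBeDeletedByClusterAutoscaler" {
			return true
		}
	}
	return false
}
```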

danV111 commented Oct 26, 2021

Hello!

Any updates on this one?

rbtr commented Nov 9, 2021

#107 should help: after the node eviction timeout, pods go into the Terminating state and Goldpinger will stop trying to reach them.
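
For context, a pod is in the Terminating state once its DeletionTimestamp is set, so a check along these lines (a hypothetical sketch, not the actual #107 change) is enough to skip such peers:

```go
// Hypothetical sketch of skipping terminating peers; not the actual #107 change.
package peerfilter

import (
	corev1 "k8s.io/api/core/v1"
)

// isTerminating reports whether a pod has been marked for deletion, for example
// after the node eviction timeout expires for a NotReady node. Such peers can
// be skipped instead of being counted as ping failures.
func isTerminating(pod corev1.Pod) bool {
	return pod.DeletionTimestamp != nil
}
```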

danV111 commented Nov 11, 2021

@rbtr thank you. I will give it a go and come back with some feedback in the next few days.
