We have a service that is exposed through the Istio ingress gateway (version 1.19.6). The ingress appears to work correctly, and the service behaves as intended.
Recently we added a Pingdom check on the service, and we noticed that around 5% of the checks failed.
After running some tests, we noticed that a small portion of requests fail, returning "Empty response from server". We increased the minimum number of Istio ingress gateway pods from 2 to 3, and now requests are served correctly 100% of the time.
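For reference, the replica bump described above can be captured declaratively in the gateway's HorizontalPodAutoscaler. This is a minimal sketch, assuming a default install where the gateway Deployment and HPA are named `istio-ingressgateway` in the `istio-system` namespace; adjust names and limits to your setup:

```yaml
# Sketch of the HPA change (resource names assume a default istioctl/Helm install).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: istio-ingressgateway
  minReplicas: 3   # raised from 2; this made the empty replies disappear
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```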
Since having an additional replica solved the issue, I assumed it had to do with a CPU bottleneck, but after checking the metrics CPU doesn't seem to be the problem: each pod's CPU usage stayed below its CPU request.
There is a significant difference in packets and bandwidth per pod before and after adding the additional pod, which could explain why some requests were not being answered. If that is the case, is this a limit in Istio, or on the node where the pod is running?
I'd like to tune the HPA to make sure the service keeps working under higher load, but I can't do that without understanding where the bottleneck in the setup is.
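To quantify the failure rate while experimenting with HPA settings, a small probe script can reproduce what the Pingdom check measures from outside the cluster. This is a rough sketch under assumptions: the target URL is passed on the command line (a placeholder, not a real endpoint), and an "empty reply" is approximated as any connection-level error rather than an HTTP error status:

```python
#!/usr/bin/env python3
"""Probe an endpoint repeatedly and report the fraction of requests that
fail at the connection level (the "Empty response from server" case)."""
import sys
import urllib.error
import urllib.request


def failure_rate(results):
    """Fraction of probes that failed; results is a list of booleans
    where True means the request succeeded."""
    if not results:
        return 0.0
    return results.count(False) / len(results)


def probe(url, attempts=100, timeout=5):
    """Hit the URL `attempts` times, recording success/failure per request."""
    results = []
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                results.append(200 <= resp.status < 400)
        except (urllib.error.URLError, ConnectionError, TimeoutError):
            # The connection dropped before a response arrived -- this is
            # the failure mode we are trying to measure.
            results.append(False)
    return results


if __name__ == "__main__" and len(sys.argv) > 1:
    res = probe(sys.argv[1])
    print(f"failure rate: {failure_rate(res):.1%} over {len(res)} requests")
```

Running this against the ingress before and after an HPA change gives a concrete number to compare against the ~5% failure rate Pingdom reported.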