Large number of pending pods for GESIS server #2995
@arnim my feeling is that something changed after the machine reboot, but I can't see a correlation. We cleaned the image cache a few times, but only some of those cleanups line up with a spike in pulls.
cc @arnim The number of pending pods at the GESIS server has reduced over the last 24 hours. My impression is that the pending pods are waiting for an image to become available, because the image needs to be built on our server, pushed to Docker Hub, and downloaded again to our server. The peak of pending pods correlates in time with the peak of build pods.
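For context, the round-trip described above looks roughly like this (a sketch; `gesis/binder-example:sha-abc123` is a hypothetical image name):

```shell
# 1. BinderHub builds the image locally on the GESIS node
docker build -t gesis/binder-example:sha-abc123 .
# 2. The image is pushed to Docker Hub
docker push gesis/binder-example:sha-abc123
# 3. The node pulls it back down before the launch pod can start
docker pull gesis/binder-example:sha-abc123
```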
Do you think we need to add an additional limit to BinderHub for the number of pending spawns, to prevent too many pods being created or queued up?
My hypothesis is that BinderHub receives a launch request and allocates a new Kubernetes pod for the launch. Because the image required by the pod does not exist yet, the pod goes into the Pending phase while BinderHub adds a build request to the queue. During some periods the number of new image builds exceeds the GESIS server's capacity, and the pending pods start to accumulate.
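A quick way to check this on the cluster (a sketch, assuming `kubectl` access to the `gesis` namespace) is to count pods stuck in the Pending phase:

```shell
# Count pods currently in the Pending phase in the gesis namespace
kubectl get pods -n gesis --field-selector=status.phase=Pending \
  --no-headers | wc -l
```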
I'm still puzzled about why we have such big peaks of pending pods. I understand a few (fewer than 10) pods pending because of the network, for example when an image is larger than usual, but I don't understand almost 40 pods pending at the same time. I checked the metrics and did not find any correlation. @manics @sgibson91 do you have any clue where I should look for a correlation? Thanks!
Is a particular spike of pods related to the same repository? Assuming your Prometheus labels are the same as on Curvenote, try a Prometheus query for a time window around the spike.
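The same check can also be done with `kubectl` alone, grouping Pending pods by the image they are waiting for (a sketch, assuming single-container launch pods; a large count for one image points at a single repository):

```shell
# Group Pending pods by the image they are waiting on
kubectl get pods -n gesis --field-selector=status.phase=Pending \
  -o jsonpath='{range .items[*]}{.spec.containers[0].image}{"\n"}{end}' \
  | sort | uniq -c | sort -rn
```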
Thanks @manics.
Yes, there is: a large number of the pending pods were for the same repository. My assumption is that someone is giving a lecture or course, and 10+ learners access mybinder.org at the same time. Because the server does not have the Docker image cached, all 10+ users stay in the Pending state until the Docker image is downloaded. I'm closing this as "won't fix". cc @arnim
@arnim This is my current understanding of the problem. The short video highlights one pod in the scenario described above: https://github.com/user-attachments/assets/131cda9f-c9c6-4195-b219-0d7a48e217ff
You're right... easily reproducible. If the image doesn't exist then you should get an image pull error (ErrImagePull / ImagePullBackOff) while the pod stays Pending. I found this issue: kubernetes/kubernetes#121435
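A minimal reproduction sketch (the image name is hypothetical; the pod's phase stays Pending while the kubelet retries the pull):

```shell
# Launch a pod whose image cannot be pulled
kubectl run pending-test --image=example.com/does-not-exist:latest
# Phase stays Pending; STATUS shows ErrImagePull / ImagePullBackOff
kubectl get pod pending-test -o jsonpath='{.status.phase}'
kubectl get pod pending-test
# Clean up
kubectl delete pod pending-test
```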
I'm closing this in favour of #3056.
This started around 8:00 am CEST on 5 June 2024.
OVH
GESIS
```shell
kubectl get -n gesis pods --sort-by='{.metadata.creationTimestamp}' | grep Terminating
```

produces a long list of pods stuck in Terminating. This is because the GESIS server is not able to download the Docker images fast enough.
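If pods genuinely hang in Terminating, one common escape hatch (a sketch; it skips graceful shutdown, so use with care) is to force-delete them:

```shell
# Force-delete pods stuck in Terminating in the gesis namespace
kubectl get -n gesis pods --no-headers | awk '$3=="Terminating" {print $1}' \
  | xargs -r kubectl delete -n gesis pod --grace-period=0 --force
```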
CurveNote
Not affected yet.