Bug: Queue length doesn't seem right #218
Celery prefetches 4 messages per process by default. Each worker can have multiple processes, and you could be running multiple workers. See: https://docs.celeryq.dev/en/stable/userguide/optimizing.html#prefetch-limits
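To make the effect concrete, here is a small arithmetic sketch of how many messages prefetching can pull off the broker queue; the worker/process counts below are illustrative, not from this deployment:

```python
# Why the broker queue can read lower than expected: each worker process
# prefetches worker_prefetch_multiplier messages (Celery's default is 4),
# and those messages leave the broker queue while they wait in worker memory.
def prefetched_messages(workers: int, processes_per_worker: int,
                        prefetch_multiplier: int = 4) -> int:
    """Messages held in worker memory rather than on the broker queue."""
    return workers * processes_per_worker * prefetch_multiplier

# e.g. 3 workers with 8 processes each can hold 96 messages off-queue
print(prefetched_messages(3, 8))  # → 96
```

So even a busy system can show a near-zero broker queue length if the workers keep up with the prefetched backlog.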
Hello, @danihodovic! First of all, thank you for such fantastic software; it is really awesome. We've been using this exporter to monitor our Celery deployment, and everything works like a charm except for the queue length; I will attach an image of how the dashboard looks. I find it hard to believe that the queue length is always zero, and maybe it's related to this issue reported by @michaelschem. We're using Django + Celery + Redis with events enabled, and it works perfectly (hence the other data being accurate); it's only that specific queue-length metric, always zero, that seems suspicious to me given the other values. We're using only one queue, named "celery" (which I assume is the default). Thanks in advance!
Hi Humberto, can you confirm the queue length using
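One way to cross-check the metric, assuming the default Redis broker (where a Celery queue is a Redis list keyed by the queue name, so `LLEN` returns the number of waiting messages). The `queue_length` helper is a hypothetical sketch; `client` stands in for any object with an `llen()` method, such as `redis.Redis()`:

```python
# Sketch: reading the queue length directly from the Redis broker.
# With Celery's default Redis transport, a queue is a Redis list whose key
# is the queue name, so LLEN "celery" counts the waiting messages.
def queue_length(client, queue: str = "celery") -> int:
    """Return the number of messages waiting on the given queue."""
    return client.llen(queue)
```

The shell equivalent is `redis-cli llen celery`; comparing that against the exporter's metric shows whether the zero is real or a reporting gap.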
Hi, @danihodovic! Thank you for your response. The response I'm getting from
And from the
Maybe this StackOverflow question describes what's happening? Basically, the queue is processed so fast that its length is always zero when sampled? That would be a happy problem :) Thanks in advance :)
That's my experience in one of the larger projects where we use Celery. We scale between 10 and 20 workers, and the queue length almost always stays at 0. @adinhodovic has more context.
I'm doing a bit of load testing, and I'm seeing roughly correct numbers for tasks sent, but the tasks don't always show up in the queue length. The tasks are short (~10s), so maybe this is a queue-length polling-interval issue?
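The polling hypothesis above can be sketched with a toy simulation: if tasks enter and leave the queue between two of the exporter's samples, a point-in-time length reads zero even though tasks are flowing through. The timings and the `sampled_lengths` helper are illustrative, not taken from the exporter's code:

```python
# Toy model of point-in-time queue sampling. Each event is (timestamp, delta)
# where delta is +1 for an enqueue and -1 for a dequeue; sampling only sees
# the queue depth at the sample instants, not what happened in between.
def sampled_lengths(events, sample_times):
    """Return the queue depth observed at each sample time."""
    observed = []
    for t in sample_times:
        depth = sum(delta for ts, delta in events if ts <= t)
        observed.append(depth)
    return observed

# A task enqueued at t=1.0 and consumed at t=1.5 is invisible to a
# scraper sampling every 10 seconds:
events = [(1.0, +1), (1.5, -1)]
print(sampled_lengths(events, [0, 10, 20]))  # → [0, 0, 0]
```

In that model the "tasks sent" counter still increments (it counts events, not depth), which matches seeing correct sent numbers alongside a flat-zero queue length.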