Jobs not running #194
Comments
Hi @arashmortazavi2, I'm facing a very similar issue with bottleneck. I implemented a rate limiter for third-party calls, and whenever the TPS reaches its threshold, the queue gets blocked and stops processing any other jobs until I manually restart my instance. Did you ever fix this? Below is my configuration; note that I haven't set any expiration in the scheduler options. { Thanks.
I encountered a similar bottleneck problem and figured out the root cause. In my case, I was using bottleneck to rate-limit API calls to a third-party API, and my implementation had two problems. First, my Node.js application runs in multiple instances on the same device, each with its own bottleneck instance. Because of this, the instances can make calls simultaneously and together exceed the rate limit (this can be fixed by making bottleneck distributed using Redis). Second, when I got a rate-limit error, I retried by awaiting the same wrapped function. So a new job was created from inside a currently running job and pushed to the queue. The current job couldn't finish because it was awaiting the retry job, and the retry job couldn't start because the current job was still running. This creates a circular dependency, i.e. a DEADLOCK. I resolved the deadlock by calling the function directly on retry instead of the wrapped function (that way the retry is not queued).
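The circular wait described above can be shown with a self-contained sketch. This is a toy concurrency-1 limiter, NOT the bottleneck API; `TinyLimiter`, `callApi`, and `callApiFixed` are all hypothetical names invented for illustration:

```javascript
// Toy stand-in for a limiter with maxConcurrent = 1: runs a job if a slot
// is free, otherwise queues it behind the running work.
class TinyLimiter {
  constructor() {
    this.running = 0;
    this.queue = [];
  }
  schedule(fn) {
    if (this.running > 0) {
      this.queue.push(fn);
      return { queued: true }; // the caller gets no result until later
    }
    this.running += 1;
    const result = fn();
    this.running -= 1;
    // Drain anything that was queued while fn ran.
    while (this.queue.length > 0 && this.running === 0) {
      const next = this.queue.shift();
      this.running += 1;
      next();
      this.running -= 1;
    }
    return { queued: false, result };
  }
}

const limiter = new TinyLimiter();

// BAD: the retry re-enters the same limiter from inside a running job.
// The retry is queued behind the current job, which is itself waiting on
// the retry's result: a circular wait.
function callApi(attempt) {
  if (attempt === 1) {
    const retry = limiter.schedule(() => callApi(attempt + 1));
    return retry.queued ? 'DEADLOCK' : retry.result;
  }
  return 'ok';
}

// FIX: on retry, call the function directly, bypassing the queue.
function callApiFixed(attempt) {
  if (attempt === 1) {
    return callApiFixed(attempt + 1);
  }
  return 'ok';
}

const bad = limiter.schedule(() => callApi(1));
const fixed = limiter.schedule(() => callApiFixed(1));
console.log(bad.result);   // "DEADLOCK"
console.log(fixed.result); // "ok"
```

In the real async case the outer job never resolves at all; the synchronous sketch just makes the circular dependency visible as a queued-but-unrunnable retry.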
Hi all, we are also facing the same issue with our bottleneck instance, with Redis as well. Does anyone have any idea how to fix this? The response from @vikram741 doesn't help, as that is not the case in our implementation. We are doing everything per the documentation and this still happens. We are using Redis for a distributed limiter, and we are not even retrying failed tasks.
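For readers unfamiliar with the setup being discussed: a distributed (clustering-mode) limiter in bottleneck is created roughly along these lines. This is an illustrative config fragment, not anyone's actual configuration from this thread; the `id`, host, port, and rate values are placeholders:

```javascript
const Bottleneck = require('bottleneck');

// Clustering mode: every instance created with the same `id` coordinates
// its limits through the shared Redis datastore.
const limiter = new Bottleneck({
  id: 'third-party-api',   // placeholder; must match across instances
  datastore: 'ioredis',    // or 'redis', per the bottleneck README
  clearDatastore: false,
  clientOptions: { host: '127.0.0.1', port: 6379 }, // placeholder connection
  maxConcurrent: 5,
  minTime: 200,
});
```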
I'm using bottleneck (with clustering enabled) in TypeScript inside an AWS Lambda function. The library works fine for some time, until at some point jobs stop running. I see that at that point the
await requestLimiter.schedule()
call blocks until the Lambda times out. From then on, this keeps happening for incoming jobs (they get queued and don't run) for hours, until I manually flush the AWS Redis instance. Here is how I create the bottleneck instance:
This is what I see in the logs for jobs that get queued but don't execute:
No other logs for job1 afterward.
What can be the issue here? Any pointers and suggestions would be appreciated. Please let me know if you need any more info. Thanks!
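One mitigation that recurs in this thread is the missing job expiration: a job that never settles (e.g. a Lambda that was killed mid-job) can otherwise hold its slot and wedge the shared queue. As an assumption about its behavior, bottleneck's `expiration` job option acts roughly like the timeout race below; `withExpiration` and `hungJob` are hypothetical names for illustration:

```javascript
// Race a job against a deadline: if the job doesn't settle within `ms`,
// fail it instead of letting it occupy a limiter slot forever.
function withExpiration(job, ms) {
  return Promise.race([
    job(),
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error('job expired')), ms)
    ),
  ]);
}

const hungJob = () => new Promise(() => {}); // simulates a job that never settles

withExpiration(hungJob, 50)
  .then(() => console.log('completed'))
  .catch((err) => console.log(err.message)); // prints "job expired"
```

With the real library, the equivalent is setting `expiration` in the job options passed to `schedule()`, which (per the bottleneck README) is especially important in clustering mode so that dead workers' jobs are eventually dropped rather than blocking the queue.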