Memory usage increases with subject and threadpool scheduler #582
Comments
If you wait until the program completes, does the memory go down? It looks like the code generates items faster than they are processed in the thread pool. There is an unbounded queue that schedules items between the source and the threads in the ThreadPoolScheduler, so memory grows until the source completes and the threads can catch up.
Yes, I ran a script that processes 1 million source items without the time.sleep, and the memory is released over time. My question is: I'm using the same setup (subject + thread pool) in a Python application where the frequency of source emissions and processing can be very high. Maybe I should use multiple observables (multiple subjects)? Thanks. @MainRo
You need to handle backpressure in your application. Unfortunately, there is no built-in solution for this in RxPY. You can try this library: I do not know if @MichaelSchneeberger still maintains it.
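One simple workaround, sketched below under the assumption of RxPY 3.x imports (this is not an RxPY feature, just a manual bound on in-flight items), is to have the producer acquire a `threading.Semaphore` before each `on_next` and release it once the item has been processed:

```python
# Manual backpressure sketch (assumption, not part of RxPY): cap the
# number of items sitting in the scheduler's queue with a semaphore.
import threading
import time

from rx import operators as ops
from rx.scheduler import ThreadPoolScheduler
from rx.subject import Subject

MAX_IN_FLIGHT = 100
slots = threading.Semaphore(MAX_IN_FLIGHT)

scheduler = ThreadPoolScheduler(max_workers=4)
subject = Subject()


def handle(i):
    time.sleep(0.01)   # simulated slow work
    slots.release()    # free a slot once the item has been processed
    return i


subject.pipe(
    ops.observe_on(scheduler),
    ops.map(handle),
).subscribe()

for i in range(1_000_000):
    slots.acquire()    # blocks the producer while MAX_IN_FLIGHT items are pending
    subject.on_next(i)

subject.on_completed()
```

This keeps memory bounded at the cost of blocking the producer; a library with real backpressure semantics would be a cleaner solution.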
Hi,
I'm using RxPY for an event-driven application, and after some stress tests I noticed strange behaviour in memory usage.
I've created a simple script to reproduce the issue.
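The original snippet isn't reproduced here, but a minimal sketch of the kind of setup described (a Subject observed on a ThreadPoolScheduler, with a producer loop emitting much faster than the workers consume), assuming RxPY 3.x imports and a hypothetical `slow_work` handler, looks like this:

```python
# Minimal sketch (assumed reconstruction, not the original script):
# a Subject pushed from a tight loop, with processing moved onto a
# ThreadPoolScheduler via observe_on.
import time

from rx import operators as ops
from rx.scheduler import ThreadPoolScheduler
from rx.subject import Subject


def slow_work(i):
    # Simulate a handler that is slower than the producer.
    time.sleep(0.01)
    return i


scheduler = ThreadPoolScheduler(max_workers=4)
subject = Subject()

subject.pipe(
    ops.observe_on(scheduler),
    ops.map(slow_work),
).subscribe(on_next=lambda i: None)

# Producer loop: without a time.sleep(0.1) here, items queue up inside
# the scheduler faster than they are processed and memory grows.
for i in range(1_000_000):
    subject.on_next(i)

subject.on_completed()
```

Running something like this under memory-profiler (e.g. `mprof run` / `mprof plot`) shows the behaviour described below.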
I'm using the memory-profiler library to trace the memory usage of this script; after 30 seconds of running, the result is the following:
The memory is growing linearly.
If I place a time.sleep(0.1) inside the for loop, the memory usage is steady.
I don't know if the problem is related to Python / RxPY or if I'm using the library in the wrong way.
PS: the memory usage is steady if I get rid of the scheduler and everything is executed on the main thread.