Large memory use #30
Thanks! Sounds like the same numba memory leak I encountered in the past. Can you check if it goes away with method='medfilt'?
Hey Michael, thank you for the response. I just checked: method='median' (method='medfilt' raised an error) and method='mean' do not seem to have a significant memory leak. 500 iterations using method='mean' took about 150 MB of memory (although that memory does seem to stay allocated until Python is restarted). Specifying method='biweight' (or just not setting the method, which I believe then defaults to 'biweight') seems to be the cause of the issue.
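For reference, switching away from the default is a one-keyword change. A sketch with placeholder arrays (per the numbers above, 'median' and 'mean' do not show the large leak):

```python
# Per this thread, method='median' (or 'mean') sidesteps the leak seen
# with the default 'biweight'. time/flux are placeholder arrays here.
import numpy as np
from wotan import flatten

time = np.linspace(0, 30, 60000)
flux = 1 + 1e-4 * np.random.randn(60000)
flatten_lc = flatten(time, flux, window_length=0.5, method='median')
```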
Thanks! I have narrowed the problem down to a bug in numba, which is known and discussed here. It will probably be fixed in the next numba release. In the meantime, I will try to implement a workaround and will post updates on the progress here. And yes, giving no method defaults to the biweight.
Mhm, I'm trying to put together a simple example that shows the leak, but I can't reproduce it. Can you run this code on your machine to check whether it leaks? Perhaps it only happens with certain versions of numba, numpy, etc.
Test code which doesn't leak:
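A minimal sketch of such a test (the synthetic arrays and the psutil readout are assumptions, not necessarily the exact snippet):

```python
# Synthetic light curve, repeated biweight detrending, and a resident-memory
# readout per iteration. Array sizes mirror the ~60,000-point Kepler curves
# mentioned in this thread; psutil is one way to watch process memory.
import numpy as np
import psutil
from wotan import flatten

process = psutil.Process()
time = np.linspace(0, 30, 60000)
flux = 1 + 1e-4 * np.random.randn(60000)

for i in range(100):
    flatten_lc = flatten(time, flux, window_length=0.5, method='biweight')
    print(i, round(process.memory_info().rss / 1e6), 'MB')
```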
Great software! I'm running into a memory issue, though, that I was hoping you could help with. I have a simple for loop with on the order of 50-500 iterations, and inside each one I run wotan.flatten on a Kepler light curve (long 30-minute cadence, so arrays of ~60,000 elements). The aim is to assess how good different detrending methods are.
I am finding that, according to the Activity Monitor, a very large amount of memory is being taken up by this, on the order of tens of GB. It is also making it impossible to Ctrl-C out of the Python loop.
Even single calls to wotan.flatten take about 100 MB of memory per run, and that memory does not seem to get released until I restart Python.
Have you encountered this before? Is there an easy way of clearing up the memory? I have tried garbage collection (gc.collect()) without luck. A simplified sketch of the loop is below.
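The synthetic arrays and the method list here stand in for the actual light curves and detrenders under test:

```python
# Simplified sketch of the loop described above. `time` and `flux` stand in
# for one Kepler long-cadence light curve (~60,000 points); the method list
# is illustrative.
import gc
import numpy as np
from wotan import flatten

time = np.linspace(0, 90, 60000)
flux = 1 + 1e-4 * np.random.randn(60000)

for method in ['biweight', 'median', 'mean']:
    for i in range(500):
        flatten_lc, trend_lc = flatten(time, flux, window_length=0.5,
                                       method=method, return_trend=True)
        # ... assess the quality of trend_lc here ...
    gc.collect()  # does not release the accumulated memory
```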
Thanks!