cudaMallocAsync does not yet support checkPoolLiveAllocations. If you need it, please file an issue describing your use case. #132
It works when I set torch.compile to False, but that is not ideal. Maybe my torch version is too new and it manages memory allocation differently.
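For context: the error in the issue title is raised by PyTorch's `cudaMallocAsync` allocator backend, which does not implement the pool-inspection check that CUDA graph capture (used by `torch.compile` in reduce-overhead mode) relies on. A hedged workaround sketch, assuming the async backend was selected via `PYTORCH_CUDA_ALLOC_CONF`, is to switch back to the default native allocator before torch is imported:

```python
import os

# PYTORCH_CUDA_ALLOC_CONF is read once when the CUDA caching allocator
# initializes, so it must be set before `import torch` runs anywhere.
# "backend:native" selects PyTorch's default caching allocator instead of
# cudaMallocAsync, which does not support checkPoolLiveAllocations.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "backend:native"

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```

This only helps if the environment (or a launcher script) had previously set the allocator backend to `cudaMallocAsync`; if the variable was never set, the native allocator is already the default.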
The "Profiler function" warning should be 100% harmless. I am not sure where forking may be used during inference. Were you able to figure out the cause of the issue? |
Not really. I think it is because my torch setup is newer than what WhisperSpeech expects and it handles memory allocation differently. I can run very slowly on the CPU by disabling torch_compile in the pipeline arguments, but that is not ideal. I will try other PyTorch environments, but this is definitely not good, since other things I need to use require the current version of PyTorch I am on.
The first message I wrote had a warning to do with JAX and CPython, and also about multithreading, so that might be a factor too. Another thing: it also looks like it has trouble on Windows.
|
Just in case, here is the log. The first attempt is successful because I set torch_compile to False, but the second time, with it set to True, the error has to do with Dynamo and Triton; I don't know why. Here is the full log:

Starting server
To see the GUI go to: http://127.0.0.1:8188
<IPython.lib.display.Audio object>
Moviepy - Done !
|
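Since the second failure mentions Dynamo and Triton, a quick sanity check is whether the packages `torch.compile`'s default Inductor backend depends on are even importable in the environment. A minimal stdlib-only sketch (package names only; it does not import or run them):

```python
import importlib.util

# torch.compile's Inductor backend JIT-compiles GPU kernels with Triton;
# a missing or version-mismatched Triton install commonly surfaces as
# Dynamo/Inductor errors at compile time, especially on Windows/WSL.
for pkg in ("torch", "triton"):
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'found' if found else 'missing'}")
```

If `triton` is missing or was built for a different torch version, falling back to eager mode (torch_compile=False) is the usual, if slow, workaround.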
Hi, I'm getting the same issue. Any fix for this? |
I always get this error.
WSL Ubuntu, CUDA 12.1