
torch.OutOfMemoryError: CUDA out of memory on 16 GB VRAM #49

Open
linhcentrio opened this issue Dec 24, 2024 · 0 comments
```
Traceback (most recent call last):
  File "/home/MimicTalk/inference/app_mimictalk_2.py", line 169, in infer_once_args
    out_name = self.infer_once(inp)
  File "/home/MimicTalk/inference/real3d_infer.py", line 187, in infer_once
    out_name = self.forward_system(samples, inp)
  File "/home/MimicTalk/venv/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/MimicTalk/inference/real3d_infer.py", line 567, in forward_system
    self.forward_audio2secc(batch, inp)
  File "/home/MimicTalk/venv/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/MimicTalk/inference/real3d_infer.py", line 418, in forward_audio2secc
    batch = self.get_driving_motion(batch['id'], batch['exp'], batch['euler'], batch['trans'], batch, inp)
  File "/home/MimicTalk/venv/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/MimicTalk/inference/real3d_infer.py", line 439, in get_driving_motion
    batch['drv_secc'] = drv_secc_colors.cuda()
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 14.57 GiB. GPU 0 has a total capacity of 15.63 GiB of which 11.55 GiB is free. Including non-PyTorch memory, this process has 4.07 GiB memory in use. Of the allocated memory 2.60 GiB is allocated by PyTorch, and 807.97 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
Inference ERROR: CUDA out of memory. Tried to allocate 14.57 GiB.
Failed to generate
```
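For context on possible workarounds: the failure happens at `drv_secc_colors.cuda()`, which tries to move the entire driving-motion tensor (14.57 GiB) to the GPU in a single allocation. Two things a reader could try are (a) the `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` setting that the error message itself suggests, and (b) moving/processing the tensor in smaller chunks so no single transfer requests the full tensor at once. The sketch below is hypothetical and not part of the MimicTalk codebase; `to_device_in_chunks` and the tensor shapes are illustrative assumptions, not the project's actual API.

```python
import os
# Suggested by the error message itself; must be set before CUDA is initialized.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")

import torch

def to_device_in_chunks(frames, device, chunk_size=64):
    # Hypothetical workaround: move a large per-frame tensor to the GPU in
    # slices, so no single transfer asks for the whole tensor's worth of VRAM.
    outputs = []
    for start in range(0, frames.shape[0], chunk_size):
        chunk = frames[start:start + chunk_size].to(device)
        # ... per-chunk processing would go here ...
        outputs.append(chunk.cpu())  # bring results back to free GPU memory
        del chunk
    return torch.cat(outputs, dim=0)

device = "cuda" if torch.cuda.is_available() else "cpu"
frames = torch.zeros(500, 3, 64, 64)  # stand-in for drv_secc_colors; real shape unknown
out = to_device_in_chunks(frames, device)
print(out.shape)  # torch.Size([500, 3, 64, 64])
```

Whether chunking is feasible here depends on whether downstream code in `get_driving_motion` actually needs the full `drv_secc` tensor resident on the GPU at once.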
