Traceback (most recent call last):
  File "/home/MimicTalk/inference/app_mimictalk_2.py", line 169, in infer_once_args
    out_name = self.infer_once(inp)
  File "/home/MimicTalk/inference/real3d_infer.py", line 187, in infer_once
    out_name = self.forward_system(samples, inp)
  File "/home/MimicTalk/venv/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/MimicTalk/inference/real3d_infer.py", line 567, in forward_system
    self.forward_audio2secc(batch, inp)
  File "/home/MimicTalk/venv/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/MimicTalk/inference/real3d_infer.py", line 418, in forward_audio2secc
    batch = self.get_driving_motion(batch['id'], batch['exp'], batch['euler'], batch['trans'], batch, inp)
  File "/home/MimicTalk/venv/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/MimicTalk/inference/real3d_infer.py", line 439, in get_driving_motion
    batch['drv_secc'] = drv_secc_colors.cuda()
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 14.57 GiB. GPU 0 has a total capacity of 15.63 GiB of which 11.55 GiB is free. Including non-PyTorch memory, this process has 4.07 GiB memory in use. Of the allocated memory 2.60 GiB is allocated by PyTorch, and 807.97 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
Inference ERROR: CUDA out of memory. Tried to allocate 14.57 GiB. GPU 0 has a total capacity of 15.63 GiB of which 11.55 GiB is free. Including non-PyTorch memory, this process has 4.07 GiB memory in use. Of the allocated memory 2.60 GiB is allocated by PyTorch, and 807.97 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
Failed to generate
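The failing line is a single `.cuda()` call in `get_driving_motion` that tries to move all of `drv_secc_colors` (~14.57 GiB) to a GPU with only 11.55 GiB free, so the `PYTORCH_CUDA_ALLOC_CONF` fragmentation hint from the error message cannot help on its own. A minimal sketch of one possible workaround is to consume the driving frames in chunks so that only one slice is resident on the GPU at a time. The helper below is hypothetical (it is not part of MimicTalk's API), and the chunk size and tensor shapes are assumptions:

```python
import torch

def apply_in_chunks(frames: torch.Tensor, fn, chunk_size: int = 32) -> torch.Tensor:
    """Hypothetical helper: run `fn` over `frames` chunk by chunk along dim 0,
    keeping only one chunk on the GPU at a time and gathering results on CPU."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    outputs = []
    for start in range(0, frames.shape[0], chunk_size):
        part = frames[start:start + chunk_size].to(device)
        outputs.append(fn(part).cpu())
        del part  # drop the GPU copy before transferring the next chunk
    return torch.cat(outputs, dim=0)
```

Applying this in `real3d_infer.py` would mean feeding `drv_secc_colors` to the downstream model slice by slice instead of `drv_secc_colors.cuda()` in one shot; whether the rest of the pipeline tolerates per-chunk processing depends on the model and is not guaranteed.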