free_upper_bound + pytorch_used_bytes[device] <= device_total INTERNAL ASSERT FAILED at "C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\c10\\cuda\\CUDAMallocAsyncAllocator.cpp":540, please report a bug to PyTorch. #161

Open

maiqunshan opened this issue Dec 24, 2024 · 0 comments
ComfyUI Error Report

Error Details

  • Node ID: 3
  • Node Type: XlabsSampler
  • Exception Type: RuntimeError
  • Exception Message: free_upper_bound + pytorch_used_bytes[device] <= device_total INTERNAL ASSERT FAILED at "C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\c10\cuda\CUDAMallocAsyncAllocator.cpp":540, please report a bug to PyTorch.

Stack Trace

  File "C:\ComfyUI-aki-v1.3\execution.py", line 328, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "C:\ComfyUI-aki-v1.3\execution.py", line 203, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "C:\ComfyUI-aki-v1.3\execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)

  File "C:\ComfyUI-aki-v1.3\execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))

  File "C:\ComfyUI-aki-v1.3\custom_nodes\x-flux-comfyui\nodes.py", line 458, in sampling
    x = denoise_controlnet(

  File "C:\ComfyUI-aki-v1.3\custom_nodes\x-flux-comfyui\sampling.py", line 282, in denoise_controlnet
    block_res_samples = container.controlnet(

  File "C:\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "C:\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)

  File "C:\ComfyUI-aki-v1.3\custom_nodes\x-flux-comfyui\xflux\src\flux\controlnet.py", line 175, in forward
    controlnet_cond = self.input_hint_block(controlnet_cond)

  File "C:\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "C:\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)

  File "C:\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\container.py", line 250, in forward
    input = module(input)

  File "C:\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "C:\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)

  File "C:\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\conv.py", line 554, in forward
    return self._conv_forward(input, self.weight, self.bias)

  File "C:\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\conv.py", line 549, in _conv_forward
    return F.conv2d(
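
The failing check lives in the cudaMallocAsync allocator backend. Notably, the report's arguments include --disable-cuda-malloc, yet the device line below still shows cudaMallocAsync, so the async backend appears to be active anyway. A minimal sketch of forcing the native caching allocator instead, assuming PYTORCH_CUDA_ALLOC_CONF is honored (it must be set before the first CUDA allocation):

```python
import os

# Sketch of a possible workaround, not a confirmed fix for this issue:
# "backend:native" selects PyTorch's own caching allocator, which bypasses
# the CUDAMallocAsyncAllocator.cpp code path where this assert fires.
# The variable is read at CUDA initialization, so set it before any CUDA
# work is done (safest: before importing torch at all).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "backend:native"

import torch

torch.empty(1, device="cuda")  # first CUDA allocation picks up the backend
```

ComfyUI's --disable-cuda-malloc flag is supposed to have the same effect, which makes the cudaMallocAsync entry under Devices worth double-checking.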

System Information

  • ComfyUI Version: v0.3.9
  • Arguments: C:\ComfyUI-aki-v1.3\main.py --auto-launch --preview-method auto --disable-smart-memory --disable-cuda-malloc
  • OS: nt
  • Python Version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
  • Embedded Python: false
  • PyTorch Version: 2.5.1+cu124

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 4070 Ti SUPER : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 17170956288
    • VRAM Free: 127304818
    • Torch VRAM Total: 18220056576
    • Torch VRAM Free: 127304818
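
Note that the numbers above already break the asserted invariant: Torch VRAM Total (18220056576 bytes) exceeds VRAM Total (17170956288 bytes), i.e. the allocator's bookkeeping claims more memory than the device physically has. A hypothetical diagnostic (not part of the original report) to reproduce these readings on the affected machine:

```python
import torch

# Hypothetical check, not from the report: compare the driver's view of
# device memory (cudaMemGetInfo) with the allocator's own bookkeeping.
# The failed assert is essentially:
#   free_upper_bound + pytorch_used_bytes[device] <= device_total
free, total = torch.cuda.mem_get_info(0)  # bytes, as reported by the driver
print(f"driver: free={free}, total={total}")
print(f"allocator: reserved={torch.cuda.memory_reserved(0)}, "
      f"allocated={torch.cuda.memory_allocated(0)}")
```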