default OmniGen prompt without image inputs fails with 8GB VRAM #35

Open
grschneider opened this issue Dec 11, 2024 · 3 comments

@grschneider

ComfyUI Error Report

Error Details

  • Node ID: 1
  • Node Type: ailab_OmniGen
  • Exception Type: torch.OutOfMemoryError
  • Exception Message: Allocation on device

Stack Trace

  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-OmniGen\ailab_OmniGen.py", line 387, in generation
    raise e

  File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-OmniGen\ailab_OmniGen.py", line 353, in generation
    output = pipe(
             ^^^^^

  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-OmniGen\OmniGen\pipeline.py", line 215, in __call__
    self.model.to(dtype)

  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1340, in to
    return self._apply(convert)
           ^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 900, in _apply
    module._apply(fn)

  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 900, in _apply
    module._apply(fn)

  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 927, in _apply
    param_applied = fn(param)
                    ^^^^^^^^^

  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1326, in convert
    return t.to(
           ^^^^^
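
The trace ends in self.model.to(dtype) in OmniGen/pipeline.py: the pipeline casts the already-loaded model to the selected precision on the GPU. nn.Module.to(dtype) allocates a fresh tensor for each parameter before the old one is freed, so on a card that is already ~7.4 GB full (see the log below) the cast itself can run out of memory even if the downcast model alone would fit. A minimal sketch of a common workaround, assuming model is a plain nn.Module already resident on cuda:0 (the node's real loading code may differ):

    import torch
    import torch.nn as nn

    def cast_then_move(model: nn.Module, dtype: torch.dtype) -> nn.Module:
        # Cast in system RAM first, so the GPU never holds the weights
        # in two precisions at once.
        model = model.to("cpu").to(dtype=dtype)
        # Release cached blocks left behind by the old GPU copy.
        torch.cuda.empty_cache()
        # Only the already-downcast weights are moved back to the GPU.
        return model.to("cuda")

For example, cast_then_move(model, torch.bfloat16) keeps peak VRAM near the size of the downcast weights, rather than the full-precision weights plus per-parameter conversion overhead.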

System Information

  • ComfyUI Version: v0.3.7-14-g7a7efe8
  • Arguments: ComfyUI\main.py --windows-standalone-build
  • OS: nt
  • Python Version: 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
  • Embedded Python: true
  • PyTorch Version: 2.5.1+cu124

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 4060 Laptop GPU : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 8585216000
    • VRAM Free: 7418675200
    • Torch VRAM Total: 0
    • Torch VRAM Free: 0

Logs

2024-12-11T20:22:43.169731 - [START] Security scan
2024-12-11T20:22:43.879467 - [DONE] Security scan
2024-12-11T20:22:43.989289 - ## ComfyUI-Manager: installing dependencies done.
2024-12-11T20:22:43.989289 - ** ComfyUI startup time: 2024-12-11 20:22:43.989289
2024-12-11T20:22:44.019291 - ** Platform: Windows
2024-12-11T20:22:44.019291 - ** Python version: 3.12.7 (tags/v3.12.7:0b05ead, Oct  1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
2024-12-11T20:22:44.019291 - ** Python executable: C:\ComfyUI_windows_portable\python_embeded\python.exe
2024-12-11T20:22:44.019291 - ** ComfyUI Path: C:\ComfyUI_windows_portable\ComfyUI
2024-12-11T20:22:44.019291 - ** Log path: C:\ComfyUI_windows_portable\comfyui.log
2024-12-11T20:22:44.799278 - Prestartup times for custom nodes:
2024-12-11T20:22:44.799278 -    1.6 seconds: C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
2024-12-11T20:22:47.949411 - Total VRAM 8188 MB, total RAM 16104 MB
2024-12-11T20:22:47.949411 - pytorch version: 2.5.1+cu124
2024-12-11T20:22:47.949411 - Set vram state to: NORMAL_VRAM
2024-12-11T20:22:47.949411 - Device: cuda:0 NVIDIA GeForce RTX 4060 Laptop GPU : cudaMallocAsync
2024-12-11T20:22:48.914182 - Using pytorch cross attention
2024-12-11T20:22:50.199241 - [Prompt Server] web root: C:\ComfyUI_windows_portable\ComfyUI\web
2024-12-11T20:22:50.624636 - ### Loading: ComfyUI-Manager (V2.55.4)
2024-12-11T20:22:50.709293 - ### ComfyUI Revision: 2904 [7a7efe84] *DETACHED | Released on '2024-12-11'
2024-12-11T20:22:50.869570 - Import times for custom nodes:
2024-12-11T20:22:50.869570 -    0.0 seconds: C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-OmniGen
2024-12-11T20:22:50.869570 -    0.0 seconds: C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
2024-12-11T20:22:50.869570 -    0.2 seconds: C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
2024-12-11T20:22:50.879340 - Starting server
2024-12-11T20:22:50.879340 - To see the GUI go to: http://127.0.0.1:8188
2024-12-11T20:22:50.953508 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2024-12-11T20:22:50.969330 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2024-12-11T20:22:51.009192 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2024-12-11T20:22:51.022378 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2024-12-11T20:22:51.039404 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2024-12-11T20:22:51.561406 - FETCH DATA from: C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
2024-12-11T20:23:00.956580 - got prompt
2024-12-11T20:23:00.959872 - OmniGen code already exists
2024-12-11T20:23:00.959872 - OmniGen models verified successfully
2024-12-11T20:23:01.813152 - Auto selecting FP8 (Available VRAM: 8.0GB)
2024-12-11T20:23:01.814149 - Current model instance: None
2024-12-11T20:23:01.814149 - Current model precision: None
2024-12-11T20:25:56.789717 - Loading safetensors
2024-12-11T20:27:13.591399 - Warning: Error moving pipeline to device: Allocation on device, using original pipeline
2024-12-11T20:27:13.605838 - VRAM usage after pipeline creation: 7361.87MB
2024-12-11T20:27:13.606630 - Processing with prompt: Create an image of a 20-year-old woman looking directly at the viewer, with a neutral or friendly expression.
2024-12-11T20:27:13.607472 - Model will be kept during generation
2024-12-11T20:27:13.846827 - Error during generation: Allocation on device
2024-12-11T20:27:13.883267 - !!! Exception during processing !!! Allocation on device
2024-12-11T20:27:14.050390 - Traceback (most recent call last):
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-OmniGen\ailab_OmniGen.py", line 387, in generation
    raise e
  File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-OmniGen\ailab_OmniGen.py", line 353, in generation
    output = pipe(
             ^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-OmniGen\OmniGen\pipeline.py", line 215, in __call__
    self.model.to(dtype)
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1340, in to
    return self._apply(convert)
           ^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 900, in _apply
    module._apply(fn)
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 900, in _apply
    module._apply(fn)
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 927, in _apply
    param_applied = fn(param)
                    ^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1326, in convert
    return t.to(
           ^^^^^
torch.OutOfMemoryError: Allocation on device 

2024-12-11T20:27:14.053841 - Got an OOM, unloading all loaded models.
2024-12-11T20:27:14.084267 - Prompt executed in 253.13 seconds

Attached Workflow

Please make sure that the workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":2,"last_link_id":1,"nodes":[{"id":2,"type":"SaveImage","pos":[1215.8509521484375,-279.6911926269531],"size":[315,58],"flags":{},"order":1,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":1}],"outputs":[],"properties":{},"widgets_values":["ComfyUI"]},{"id":1,"type":"ailab_OmniGen","pos":[736.5493774414062,-352.0583801269531],"size":[400,428],"flags":{},"order":0,"mode":0,"inputs":[{"name":"image_1","type":"IMAGE","link":null,"shape":7},{"name":"image_2","type":"IMAGE","link":null,"shape":7},{"name":"image_3","type":"IMAGE","link":null,"shape":7}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[1],"slot_index":0}],"properties":{"Node name for S&R":"ailab_OmniGen"},"widgets_values":["20yo woman looking at viewer","","Auto","Balanced",3.5,1.8,50,true,false,512,512,14284504594426,"randomize",1024]}],"links":[[1,1,0,2,0,"IMAGE"]],"groups":[],"config":{},"extra":{"ds":{"scale":1.0152559799477197,"offset":[-446.64230447547857,504.8965655982001]}},"version":0.4}

Additional Context

I read somewhere that the FP8 model should work fine with 8GB of VRAM. For me it does not.
Any recommendations?

@Eldar-H-1

I have the same problem.

@MajorTom100

MajorTom100 commented Jan 6, 2025

Same problem here, and I also have 8 GB of VRAM.

@Willian7004

It works on the latest version. It takes 4.1 GB of VRAM when using the FP8 model and Memory Priority mode.
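
For anyone hitting this on an 8 GB card: "Memory Priority" style modes typically keep the weights in system RAM and move them onto the GPU only for the duration of a generation. A hypothetical sketch of that pattern, reusing the pipe and pipe.model names from the stack trace above (the node's actual implementation may differ):

    import torch

    def generate_with_offload(pipe, **kwargs):
        # Move the weights onto the GPU only for the forward passes.
        pipe.model.to("cuda")
        try:
            return pipe(**kwargs)
        finally:
            # Park the weights back in system RAM and release cached
            # VRAM as soon as generation finishes (or fails).
            pipe.model.to("cpu")
            torch.cuda.empty_cache()

This trades generation speed for headroom, since the weights cross the PCIe bus on every call, which is consistent with the mode's "Memory Priority" name.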
