(deforum_xflux_env) PS D:\_AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\deforum-x-flux> python.exe run.py
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 3.60it/s]
Traceback (most recent call last):
  File "D:\_AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\deforum-x-flux\run.py", line 122, in <module>
    root.__dict__.update(ModelSetup())
  File "D:\_AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\deforum-x-flux\run.py", line 119, in ModelSetup
    model = Model("flux-dev-fp8", quantized=True)
  File "D:\_AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\deforum-x-flux\run.py", line 92, in __init__
    self.dit, self.ae, self.t5, self.clip = self.get_models(name, 'cuda', offload=False, is_schnell=False, quantized=quantized)
  File "D:\_AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\deforum-x-flux\run.py", line 95, in get_models
    t5 = load_t5(device, max_length=256 if is_schnell else 512)
  File "D:\_AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\deforum-x-flux\./x-flux\src\flux\util.py", line 315, in load_t5
    return HFEmbedder("xlabs-ai/xflux_text_encoders", max_length=max_length, torch_dtype=torch.bfloat16).to(device)
  File "D:\_AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\deforum-x-flux\deforum_xflux_env\Lib\site-packages\torch\nn\modules\module.py", line 1174, in to
    return self._apply(convert)
  File "D:\_AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\deforum-x-flux\deforum_xflux_env\Lib\site-packages\torch\nn\modules\module.py", line 780, in _apply
    module._apply(fn)
  File "D:\_AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\deforum-x-flux\deforum_xflux_env\Lib\site-packages\torch\nn\modules\module.py", line 780, in _apply
    module._apply(fn)
  File "D:\_AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\deforum-x-flux\deforum_xflux_env\Lib\site-packages\torch\nn\modules\module.py", line 805, in _apply
    param_applied = fn(param)
  File "D:\_AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\deforum-x-flux\deforum_xflux_env\Lib\site-packages\torch\cuda\__init__.py", line 305, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
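For context, this error means the installed PyTorch wheel was built without CUDA support, so `torch.cuda.is_available()` can never return True regardless of the GPU or driver. A minimal sketch of that distinction (`diagnose` is a hypothetical helper for illustration, not part of run.py; `torch.version.cuda` is `None` on CPU-only wheels):

```python
def diagnose(cuda_build_version, gpu_present):
    """Return a short diagnosis string.

    cuda_build_version: the value of torch.version.cuda
                        (None for CPU-only wheels)
    gpu_present: whether an NVIDIA GPU/driver is visible to the system
    """
    if cuda_build_version is None:
        # Matches this issue: no amount of driver fixing helps,
        # the wheel itself must be replaced with a CUDA build.
        return "CPU-only build: reinstall a CUDA-enabled PyTorch wheel"
    if not gpu_present:
        return "CUDA build, but no GPU/driver detected"
    return "CUDA build %s and GPU detected" % cuda_build_version

print(diagnose(None, True))
# → CPU-only build: reinstall a CUDA-enabled PyTorch wheel
```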
I tried:

import torch
print(torch.cuda.is_available())

but it returns False...
Any ideas?
import torch
print(torch.cuda.is_available())
After completing these steps, you should be able to run your script without encountering the "Torch not compiled with CUDA enabled" error.
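The check above can also be written as a guarded sketch that reports which case you are in (the `importlib` guard is my addition, so the snippet also runs in an environment without torch installed):

```python
import importlib.util

if importlib.util.find_spec("torch") is None:
    print("torch is not installed in this environment")
else:
    import torch
    # torch.version.cuda is None on CPU-only wheels; a CUDA wheel
    # reports the CUDA toolkit version it was built against.
    print("CUDA build:", torch.version.cuda)
    print("CUDA available:", torch.cuda.is_available())
```

If "CUDA build" prints None, the environment has a CPU-only wheel and reinstalling a CUDA-enabled build is the fix this comment describes.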
I can get further on Windows.
I have batch files here to install and run it. #2 (comment)
It does run super slow on Windows, though, which I'm hoping a dev can help with eventually.
I am stuck at these errors. I tried the check above, but it returns False... Any ideas?