
Onnxruntime not found or doesn't come with acceleration providers #75

Closed
Sostay opened this issue Oct 15, 2023 · 52 comments

Comments

@Sostay

Sostay commented Oct 15, 2023

What is the problem? It seems that OpenCV is being used instead of the accelerated path. Does anyone know how to solve it?

E:\Stable Diffusion\ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\dwpose.py:26: UserWarning: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device (super slow)
warnings.warn("Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device (super slow)")
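For context, the check behind this warning can be sketched roughly like this (an approximation of what node_wrappers/dwpose.py does, not its actual code; the provider names are standard ONNX Runtime identifiers):

```python
import warnings

def pick_backend():
    """Use onnxruntime only when an accelerated provider is available;
    otherwise warn and fall back to OpenCV on the CPU."""
    try:
        import onnxruntime as ort
        accelerated = {"CUDAExecutionProvider", "ROCMExecutionProvider",
                       "CoreMLExecutionProvider", "DmlExecutionProvider"}
        if accelerated & set(ort.get_available_providers()):
            return "onnxruntime"
    except ImportError:
        pass
    warnings.warn("Onnxruntime not found or doesn't come with acceleration "
                  "providers, switch to OpenCV with CPU device (super slow)")
    return "opencv-cpu"

print(pick_backend())
```

So the warning fires both when onnxruntime is missing entirely and when only the CPU build is installed.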

@Fannovel16
Owner

It's just a replacement for the old "DWPose doesn't support CUDA out-of-the-box" warning

@Fannovel16
Owner

Changed the warning at 3c8cfd3

@Sostay
Author

Sostay commented Oct 15, 2023

It's just a replacement for the old "DWPose doesn't support CUDA out-of-the-box" warning

E:\Stable Diffusion\ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\dwpose.py:26: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly
warnings.warn("DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly")

@Sostay
Author

Sostay commented Oct 15, 2023

(screenshot)

@Fannovel16 Fannovel16 reopened this Oct 15, 2023
@Aria-Victoria

Looks CUDA related. Just ran into this myself. The ONNX Runtime site only lists support for CUDA 11.8. New ComfyUI is using CUDA 12.1 (I think, since that's what it's downloading now).

Not sure this will get fixed until ONNX Runtime does something on their side.

@Sostay
Author

Sostay commented Oct 16, 2023

After installing onnxruntime-gpu 1.16.1:

DWPose: Onnxruntime with acceleration providers detected. Caching sessions (might take around half a minute)...
EP Error D:\a_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "E:\Stable_Diffusion\ComfyUI\python_embeded\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll"
when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.

@Fannovel16
Owner

@Sostay "Falling back" is not an error

@Sostay
Author

Sostay commented Oct 16, 2023

@Sostay "Falling back" is not an error

After installing onnxruntime (whether the GPU or the CPU version), there is an error message. Does that mean there's no need to install onnxruntime, and I should just ignore the fallback?

@Fannovel16
Owner

just ignore the fallback?

As I said, "fallback" is not an error, but if the log contains "Failed to create CUDAExecutionProvider" or "Failed to create ROCMExecutionProvider", then it is
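A quick way to tell the two cases apart (a diagnostic sketch; it needs no model file) is to list which providers the installed onnxruntime build actually exposes:

```python
# If only CPU-side providers show up (e.g. just 'CPUExecutionProvider'),
# the install has no acceleration provider and DWPose will run on the CPU.
try:
    import onnxruntime as ort
    providers = ort.get_available_providers()
except ImportError:
    providers = []

print(providers or "onnxruntime is not installed")
```

A GPU-enabled install should list something like 'CUDAExecutionProvider' here even before any session is created.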

@CapoFortuna

CapoFortuna commented Oct 16, 2023

I installed it like this:
(screenshot)

and I get this error when I open Comfy:

(screenshot)

I have the same problem; DWPose keeps using the CPU.

I'm on Comfy portable cu121

@za-wa-n-go

same

@kenny-kvibe

kenny-kvibe commented Oct 17, 2023

+1
I've installed TensorRT and downgraded torch to use cu118 and also reinstalled onnxruntime-gpu.
InvokeAI still uses cu118, and Comfy also works normally with it.

No errors or fallbacks:
(screenshot)

I did this because there's no cu121 listed here nor any of the 12.x versions:
https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements

@illuculent

Same here... but:
Windows 11

C:\Users\booty>nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:41:10_Pacific_Daylight_Time_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0

onnx 1.14.1
onnxruntime-gpu 1.16.1

ComfyUI Revision: 1587 [f8caa24b] | Released on '2023-10-17'

... which I thought was supposed to be compatible with ONNXRuntime.


@haqthat

haqthat commented Oct 27, 2023

+1 I've installed TensorRT and downgraded torch to use cu118 and also reinstalled onnxruntime-gpu. InvokeAI still uses cu118, and Comfy also works normally with it.

No errors nor fallbacks: image

I did this because there's no cu121 listed here nor any of the 12.x versions: https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements

any tips on how to do this safely with the portable version?

@kenny-kvibe

+1 I've installed TensorRT and downgraded torch to use cu118 and also reinstalled onnxruntime-gpu. InvokeAI still uses cu118, and Comfy also works normally with it.
No errors nor fallbacks: image
I did this because there's no cu121 listed here nor any of the 12.x versions: https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements

any tips on how to do this safely with the portable version?

Portable version of? Safe from what?
Please elaborate.

@haqthat

haqthat commented Oct 28, 2023 via email

@kenny-kvibe

kenny-kvibe commented Oct 28, 2023

Make sure you have everything in the system PATH variable.
Or, if you don't want it in the system PATH, create a launcher script that modifies PATH only for its own process:

Windows (batch)
File: run_comfy.bat

@ECHO off
SETLOCAL

SET "PATH=X:\path\to\missing\files;%PATH%"
CD /D "%~dp0ComfyUI"
python main.py

ENDLOCAL
EXIT /B 0

Linux (bash)
File: run_comfy.sh - if you can't run it, add execute permissions with chmod +x run_comfy.sh

#!/usr/bin/env bash
cd "$(dirname "$0")/ComfyUI"
PATH="/path/to/missing/files:$PATH" python main.py
exit 0

And place this script in the same folder where your ComfyUI folder is.


Do this ^ if it says that some program you installed is missing or not found.

I don't know your exact issue, so my answers are based on what I think it is.
Take some time to look through the terminal when running Comfy; it'll tell you everything that's wrong, and you can go from there.

@Persite007

Persite007 commented Nov 2, 2023

@Fannovel16 like you explained in another post, I added to comfyui_controlnet_aux/requirements.txt:
onnxruntime-gpu
onnxruntime-directml
onnxruntime-openvino

:) Now I have acceleration on both; CPU and GPU run at 100%, and the fans do too...
But there is still an error at startup with onnxruntime_providers_openvino.dll.
I'm not a developer and I don't know how to fix it.

DWPose: Onnxruntime with acceleration providers detected. Caching sessions (might take around half a minute)...
EP Error C:\Users\Administrator\Desktop\validation_1.16\onnxruntime\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\onnxruntime\capi\onnxruntime_providers_openvino.dll"
when using ['OpenVINOExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CPUExecutionProvider'] and retrying.
EP Error C:\Users\Administrator\Desktop\validation_1.16\onnxruntime\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\onnxruntime\capi\onnxruntime_providers_openvino.dll"
when using ['OpenVINOExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CPUExecutionProvider'] and retrying.

Not a double copy/paste; the same error is shown twice like this.

Full startup:

D:\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
** ComfyUI start up time: 2023-11-02 09:11:24.926201

Prestartup times for custom nodes:
0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager

Total VRAM 12287 MB, total RAM 49135 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
VAE dtype: torch.bfloat16
Using pytorch cross attention

Loading: ComfyUI-Impact-Pack (V4.28.2)

Loading: ComfyUI-Impact-Pack (Subpack: V0.3)

Loading: ComfyUI-Manager (V0.36)

ComfyUI Revision: 1636 [e73ec8c4] | Released on '2023-11-01'

Registered sys.path: ['D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\init.py', 'D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_pycocotools', 'D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_oneformer', 'D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mmpkg', 'D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_midas_repo', 'D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_detectron2', 'D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux', 'D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src', 'D:\ComfyUI_windows_portable\ComfyUI\comfy', 'D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\git\ext\gitdb', 'D:\ComfyUI_windows_portable\ComfyUI', 'D:\ComfyUI_windows_portable\python_embeded\python310.zip', 'D:\ComfyUI_windows_portable\python_embeded', 'D:\ComfyUI_windows_portable\python_embeded\lib\site-packages', 'D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\win32', 'D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\win32\lib', 'D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\Pythonwin', 'D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules', 'D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\impact_subpack', '../..']
DWPose: Onnxruntime with acceleration providers detected. Caching sessions (might take around half a minute)...
EP Error C:\Users\Administrator\Desktop\validation_1.16\onnxruntime\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\onnxruntime\capi\onnxruntime_providers_openvino.dll"
when using ['OpenVINOExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CPUExecutionProvider'] and retrying.
EP Error C:\Users\Administrator\Desktop\validation_1.16\onnxruntime\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\onnxruntime\capi\onnxruntime_providers_openvino.dll"
when using ['OpenVINOExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CPUExecutionProvider'] and retrying.
DWPose: Sessions cached
FizzleDorf Custom Nodes: Loaded
[tinyterraNodes] Loaded

Import times for custom nodes:
0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus
0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\IPAdapter-ComfyUI
0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet
0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-animatediff
0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved
0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite
0.1 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_tinyterraNodes
0.5 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_FizzNodes
0.7 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
1.2 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack
3.6 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux

Starting server

To see the GUI go to: http://127.0.0.1:8188
FETCH DATA from: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json

@kyledam

kyledam commented Nov 4, 2023

+1 I've installed TensorRT and downgraded torch to use cu118 and also reinstalled onnxruntime-gpu. InvokeAI still uses cu118, and Comfy also works normally with it.

No errors nor fallbacks: image

I did this because there's no cu121 listed here nor any of the 12.x versions: https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements

How do I downgrade CUDA from 12.1 to 11.8?

@kenny-kvibe

kenny-kvibe commented Nov 4, 2023

+1 I've installed TensorRT and downgraded torch to use cu118 and also reinstalled onnxruntime-gpu. InvokeAI still uses cu118, and Comfy also works normally with it.
No errors nor fallbacks: image
I did this because there's no cu121 listed here nor any of the 12.x versions: https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements

How to downgrade the Cuda from 12.1 to 11.8?

Activate your virtual environment, uninstall torch, then install torch+cu118 with the command from https://pytorch.org/

  • I am not a ComfyUI or PyTorch developer
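After reinstalling, a quick check (a minimal sketch; it degrades gracefully when torch is absent) confirms that the cu118 build is actually the one being imported:

```python
# Print the torch build and its CUDA version; a cu118 install should
# report a version like "2.1.1+cu118" with CUDA "11.8".
try:
    import torch
    report = f"{torch.__version__} (CUDA {torch.version.cuda})"
except ImportError:
    report = "torch is not installed"

print(report)
```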

@Layer-norm
Contributor

Hello, this is not an error; it's just that TensorRT doesn't natively support these models. Maybe you can find the answer in issue #82.

@Lin-coln

Does it support acceleration on Apple silicon?

I got the information when I startup comfyUI:

/comfyui_controlnet_aux/node_wrappers/dwpose.py:24: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly
  warnings.warn("DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly")

Then I installed onnxruntime-silicon, which is the onnxruntime build for Apple silicon:

https://github.com/cansik/onnxruntime-silicon

but onnxruntime still cannot be found.

  • macOS: Sonoma 14.0
  • CPU: Apple M1 Pro

@skytsui

skytsui commented Nov 22, 2023

Comfyroll Custom Nodes: Loaded
[comfyui_controlnet_aux] | INFO -> Using ckpts path: /home/sky/ComfyUI/custom_nodes/comfyui_controlnet_aux/ckpts
/home/sky/ComfyUI/custom_nodes/comfyui_controlnet_aux/node_wrappers/dwpose.py:24: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly
warnings.warn("DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly")
/home/sky/ComfyUI/custom_nodes/failfast-comfyui-extensions/extensions
/home/sky/ComfyUI/web/extensions/failfast-comfyui-extensions
WAS Node Suite: BlenderNeko's Advanced CLIP Text Encode found, attempting to enable CLIPTextEncode support.
WAS Node Suite: CLIPTextEncode (BlenderNeko Advanced + NSP) node enabled under WAS Suite/Conditioning menu.
WAS Node Suite: OpenCV Python FFMPEG support is enabled
WAS Node Suite: ffmpeg_bin_path is set to: /usr/bin/ffmpeg
WAS Node Suite: Finished. Loaded 198 nodes successfully.

@dreammachineai

dreammachineai commented Nov 26, 2023

I resolved this by installing CUDA v11.8 side-by-side with my current CUDA (v12.3) and:

  • Reinstalling PyTorch for CUDA 11.8 within my virtual environment
pip uninstall torch torchvision torchaudio
pip install torch==2.1.1+cu118 torchvision==0.16.1+cu118 torchaudio==2.1.1+cu118 -f https://download.pytorch.org/whl/torch_stable.html
  • Installing onnxruntime-gpu
pip install onnxruntime-gpu

Now I see DWPose: Onnxruntime with acceleration providers detected 🎉

More detailed walk-through on civitai.com

@tianleiwu

tianleiwu commented Dec 9, 2023

To use CUDA 12.* instead of 11.8, you can try installing a nightly binary like the following (for Python 3.8–3.11):

pip install coloredlogs flatbuffers numpy packaging protobuf sympy
pip install ort-nightly-gpu --index-url=https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ort-cuda-12-nightly/pypi/simple/

@davyzhang

pip install onnxruntime-gpu

saved my life, thank you!

@davyzhang

For latecomers: here's how to enable GPU acceleration on CUDA 12.x.

Track this issue for version changes: microsoft/onnxruntime#13932
Runtime nightly: https://dev.azure.com/onnxruntime/onnxruntime/_artifacts/feed/onnxruntime-cuda-12
ORT nightly: https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ort-cuda-12-nightly

#with cu12.*
pip install coloredlogs flatbuffers numpy packaging protobuf sympy
pip install ort-nightly-gpu --index-url=https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ort-cuda-12-nightly/pypi/simple/
pip install onnxruntime-gpu==1.17.0 --index-url=https://pkgs.dev.azure.com/onnxruntime/onnxruntime/_packaging/onnxruntime-cuda-12/pypi/simple/

@Fannovel16 Fannovel16 pinned this issue Jan 8, 2024
@izonewonyoung

I resolved this by installing PyTorch v11.8 side-by-side with my current CUDA (v12.3) and:

  • Reinstalling PyTorch for CUDA 11.8 within my virtual environment
pip uninstall torch torchvision torchaudio
pip install torch==2.1.1+cu118 torchvision==0.16.1+cu118 torchaudio==2.1.1+cu118 -f https://download.pytorch.org/whl/torch_stable.html
  • Installing onnxruntime-gpu
pip install onnxruntime-gpu

Now I see DWPose: Onnxruntime with acceleration providers detected 🎉

More detailed walk-through on civitai.com

My device is an RTX 3080 Ti, which matches CUDA 11.7, but I found that the onnxruntime package only has CUDA 11.8 and 11.6 versions. I followed the steps and it doesn't work. What should I do?

@tianleiwu

@izonewonyoung, pip install onnxruntime-gpu should work with CUDA 11.6–11.8 on Windows and Linux. Please make sure you also install the other dependencies, like the latest cuDNN for CUDA 11; Windows also needs the latest VC redistributable DLLs.

@Zakutu

Zakutu commented Jan 31, 2024

EP Error A:\a_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\ComfyUI\python_embeded\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll"
when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.

I am having the above problem when using rembg with ComfyUI and it is running very slow. Has this been resolved now?

@tianleiwu

tianleiwu commented Jan 31, 2024

@Zakutu, if you intend to use the TensorRT EP, please install TensorRT 8.6.1 for CUDA 11 (since the official onnxruntime-gpu is built for CUDA 11 right now).

Please refer to https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/transformers/models/stable_diffusion/. It is a demo of using TRT EP (or CUDA EP) with stable diffusion.

@techpink

For anyone reading this thread looking for a solution for Apple Silicon, try cansik/onnxruntime-silicon.

Install:
pip install onnxruntime-silicon

On start up:

[comfyui_controlnet_aux] | INFO -> Using ckpts path: /Users/griff/ComfyUI/custom_nodes/comfyui_controlnet_aux/ckpts
[comfyui_controlnet_aux] | INFO -> Using symlinks: False
[comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
DWPose: Onnxruntime with acceleration providers detected

Running the DWPose Estimator on a 512x768 image (M1 Max/Sonoma 14.1.2):

DWPose: Using yolox_l.onnx for bbox detection and dw-ll_ucoco_384_bs5.torchscript.pt for pose estimation
DWPose: Caching ONNXRuntime session yolox_l.onnx...
DWPose: Caching TorchScript module dw-ll_ucoco_384_bs5.torchscript.pt on ...
DWPose: Bbox 436.91ms
DWPose: Pose 383.29ms on 1 people

@Zakutu

Zakutu commented Feb 1, 2024

EP Error A:\a_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\ComfyUI\python_embeded\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll" when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.      I am having the above problem when using rembg with comfyUI and it is running very slow. Has this been resolved now?

→ Solved it. The cause was a Python version conflict in the same environment.

Existing Python → 3.10.6
Python 3.11, CUDA 12.x → error
Python 3.10.6, CUDA 11.8, latest cuDNN → OK

After rebuilding ComfyUI portable on the cu118 version, it ran without any errors or warnings.

▶ Download the older version from the release assets:

https://github.com/comfyanonymous/ComfyUI/releases

@worstpractice

worstpractice commented Feb 5, 2024

It seems CUDA 12 packages came out just three days ago (as of this writing).

All I had to do to make it work was to install the CUDA 12 version of the ONNX runtime.

Hope this helps! 🙏

Some background

I'm running:

Windows 10 Pro: 10.0.19045
Python: 3.11.6
Pip: 23.3.2
GPU: NVIDIA GeForce GTX 980 Ti (🙈)

If I activate my venv and run python -c "import torch; print(torch.__version__); print(torch.version.cuda)", I get:

2.1.2+cu121
12.1

@Layer-norm
Contributor

Layer-norm commented Feb 6, 2024

If somebody has trouble with onnxruntime 1.17.0 and onnxruntime-gpu 1.17.0, you can try installing them separately (the non-GPU version first, then the GPU version):
#242 (comment)

@Acelya-9028

Acelya-9028 commented Feb 6, 2024

It seems CUDA 12 packages came out just three days ago (as of this writing).

All I had to do to make it work was to install the CUDA 12 version of the ONNX runtime.

Hope this helps! 🙏

Some background

I'm running:

Windows 10 Pro: 10.0.19045
Python: 3.11.6
Pip: 23.3.2
GPU: NVIDIA GeForce GTX 980 Ti (🙈)

If I activate my venv and run python -c "import torch; print(torch.__version__); print(torch.version.cuda)", I get:

2.1.2+cu121
12.1

Thank you very much, this solved the warning!

@gateway

gateway commented Feb 21, 2024

Any way to get this to run on the Windows ComfyUI portable? I'm running 12.3 as well.

@craftogrammer

craftogrammer commented Feb 27, 2024

pip install onnxruntime-gpu==1.17.0 --index-url=https://pkgs.dev.azure.com/onnxruntime/onnxruntime/_packaging/onnxruntime-cuda-12/pypi/simple/

Run these with the embedded Python's pip. For example, mine is CUDA 12.3:

pip install coloredlogs flatbuffers numpy packaging protobuf sympy

pip install ort-nightly-gpu --index-url=https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ort-cuda-12-nightly/pypi/simple/

pip install onnxruntime-gpu==1.17.0 --index-url=https://pkgs.dev.azure.com/onnxruntime/onnxruntime/_packaging/onnxruntime-cuda-12/pypi/simple/

Check the screenshot below.


@tianleiwu

See https://onnxruntime.ai/docs/install/

You can install like the following:

pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/

@f3dboys

f3dboys commented Feb 28, 2024

for late comer: here's the way to enable gpu accelerate on cuda12.x

track the issue here for version changes: microsoft/onnxruntime#13932 runtime nightly: https://dev.azure.com/onnxruntime/onnxruntime/_artifacts/feed/onnxruntime-cuda-12 ort nightly:https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ort-cuda-12-nightly

#with cu12.*
pip install coloredlogs flatbuffers numpy packaging protobuf sympy
pip install ort-nightly-gpu --index-url=https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ort-cuda-12-nightly/pypi/simple/
pip install onnxruntime-gpu==1.17.0 --index-url=https://pkgs.dev.azure.com/onnxruntime/onnxruntime/_packaging/onnxruntime-cuda-12/pypi/simple/

THX!!

@wangwenqiao666

I resolved this by installing PyTorch v11.8 side-by-side with my current CUDA (v12.3) and:

  • Reinstalling PyTorch for CUDA 11.8 within my virtual environment
pip uninstall torch torchvision torchaudio
pip install torch==2.1.1+cu118 torchvision==0.16.1+cu118 torchaudio==2.1.1+cu118 -f https://download.pytorch.org/whl/torch_stable.html
  • Installing onnxruntime-gpu
pip install onnxruntime-gpu

Now I see DWPose: Onnxruntime with acceleration providers detected 🎉

More detailed walk-through on civitai.com

It doesn't solve the problem

@xiexiaobo135

Another scenario: you have installed both the onnxruntime and onnxruntime-gpu packages, and the CPU build runs by default. Just uninstall onnxruntime and keep the GPU version. I hope this helps!
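One way to spot that situation (a sketch using only the standard library; the distribution names are as published on PyPI):

```python
from importlib import metadata

# List which onnxruntime distributions are installed; having more than one
# means the plain CPU build can shadow the accelerated build at import time.
found = []
for dist in ("onnxruntime", "onnxruntime-gpu",
             "onnxruntime-directml", "onnxruntime-silicon"):
    try:
        found.append(f"{dist}=={metadata.version(dist)}")
    except metadata.PackageNotFoundError:
        pass

print(found or "no onnxruntime build installed")
if len(found) > 1:
    print("Multiple builds present: uninstall the plain 'onnxruntime' package")
```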

@biosfakemail13

biosfakemail13 commented May 24, 2024

[Fix-Tip] Long story short: it works with CUDA 12! My problem was these 3 folders:
onnxruntime
onnxruntime_gpu-1.18.0.dist-info
onnxruntime-1.18.0.dist-info
(location: venv\Lib\site-packages)

[Fix]
I just deleted all 3 of them and re-downloaded them while in the venv using:

  1. pip install onnxruntime-gpu
  2. pip install onnxruntime

[Why]
This bug happened when I installed other custom_nodes (in my case easy-comfy-nodes) that overwrote some of comfyui_controlnet_aux's requirements, making the "DWPose might run very slowly" warning reappear.

If you don't have a venv (Python virtual environment) set up: close and exit ComfyUI, then open cmd in the main ComfyUI folder (make sure you're in the main ComfyUI directory) and type:

  1. python -m venv venv
  2. call .\venv\Scripts\activate
  3. pip install onnxruntime
  4. pip install onnxruntime-gpu
  5. good luck

@MorrisLu-Taipei

[Fix-Tip] long story short - it works with Cuda12! - my problem was this 3 folders: onnxruntime onnxruntime_gpu-1.18.0.dist-info onnxruntime-1.18.0.dist-info (location : venv\Lib\site-packages)

[Fix] i just deleted all 3 of them and RE download them when in venv mod using :

  1. pip install onnxruntime-gpu
  2. pip install onnxruntime

[Why] this bug happened when i installed other custom_nodes(in my case easy-comfy-nodes) that overwritten some of comfyui_controlnet_aux requirements (makes the - "DWPose might run very slowly" warning to re appear.

if you dont have venv (python virtual environment) installed ,close and exit comfyui then- in main confyui folder go cmd - make sure that you in main confyui directory then type this:

  1. python -m venv venv
  2. call ./venv/scripts/activate
  3. pip install onnxruntime
  4. pip install onnxruntime-gpu
  5. good luck

Doesn't work for me

@MorrisLu-Taipei

for late comer: here's the way to enable gpu accelerate on cuda12.x
track the issue here for version changes: microsoft/onnxruntime#13932 runtime nightly: https://dev.azure.com/onnxruntime/onnxruntime/_artifacts/feed/onnxruntime-cuda-12 ort nightly:https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ort-cuda-12-nightly

#with cu12.*
pip install coloredlogs flatbuffers numpy packaging protobuf sympy
pip install ort-nightly-gpu --index-url=https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ort-cuda-12-nightly/pypi/simple/
pip install onnxruntime-gpu==1.17.0 --index-url=https://pkgs.dev.azure.com/onnxruntime/onnxruntime/_packaging/onnxruntime-cuda-12/pypi/simple/

THX!!

Works for me, thx

@DanBurkhardt

DanBurkhardt commented Jun 16, 2024

The following fixed the error for me on W10, using the Windows portable version for nvidia GPUs via Powershell:

  1. cd to project root

  2. run .\python_embeded\python.exe -s -m pip install onnxruntime-gpu

You have to make sure the embedded Python distro (3.10) installs the dependency, hence invoking the embedded python.exe. pip may not install it into the right place if run from your system command-line environment.

@mufenglyf

for late comer: here's the way to enable gpu accelerate on cuda12.x

track the issue here for version changes: microsoft/onnxruntime#13932 runtime nightly: https://dev.azure.com/onnxruntime/onnxruntime/_artifacts/feed/onnxruntime-cuda-12 ort nightly:https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ort-cuda-12-nightly

#with cu12.*
pip install coloredlogs flatbuffers numpy packaging protobuf sympy
pip install ort-nightly-gpu --index-url=https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ort-cuda-12-nightly/pypi/simple/
pip install onnxruntime-gpu==1.17.0 --index-url=https://pkgs.dev.azure.com/onnxruntime/onnxruntime/_packaging/onnxruntime-cuda-12/pypi/simple/

Thx!!! This is the way.

@PopHorn

PopHorn commented Jul 4, 2024

for late comer: here's the way to enable gpu accelerate on cuda12.x

track the issue here for version changes: microsoft/onnxruntime#13932 runtime nightly: https://dev.azure.com/onnxruntime/onnxruntime/_artifacts/feed/onnxruntime-cuda-12 ort nightly:https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ort-cuda-12-nightly

#with cu12.*
pip install coloredlogs flatbuffers numpy packaging protobuf sympy
pip install ort-nightly-gpu --index-url=https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ort-cuda-12-nightly/pypi/simple/
pip install onnxruntime-gpu==1.17.0 --index-url=https://pkgs.dev.azure.com/onnxruntime/onnxruntime/_packaging/onnxruntime-cuda-12/pypi/simple/

OMFG! Thank you very much! Worked for me. 😁

@tungnguyensipher
Contributor

For me, the issue was that I had rembg installed, and rembg has onnxruntime as a dependency. This causes problems when both onnxruntime and onnxruntime-gpu are installed. Removing the non-GPU version resolved the issue, and everything works perfectly.

Note: pip uninstall onnxruntime alone was not enough. I had to manually remove all onnxruntime-related files from venv/lib/python3.10/site-packages and reinstall.

rm -rf venv/lib/python3.10/site-packages/onnxruntime* # update to match your python 
pip install onnxruntime-gpu

@tampadesignr

The following fixed the error for me on W10, using the Windows portable version for nvidia GPUs via Powershell:

1. `cd` to project root

2. run `.\python_embeded\python.exe -s -m pip install onnxruntime-gpu`

You have to make sure the embedded python distro (3.10) installs the dependency, hence the invocation using the embedded python .exe. It may not find the dep if installed using your command line environment.

this worked for me in portable
