About Olive support. #149
Replies: 36 comments 141 replies
-
I get an error. I will delete this comment when it's fixed, or where else should I put it? The suggested commands don't fix it: import torch_directml
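A quick way to confirm a failing torch_directml import, assuming the torch-directml package is what's missing (my assumption, not stated in the comment), is to run this in the same environment that launches the webui:

```python
# Assumed diagnostic, not from the thread: verify that torch-directml is
# importable in the environment that starts the webui.
#   pip install torch-directml
import torch_directml

print(torch_directml.device())  # a DirectML device such as privateuseone:0
```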
-
Bruh, the model conversion is confusing; I lost my models and outputs in their folders. Even when I use XUI to convert my models, it still won't work because it requires opt_config.json. And one model (majicmix reality), which I got from Hugging Face, does work, but it gives only black squares.
-
Here are some errors that I encountered when using Olive (console logs attached).
-
I've seen multiple updates to Olive.py over the last few weeks; could you post changelogs or something? I am very interested in Olive, since it's way better for VRAM.
-
I've been trying to do this for 3 days now... still no luck. I can't really understand where "Checkpoint file name" pulls from, so I can point it at the right folder?
-
Hi @lshqqytiger, after successful conversion and optimization, generating an image gives me this error: NotImplemented: [ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Failed to find kernel for GroupNorm(1) (node GroupNorm_0). Kernel not found
-
Hi @lshqqytiger, what does this error mean? Creating ONNX pipeline...
-
Still having this unet error:
ERROR LOG: *** Error completing request
====================================================
COMPLETE LOG AFTER ONNX OPTIMIZATION:
Downloading (…)ain/model_index.json: 100%|…| 541/541 [00:00<00:00, 462kB/s]
Optimizing text_encoder
Optimizing unet
Optimizing vae_decoder
Optimizing vae_encoder
Creating ONNX pipeline...
-
Getting this error when trying to optimize any model
-
I'm so impressed with your work! Where can I donate to you? I want to send 10€ via PayPal!
-
I'm getting an import error, but only when starting up with Olive. I tried conda list and it says I'm actually on version 21.0 for accelerate, but every time I run this, it says I'm on 18..? I did downgrade to torch 1.13.1 and still get the same thing. It is important to note that the guide I followed at first was this: https://community.amd.com/t5/gaming/how-to-running-optimized-automatic1111-stable-diffusion-webui-on/ba-p/625585. When I get to step 4, right after I run 'python stable_diffusion.py --interactive --num_images 2' (that worked too, for clarification), I then follow these steps and get the error above: 'Run the Automatic1111 WebUI with the Optimized Model. Launch a new Anaconda/Miniconda terminal window.' As soon as I run webui.bat --onnx --backend directml, the error appears. Any help would be appreciated. Thanks a million. Edit: I added where I found the instructions.
-
Okay... I don't know how, why, when, who, what universe, or what kind of magic changed... but I can run it with 'python launch.py --onnx --backend directml --skip-install' now. I need to investigate and play with it further, but it's working now. I did randomly get curious and typed 'pip install accelerate==0.20.3' and it said the below part (which makes no sense). And then it gave me that 18.0 error all over again... I don't understand, AT ALL, but I will continue to test and see what comes of it... Thanks again!
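One hedged guess at the version mismatch described above (not confirmed in the thread) is that conda is reporting one environment while the webui runs in another. A quick check:

```python
# Assumed diagnostic: print which interpreter is running and which
# accelerate version it actually imports, to catch environment mismatches
# between what conda reports and what the webui's runtime sees.
import sys
import accelerate

print(sys.executable)          # reveals which (conda) environment is active
print(accelerate.__version__)  # the version the webui really imports
```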
-
Getting this error whenever I try to optimize a model
-
I previously created an Olive ONNX version of the standard 1.5 SD model and was able to use it successfully once I copied it to the "ONNX-Olive" folder. Now there are a few errors before the first run, but it seems everything is working all right. Can I keep using this, or should I recreate it from scratch?
Also, is there any way or some place for people to get pre-converted ONNX models? It would be very convenient.
-
I'd be happy to optimize some SDXL models and share them, but I haven't had any luck yet. I don't understand how to set up the optimize screens, I suppose. Do I include the full folder path, or only the names?
Basic info:
Model I want to optimize is here:
The model's name is
I successfully used the Olive tab with the following config in Olive -> Optimize Checkpoint:
Console printed:
Now, moving on to the Optimize ONNX model tab, I can't get this step to work correctly. I have the config like this. I assume the OUTPUT of the previous step should become the INPUT of this step, and the OUTPUT of this step will be a new folder where the final model is stored? At first pass, I get these errors:
I see that it says torch_dtype=torch.float32 requires the safety checker to be false, so I uncheck Safety Checker and try again, resulting in the following:
A bit of a guess, so I try again and uncheck "Use Half Floats", resulting in the following error message:
Thank you for any help you can provide.
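For context on that float32/safety-checker message: a minimal sketch, assuming it refers to diffusers' loading options (my reading, not the webui's exact code), of loading a pipeline in full precision with the safety checker disabled:

```python
# Hedged illustration only: in diffusers, full-precision loading with the
# safety checker disabled looks roughly like this. The model id is an example.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float32,  # "Use Half Floats" unchecked
    safety_checker=None,        # "Safety Checker" unchecked
)
```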
-
Hello @lshqqytiger, I'm trying to use Olive but I keep getting errors. I get a torch_directml_native error; requirements_olive.txt is no longer in the repo, so could that be the cause?
-
I get the following when trying to convert an SDXL model on a 6950 XT: 'Unloaded weights 0.0s.' Text encoder 2 is checked. The VAE is in the vae folder. Not sure what else to do. Thank you.
-
Hi. Firstly, thank you for your work on this endeavour. Quick query: setup successful, conversion successful, generation time ca. 18.6 it/s to 20.2 it/s @ 768x768 with x4 upscale, about 1.2 s per generation. The issue is that batch size or batch count can't be increased without triggering the below:
InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Got invalid dimensions for input: encoder_hidden_states for the following indices index: 0 Got: 4 Expected: 2 Please fix either the inputs or the model.
Any advice for a complete novice would be most gratefully accepted. Thanks a million.
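That dimension error is consistent with a model exported with static input dims (see the static-dims numbers in the performance section further down). A hedged way to inspect what batch dimension the optimized UNet actually accepts, with an assumed model path:

```python
# Assumed diagnostic: list the input shapes baked into an optimized ONNX
# UNet. A fixed leading dimension (e.g. 2) means only that effective batch
# size is accepted at inference time.
import onnxruntime as ort

sess = ort.InferenceSession(
    "ONNX-Olive/my-model/unet/model.onnx",  # illustrative path
    providers=["CPUExecutionProvider"],
)
for inp in sess.get_inputs():
    print(inp.name, inp.shape)
```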
-
Win11, 6800 XT: Creating ONNX pipeline...
-
Is it expected that running with --nowebui (running the API) does not work? I didn't see it in the features list, so maybe so. I get an Internal Server Error 500 when "GET /sdapi/v1/samplers HTTP/1.1" is called. I can see models and options, but I cannot "POST /sdapi/v1/txt2img HTTP/1.1" due to 404 Not Found.
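A small way to reproduce those responses outside the browser; host and port are the webui defaults and an assumption on my part:

```python
# Hypothetical probe of the endpoints mentioned above.
import requests

base = "http://127.0.0.1:7860"
print(requests.get(f"{base}/sdapi/v1/samplers").status_code)   # 500 reported above
resp = requests.post(f"{base}/sdapi/v1/txt2img", json={"prompt": "test"})
print(resp.status_code)                                        # 404 reported above
```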
-
After doing some digging and realizing you need to use the arg --onnx and not --olive as stated in the announcement, Olive/ONNX is working well, thank you. However, I'm noticing that Generation Info is not being populated into the output images nor the text file, even though it is set in the Settings. Is this a known issue?
-
Currently, I'm focusing on integrating ONNX and Olive into SD.Next. If you want a more updated version and better compatibility/functionality, I recommend using it. Because Olive does not support the latest version of PyTorch, you should downgrade (see the sketch below).
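A hedged sanity check for that PyTorch constraint; the exact supported ceiling is my assumption, not stated in this thread:

```python
# Assumed version guard: warn if the installed PyTorch is likely newer than
# Olive supports. The 2.1.0 threshold is illustrative, not from this thread.
#   downgrade example: pip install torch==2.0.1
from packaging import version
import torch

if version.parse(torch.__version__) >= version.parse("2.1.0"):
    print(f"PyTorch {torch.__version__} may be too new for Olive; consider downgrading.")
```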
-
I'm using a 6900 XT and having some issues when I try to optimize SD-XL-base-1.0 on Windows 11. I get these errors:
*** Arguments: ('sd_xl_base_1.0_0.9vae.safetensors', '', 'vae', 'stabilityai/stable-diffusion-xl-base-1.0', 'stabilityai/stable-diffusion-xl-base-1.0', 'runwayml/stable-diffusion-v1-5', '', 'vae', 'stable-diffusion-v1-5', 'stable-diffusion-v1-5', True, True, True, True, True, False, True, True, True, True, 'euler', True, 1024, False, '', '', '') {}
Traceback (most recent call last):
File "C:\Users\markv\git\stable-diffusion-webui-directml\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "C:\Users\markv\git\stable-diffusion-webui-directml\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "C:\Users\markv\git\stable-diffusion-webui-directml\modules\ui.py", line 2080, in optimize
return optimize_sdxl_from_ckpt(
File "C:\Users\markv\git\stable-diffusion-webui-directml\modules\sd_olive_ui.py", line 155, in optimize_sdxl_from_ckpt
optimize(
File "C:\Users\markv\git\stable-diffusion-webui-directml\modules\sd_olive_ui.py", line 352, in optimize
with footprints_file_path.open("r") as footprint_file:
File "C:\Users\markv\.conda\envs\Automatic1111\lib\pathlib.py", line 1119, in open
return self._accessor.open(self, mode, buffering, encoding, errors,
FileNotFoundError: [Errno 2] No such file or directory: 'footprints\\unet_gpu-dml_footprints.json'
-
@lshqqytiger Brother, there is no PNG info in any of the generated images. And no textual inversion files (.pt) are being loaded from the Embeddings folder.
-
Today, I refactored the whole ONNX/Olive code. Please follow this instruction. (I updated the original discussion comment too.)
Instruction
※ If you want img2img, change "Diffusers pipeline" to "ONNX Stable Diffusion Img2Img".
Extra instruction for DirectML users.
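Under the hood, that img2img option presumably maps to diffusers' ONNX img2img pipeline; a minimal sketch under that assumption (model folder, provider, and prompt are illustrative):

```python
# Hedged sketch of diffusers' ONNX img2img pipeline; the model folder is
# whatever your Olive optimization produced, and the provider is illustrative.
from diffusers import OnnxStableDiffusionImg2ImgPipeline
from PIL import Image

pipe = OnnxStableDiffusionImg2ImgPipeline.from_pretrained(
    "ONNX-Olive/my-model",            # illustrative path
    provider="DmlExecutionProvider",  # or CPUExecutionProvider
)
init = Image.open("input.png").convert("RGB").resize((512, 512))
result = pipe(prompt="a photo of a cat", image=init, strength=0.6)
result.images[0].save("output.png")
```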
-
Just to be clear: I want absolutely NOTHING to do with ONNX/Olive, etc. We appreciate your efforts, though. I just want to run SD with DirectML like I was in previous versions. Do these special instructions for DirectML still apply?
-
Odd... OK, will try again after work. I tried last night and didn't capture the error, but keeping --use-directml is what originally resulted in me having the error here:
-
python launch.py --use-directml --skip-version-check --medvram
To create a public link, set share=True in launch().
How do I solve this problem? I am not very adept in Python and am also kinda new to GitHub in general.
-
Hello everyone, I'd like to use SDXL. I successfully used the provided 1.5 pruned model, but not SDXL. Are there clear steps anywhere on how to get models working with Olive? I can't figure it out... Many thanks.
-
I added Olive support in ab1ef3e.
Requirements
RAM (not VRAM): 32GB recommended.
VRAM: 8GB recommended. At least 6GB.
Features
--device-id
Instruction
1. Install olive-ai and requirements.
2. Enable "Use ONNX Runtime instead of PyTorch implementation".
3. Set "Execution Provider" to the proper one.
4. Enable "Enable Olive".
5. Select the "Olive models to process".
※ If you want img2img, change "Diffusers pipeline" to "ONNX Stable Diffusion Img2Img".
Extra instruction for DirectML users
1. Install onnxruntime-directml manually via running these commands below.
2. Launch with --use-cpu-torch.
3. Set "Execution Provider" to "DmlExecutionProvider".
About LoRA
You should merge LoRAs into the model before the optimization.
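A hedged sketch of one way to do that merge with diffusers (this is an assumed approach, not necessarily how the webui performs it; paths are illustrative):

```python
# Assumed example: bake a LoRA into base weights with diffusers (requires a
# recent diffusers with fuse_lora), then save the merged copy for Olive.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_lora_weights("path/to/lora.safetensors")  # illustrative path
pipe.fuse_lora()                       # fold LoRA deltas into the base weights
pipe.save_pretrained("merged-model")   # then optimize this folder with Olive
```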
Performance
512x512 on RX 5700 XT
Before: 1.5 it/s
After Olive: 2.03 it/s
512x512 on RX 7900 XTX
Before: 5 it/s
With ONNX: 7 it/s
Olive without static dims: 11 it/s
Olive with static dims: 22 it/s
FAQ
Q: I can't find my execution provider under the "Execution Provider" option.
A: Reinstall the onnxruntime and onnxruntime-... packages via running these commands below. For example, if you want DmlExecutionProvider, run:
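As an assumed example (not the author's exact commands, which were not captured above), a typical reinstall for the DirectML provider, plus a check that it took effect:

```python
# Assumed reinstall, run in the webui's Python environment:
#   pip uninstall -y onnxruntime onnxruntime-directml
#   pip install onnxruntime-directml
import onnxruntime as ort

print(ort.get_available_providers())  # expect "DmlExecutionProvider" listed
```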