So, I've managed to install it (WAN version only) on Windows 10 (it was quite a bit harder there than on Linux) and on Linux separately. In both cases the Gradio demo started, but it errored out with a GPU VRAM deficit (OOM; I have a 4070 Super with 16 GB). As I remember the logs, it was short by something like 80 MB when running a prompt in Gradio (all the other VRAM was already used by PyTorch).
So, how much VRAM exactly do you need to try the Gradio demo?
Or is there at least a way to run it CPU-only with system RAM?
Sorry, the Gradio demo is not optimized for low-VRAM environments.
However, the CLI script vace/vace_wan_inference.py inherits all the features from Wan2.1, so you can use all of its options, such as --offload_model True and --t5_cpu.
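A minimal invocation sketch combining the two memory-saving options mentioned above (the prompt text and any model/checkpoint paths are placeholders; check the script's --help for the full and authoritative option list):

```shell
# Offload model weights to system RAM between steps and run the T5
# text encoder on CPU to reduce peak GPU VRAM usage.
python vace/vace_wan_inference.py \
    --offload_model True \
    --t5_cpu \
    --prompt "your prompt here"
```

Both options trade speed for lower VRAM pressure, which may be enough to fit within 16 GB.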