OOM error, not enough VRAM to run Gradio demo #40

Open
clover1980 opened this issue Apr 13, 2025 · 1 comment
Comments

@clover1980
So, I've managed to install it (only the Wan version) on Windows 10 (it was harder there than on Linux) and on Linux separately, and in both cases the Gradio demo started but failed with a GPU out-of-memory (OOM) error (I have a 4070 Su with 16 GB). As I recall from the logs, it was short by something like 80 MB when running a prompt in Gradio (all the other VRAM was already allocated by PyTorch).
So, exactly how much VRAM is needed to try the Gradio demo?
Or is there at least a way to run it on CPU only, using system RAM?

@hanzhn
Collaborator

hanzhn commented Apr 14, 2025

Sorry, the Gradio demo is not optimized for low-VRAM environments.

However, the CLI script vace/vace_wan_inference.py inherits all the features from Wan2.1, so you can use all of its options, such as --offload_model True and --t5_cpu.
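For example, a low-VRAM invocation might look like the sketch below. Only the script path and the --offload_model / --t5_cpu flags come from this thread; the remaining arguments are placeholders, so check the Wan2.1 documentation for the exact option names your version accepts.

```shell
# Hypothetical low-VRAM run of the CLI script.
# --offload_model and --t5_cpu are the flags mentioned above;
# everything else is a placeholder for illustration only.
python vace/vace_wan_inference.py \
    --offload_model True \
    --t5_cpu \
    --prompt "your prompt here"
```

In Wan2.1, offloading the model and keeping the T5 text encoder on the CPU should lower peak VRAM usage at the cost of slower generation, which is the usual trade-off for fitting large video models on 16 GB cards.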
