| VACE-Wan2.1-1.3B-Preview |[Huggingface](https://huggingface.co/ali-vilab/VACE-Wan2.1-1.3B-Preview) 🤗 [ModelScope](https://modelscope.cn/models/iic/VACE-Wan2.1-1.3B-Preview) 🤖 | ~ 81 x 480 x 832 |[Apache-2.0](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B/blob/main/LICENSE.txt)|
| VACE-LTX-Video-0.9 |[Huggingface](https://huggingface.co/ali-vilab/VACE-LTX-Video-0.9) 🤗 [ModelScope](https://modelscope.cn/models/iic/VACE-LTX-Video-0.9) 🤖 | ~ 97 x 512 x 768 |[RAIL-M](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltx-video-2b-v0.9.license.txt)|
| Wan2.1-VACE-1.3B |[To be released](https://github.com/Wan-Video) <img src='https://ali-vilab.github.io/VACE-Page/assets/logos/wan_logo.png' alt='wan_logo' style='margin-bottom: -4px; height: 15px;'> | ~ 81 x 480 x 832 |[Apache-2.0](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B/blob/main/LICENSE.txt)|
| Wan2.1-VACE-14B |[To be released](https://github.com/Wan-Video) <img src='https://ali-vilab.github.io/VACE-Page/assets/logos/wan_logo.png' alt='wan_logo' style='margin-bottom: -4px; height: 15px;'> | ~ 81 x 720 x 1080 |[Apache-2.0](https://huggingface.co/Wan-AI/Wan2.1-T2V-14B/blob/main/LICENSE.txt)|
- The input supports any resolution, but for optimal results the video size should fall within the recommended ranges listed above.
- All models inherit the license of the original model.
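The recommended sizes from the table above can be expressed as a simple lookup. The sketch below is illustrative only and not part of the VACE codebase; the dictionary values are the frames x height x width figures from the table, and the function name is our own:

```python
# Recommended input sizes (frames, height, width) from the model table above.
# This helper is an illustrative sketch, not part of the VACE codebase.
RECOMMENDED_SIZES = {
    "VACE-Wan2.1-1.3B-Preview": (81, 480, 832),
    "VACE-LTX-Video-0.9": (97, 512, 768),
    "Wan2.1-VACE-1.3B": (81, 480, 832),
    "Wan2.1-VACE-14B": (81, 720, 1080),
}

def is_recommended_size(model, frames, height, width):
    """Return True if (frames, height, width) matches the model's
    recommended input size."""
    return RECOMMENDED_SIZES.get(model) == (frames, height, width)
```

Inputs of other sizes are still accepted; this check only flags when a clip deviates from the size each model was tuned for.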
#### 2) Preprocessing
To allow more flexible control over the input, user inputs must first be preprocessed into `src_video`, `src_mask`, and `src_ref_images` before VACE model inference.
We assign each [preprocessor](./vace/configs/__init__.py) a task name, so simply call [`vace_preprocess.py`](./vace/vace_preproccess.py) and specify the task name and task params. For example:
The outputs will be saved to `./processed/` by default.
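After preprocessing, the three inputs can be gathered for inference. The sketch below illustrates one way to do this; the file names inside `./processed/` are assumptions, since the actual output names depend on the task:

```python
from pathlib import Path

def collect_preprocessed(out_dir="./processed/"):
    """Gather preprocessor outputs into the inputs VACE inference expects.
    The file names used here are assumptions; actual names depend on the
    preprocessing task that was run."""
    out = Path(out_dir)
    return {
        "src_video": out / "src_video.mp4",
        "src_mask": out / "src_mask.mp4",
        "src_ref_images": sorted(out.glob("src_ref_image_*.png")),
    }
```

The returned paths can then be passed to the inference script as its `src_video`, `src_mask`, and `src_ref_images` arguments.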
> 💡**Note**:
> Please refer to [run_vace_pipeline.sh](./run_vace_pipeline.sh) for preprocessing methods for different tasks.
Moreover, refer to [vace/configs/](./vace/configs/) for all the pre-defined tasks and required params.
You can also customize preprocessors by implementing them in [`annotators`](./vace/annotators/__init__.py) and registering them in [`configs`](./vace/configs).
The output video, together with the intermediate video, mask, and images, will be saved into `./results/` by default.
> 💡**Note**:
> (1) Please refer to [vace/vace_wan_inference.py](./vace/vace_wan_inference.py) and [vace/vace_ltx_inference.py](./vace/vace_ltx_inference.py) for the inference args.
> (2) For LTX-Video and English-language Wan2.1 users, prompt extension is required to unlock the full model performance.
Please follow the [instruction of Wan2.1](https://github.com/Wan-Video/Wan2.1?tab=readme-ov-file#2-using-prompt-extension) and set `--use_prompt_extend` while running inference.
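As a rough sketch of how the flag fits into an invocation, the helper below assembles an inference command list. Only the script path and `--use_prompt_extend` come from this README; the `--prompt` argument name and the function itself are illustrative placeholders:

```python
# Sketch: assemble a Wan2.1 VACE inference command with prompt extension
# enabled. Only the script path and --use_prompt_extend are documented in
# this README; the other argument names are illustrative placeholders.
def build_inference_cmd(prompt, use_prompt_extend=True):
    cmd = ["python", "vace/vace_wan_inference.py", "--prompt", prompt]
    if use_prompt_extend:
        cmd.append("--use_prompt_extend")
    return cmd
```

The resulting list could then be run with `subprocess.run(cmd)` from a driver script, or typed directly on the command line.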
> (3) When applying prompt extension in editing tasks, pay close attention to the results of expanding plain text. Because the extender cannot see the visual input, the extended prompt may not match the video being edited, which can degrade the final outcome.
### Inference Gradio
For preprocessors, run
author = {Jiang, Zeyinzi and Han, Zhen and Mao, Chaojie and Zhang, Jingfeng and Pan, Yulin and Liu, Yu},