OmniGen2

OmniGen2 is a powerful and efficient unified multimodal model. Its architecture is composed of two key components: a 3B Vision-Language Model (VLM) and a 4B diffusion model. In this design, the frozen 3B VLM (Qwen2.5-VL) is responsible for interpreting both visual signals and user instructions, while the 4B diffusion model leverages this understanding to perform high-quality image generation.

This dual-component architecture enables strong performance across four primary capabilities:

  • Visual Understanding: Inherits the robust ability to interpret and analyze image content from its Qwen2.5-VL foundation.
  • Text-to-Image Generation: Creates high-fidelity and aesthetically pleasing images from textual prompts.
  • Instruction-guided Image Editing: Executes complex, instruction-based image modifications with high precision, achieving state-of-the-art performance among open-source models.
  • In-context Generation: A versatile capability to process and flexibly combine diverse inputs—including tasks, reference objects, and scenes—to produce novel and coherent visual outputs.

As an open-source project, OmniGen2 provides a powerful yet resource-efficient foundation for researchers and developers exploring the frontiers of controllable and personalized generative AI.

We will release the training code, dataset, and data construction pipeline soon. Stay tuned!


Demonstration of OmniGen2's overall capabilities.


Demonstration of OmniGen2's image editing capabilities.


Demonstration of OmniGen2's in-context generation capabilities.

🔥 News

  • 2025-06-16: Gradio and Jupyter demos are available.
  • 2025-06-16: We release OmniGen2, a unified multimodal generation model; the model weights are available on Hugging Face.

📌 TODO

  • Technical report.
  • In-context generation benchmark: OmniContext.
  • Support CPU offload and improve inference efficiency.
  • Training data and scripts.
  • Data construction pipeline.
  • ComfyUI Demo (community support would be greatly appreciated!).

🚀 Quick Start

🛠️ Environment Setup

✅ Recommended Setup

# 1. Clone the repo
git clone git@github.com:VectorSpaceLab/OmniGen2.git
cd OmniGen2

# 2. (Optional) Create a clean Python environment
conda create -n omnigen2 python=3.11
conda activate omnigen2

# 3. Install dependencies
# 3.1 Install PyTorch (choose correct CUDA version)
pip install torch==2.6.0 torchvision --extra-index-url https://download.pytorch.org/whl/cu124

# 3.2 Install other required packages
pip install -r requirements.txt
pip install flash-attn --no-build-isolation

🌏 For users in Mainland China

# Install PyTorch from a domestic mirror
pip install torch==2.6.0 torchvision --index-url https://mirror.sjtu.edu.cn/pytorch-wheels/cu124

# Install other dependencies from Tsinghua mirror
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install flash-attn --no-build-isolation -i https://pypi.tuna.tsinghua.edu.cn/simple
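
After installation, a quick sanity check (not part of the repository; just a convenience sketch) can confirm that PyTorch sees your GPU and that flash-attn built correctly:

# check_env.py -- optional environment sanity check (hypothetical helper script)
import torch

print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available:  {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"GPU device:      {torch.cuda.get_device_name(0)}")

try:
    import flash_attn  # noqa: F401
    print("flash-attn import: OK")
except ImportError:
    print("flash-attn import: FAILED -- re-run `pip install flash-attn --no-build-isolation`")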

🧪 Run Examples

# Visual Understanding
bash example_understanding.sh

# Text-to-image generation
bash example_t2i.sh

# Instruction-guided image editing
bash example_edit.sh

# Subject-driven image editing
bash example_subject_driven_edit.sh
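
These scripts wrap a Python inference pipeline. For orientation, below is a minimal sketch of what a programmatic text-to-image call might look like; the OmniGen2Pipeline import path, the "OmniGen2/OmniGen2" checkpoint ID, and the argument names are assumptions for illustration, so please refer to the example scripts and app.py for the actual interface.

# Hypothetical programmatic inference sketch -- names are assumed, not the confirmed API.
import torch
from omnigen2.pipelines.omnigen2.pipeline_omnigen2 import OmniGen2Pipeline  # assumed import path

pipe = OmniGen2Pipeline.from_pretrained(
    "OmniGen2/OmniGen2",          # assumed Hugging Face model ID
    torch_dtype=torch.bfloat16,
)
pipe = pipe.to("cuda")

result = pipe(
    prompt="A corgi wearing a tiny wizard hat, studio lighting",
    num_inference_step=50,        # see Usage Tips below
    text_guidance_scale=6.0,
)
result.images[0].save("output.png")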

🌐 Gradio Demo

  • Run Locally:
    pip install gradio
    python app.py
    # Optional: share the demo with a public link (requires access to Hugging Face)
    python app.py --share

💡 Usage Tips

To achieve optimal results with OmniGen2, you can adjust the following key hyperparameters based on your specific use case; a usage sketch follows the list below.

  • num_inference_step: The number of sampling steps per generation. Higher values generally improve quality but increase generation time.
    • Recommended Range: 28 to 50
  • text_guidance_scale: Controls how strictly the output adheres to the text prompt (Classifier-Free Guidance).
    • For Text-to-Image: Use a higher value (e.g., 6-7) for simple or less detailed prompts. Use a lower value (e.g., 4) for complex and highly detailed prompts.
    • For Editing/Composition: A moderate value around 4-5 is recommended.
  • image_guidance_scale: This controls how much the final image should resemble the input reference image.
    • The Trade-off: A higher value (~2.0) makes the output more faithful to the reference image's structure and style, but it might ignore parts of your text prompt. A lower value (~1.5) gives the text prompt more influence.
    • Tip: Start with 1.5 and increase it if you need more consistency with the reference image. For image editing tasks, we recommend setting it between 1.3 and 2.0; for in-context generation tasks, a higher image_guidance_scale preserves more detail from the input images, and we recommend setting it between 2.5 and 3.0.
  • max_pixels: Automatically resizes images when their total pixel count (width × height) exceeds this limit, while maintaining their aspect ratio. This helps manage performance and memory usage.
  • max_input_image_side_length: Maximum side length for input images.
  • negative_prompt: Tells the model what you don't want to see in the image.
    • Example: blurry, low quality, text, watermark
    • Tip: For the best results, try experimenting with different negative prompts. If you're not sure, just leave it blank.
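
As a reference for how these knobs fit together, the sketch below shows one plausible way to pass them to an instruction-guided editing call. It reuses the hypothetical pipe object from the Quick Start sketch, and the parameter names (input_images, max_pixels, etc.) are assumptions that may differ from the actual scripts.

# Hypothetical editing call illustrating the tuning knobs above -- parameter names are assumed.
from PIL import Image

reference = Image.open("input.jpg")

edited = pipe(
    prompt="Change the jacket to bright red",
    input_images=[reference],                   # assumed name for the reference-image argument
    num_inference_step=50,                      # 28-50 recommended
    text_guidance_scale=5.0,                    # ~4-5 for editing/composition
    image_guidance_scale=1.5,                   # 1.3-2.0 for editing; 2.5-3.0 for in-context generation
    negative_prompt="blurry, low quality, text, watermark",
    max_pixels=1024 * 1024,                     # resize when width x height exceeds this
)
edited.images[0].save("edited.png")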

❤️ Citing Us

If you find this repository or our work useful, please consider giving it a star ⭐ and a citation 🦖, which would be greatly appreciated. The OmniGen2 technical report will be available as soon as possible; until then, please cite the original OmniGen paper:

@article{xiao2024omnigen,
  title={Omnigen: Unified image generation},
  author={Xiao, Shitao and Wang, Yueze and Zhou, Junjie and Yuan, Huaying and Xing, Xingrun and Yan, Ruiran and Wang, Shuting and Huang, Tiejun and Liu, Zheng},
  journal={arXiv preprint arXiv:2409.11340},
  year={2024}
}

License

This work is licensed under the Apache 2.0 license.
