🤗 Generate images with diffusion models:
diffused <model> <prompt>
pipx run diffused segmind/tiny-sd "red apple"
pipx run diffused OFA-Sys/small-stable-diffusion-v0 "cat wizard" --image=https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png
pipx run diffused kandinsky-community/kandinsky-2-2-decoder-inpaint "black cat" --image=https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png --mask-image=https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png
Install the CLI:
pipx install diffused
Required (str): The diffusion model.
diffused segmind/SSD-1B "An astronaut riding a green horse"
See segmind/SSD-1B.
Required (str): The text prompt.
diffused dreamlike-art/dreamlike-photoreal-2.0 "cinematic photo of Godzilla eating sushi with a cat in an izakaya, 35mm photograph, film, professional, 4k, highly detailed"
Optional (str): What to exclude from the output image.
diffused stabilityai/stable-diffusion-2 "photo of an apple" --negative-prompt="blurry, bright photo, red"
With the short option:
diffused stabilityai/stable-diffusion-2 "photo of an apple" -np="blurry, bright photo, red"
Optional (str): The input image path or URL. The initial image is used as a starting point for an image-to-image diffusion process.
diffused stabilityai/stable-diffusion-xl-refiner-1.0 "astronaut in a desert" --image=https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png
With the short option:
diffused stabilityai/stable-diffusion-xl-refiner-1.0 "astronaut in a desert" -i=./local/image.png
Optional (str): The mask image path or URL. Inpainting replaces or edits specific areas of an image: the white areas of the mask are repainted, while the black areas are preserved. Provide a mask image to inpaint an input image.
diffused kandinsky-community/kandinsky-2-2-decoder-inpaint "black cat" --image=https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png --mask-image=https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png
With the short option:
diffused kandinsky-community/kandinsky-2-2-decoder-inpaint "black cat" -i=inpaint.png -mi=inpaint_mask.png
Optional (str): The output image filename.
diffused dreamlike-art/dreamlike-photoreal-2.0 "cat eating sushi" --output=cat.jpg
With the short option:
diffused dreamlike-art/dreamlike-photoreal-2.0 "cat eating sushi" -o=cat.jpg
Optional (int): The output image width in pixels.
diffused stabilityai/stable-diffusion-xl-base-1.0 "dog in space" --width=1024
With the short option:
diffused stabilityai/stable-diffusion-xl-base-1.0 "dog in space" -W=1024
Optional (int): The output image height in pixels.
diffused stabilityai/stable-diffusion-xl-base-1.0 "dog in space" --height=1024
With the short option:
diffused stabilityai/stable-diffusion-xl-base-1.0 "dog in space" -H=1024
Optional (int): The number of output images. Defaults to 1.
diffused segmind/tiny-sd apple --number=2
With the short option:
diffused segmind/tiny-sd apple -n=2
Optional (float): How much the prompt influences the output image. A lower value leads to more deviation and creativity, whereas a higher value follows the prompt more closely.
diffused stable-diffusion-v1-5/stable-diffusion-v1-5 "astronaut in a jungle" --guidance-scale=7.5
With the short option:
diffused stable-diffusion-v1-5/stable-diffusion-v1-5 "astronaut in a jungle" -gs=7.5
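Under the hood, the guidance scale blends an unconditional and a prompt-conditioned noise prediction (classifier-free guidance, as used in diffusers pipelines). A minimal sketch of that blending rule, with an illustrative `cfg` helper name:

```python
def cfg(noise_uncond: float, noise_cond: float, guidance_scale: float) -> float:
    # Classifier-free guidance: push the prediction toward the conditioned
    # direction by guidance_scale. A scale of 1.0 reduces to the conditioned
    # prediction; larger values follow the prompt more strongly.
    return noise_uncond + guidance_scale * (noise_cond - noise_uncond)

# At scale 1.0 the result equals the prompt-conditioned prediction;
# at scale 7.5 it overshoots well past it.
print(cfg(0.2, 0.6, 1.0))
print(cfg(0.2, 0.6, 7.5))
```

In real pipelines the predictions are tensors, not scalars, but the formula is the same per element.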
Optional (int): The number of diffusion steps used during image generation. The more steps you use, the higher the quality, but the generation time will increase.
diffused CompVis/stable-diffusion-v1-4 "astronaut rides horse" --inference-steps=50
With the short option:
diffused CompVis/stable-diffusion-v1-4 "astronaut rides horse" -is=50
Optional (float): The noise added to the input image, which determines how much the output image deviates from the original image. Strength applies to image-to-image and inpainting tasks and acts as a multiplier on the number of denoising steps (--inference-steps).
diffused stabilityai/stable-diffusion-xl-refiner-1.0 "astronaut in swamp" --image=https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-sdxl-init.png --strength=0.5
With the short option:
diffused stabilityai/stable-diffusion-xl-refiner-1.0 "astronaut in swamp" -i=image.png -s=0.5
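A rough sketch of how strength and inference steps interact (based on how diffusers image-to-image pipelines schedule steps; the helper name is illustrative):

```python
def effective_steps(inference_steps: int, strength: float) -> int:
    # For image-to-image, only roughly strength * inference_steps denoising
    # steps actually run: a lower strength starts later in the schedule,
    # keeping more of the original image.
    return int(inference_steps * strength)

print(effective_steps(50, 0.5))   # half the schedule runs
print(effective_steps(50, 1.0))   # full denoising, input image mostly ignored
```

So --inference-steps=50 with --strength=0.5 behaves like about 25 denoising steps applied on top of the input image.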
Optional (int): The seed for generating random numbers, ensuring reproducibility in image generation pipelines.
diffused stable-diffusion-v1-5/stable-diffusion-v1-5 "Labrador in the style of Vermeer" --seed=0
With the short option:
diffused stable-diffusion-v1-5/stable-diffusion-v1-5 "Labrador in the style of Vermeer" -S=1337
Optional (str): The device to accelerate the computation (cpu, cuda, mps, xpu, xla, or meta).
diffused stable-diffusion-v1-5/stable-diffusion-v1-5 "astronaut on earth, 8k" --device=cuda
With the short option:
diffused stable-diffusion-v1-5/stable-diffusion-v1-5 "astronaut on earth, 8k" -d=cuda
Optional (bool): Whether to disable loading model weights in the safetensors format.
diffused runwayml/stable-diffusion-v1-5 "astronaut on mars" --no-safetensors
Show the program's version number and exit:
diffused --version # diffused -v
Show the help message and exit:
diffused --help # diffused -h
Create a virtual environment:
python3 -m venv .venv
Activate the virtual environment:
source .venv/bin/activate
Install the package:
pip install diffused
Generate an image with a model and a prompt:
# script.py
from diffused import generate
images = generate(model="segmind/tiny-sd", prompt="apple")
images[0].save("apple.png")
Run the script:
python script.py
Open the image:
open apple.png
See the API documentation.