distill-sd

Experiments with inference of a pretrained latent diffusion model (LDM) and an attempt to distill its knowledge into a smaller network.

This is an initial, rough implementation with working text-to-image and image-to-image inference. It has a lower VRAM requirement than the original repo. Optionally, parts of the model can be processed on the CPU and the rest on the GPU to further reduce VRAM usage (not properly tested yet). A negative prompt option has been added, but it is not yet verified to work.
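As a rough illustration of the CPU/GPU split described above (not the repo's actual code; the module names text_encoder, unet, and decoder are placeholders), one way to offload is to keep the denoising UNet on the GPU and everything else on the CPU:

```python
import torch

def run_split_inference(text_encoder, unet, decoder, prompt_tokens, latents, timesteps):
    """Sketch of partial CPU offloading: UNet on GPU, everything else on CPU."""
    gpu, cpu = torch.device("cuda"), torch.device("cpu")

    # The text encoder runs once per prompt and is cheap, so keep it on the CPU.
    cond = text_encoder.to(cpu)(prompt_tokens.to(cpu))

    # The denoising UNet dominates compute and VRAM, so run it on the GPU.
    unet.to(gpu)
    latents, cond = latents.to(gpu), cond.to(gpu)
    for t in timesteps:
        with torch.no_grad():
            noise_pred = unet(latents, t, cond)
        latents = latents - noise_pred  # placeholder for the real sampler update

    # Move the UNet back and decode on the CPU to release GPU memory early.
    unet.to(cpu)
    torch.cuda.empty_cache()
    return decoder.to(cpu)(latents.to(cpu))
```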

Training code has been removed; training code from this and this repo should work.

Install

pip install -e .

Flags (not implemented)

--tomesd enables tomesd (token merging for Stable Diffusion) to reduce GPU memory consumption (see the sketch after this list).

--tome_ratio ranges from 0 to 1, with a default of 0.5.
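Until the flags are wired in, tomesd can be applied manually to a loaded model. A minimal sketch, assuming model is the Stable Diffusion model object already loaded for inference:

```python
import tomesd

# Patch the model's attention blocks with token merging. The ratio controls how
# aggressively tokens are merged: 0 means no merging, higher values save more
# GPU memory at some cost to image detail.
tomesd.apply_patch(model, ratio=0.5)

# ... run text-to-image / image-to-image inference as usual ...

# Remove the patch to restore the original attention behavior.
tomesd.remove_patch(model)
```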

Demo

Text to Image

"a car in shape of carrot flying through solar system, high quality, 4k"

Image to Image

Also guided by the positive text prompt and, if provided, a negative text prompt.

"a car in shape of carrot flying through solar system, high quality, 4k"
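The negative prompt enters through classifier-free guidance: the noise prediction conditioned on the negative prompt replaces the usual unconditional (empty-prompt) prediction, so the sampler is pushed toward the positive prompt and away from the negative one. A minimal sketch (function and argument names are illustrative, not the repo's API):

```python
import torch

def guided_noise(unet, latents, t, cond_pos, cond_neg, guidance_scale=7.5):
    # cond_pos / cond_neg are the text embeddings of the positive and negative
    # prompts; cond_neg stands in for the usual empty-prompt embedding.
    with torch.no_grad():
        noise_pos = unet(latents, t, cond_pos)
        noise_neg = unet(latents, t, cond_neg)
    # Steer toward the positive prompt and away from the negative one.
    return noise_neg + guidance_scale * (noise_pos - noise_neg)
```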

Conditional Inpainting

"an astronaut floating in space, high quality, 4k"

"moon planet in background, high quality, 4k"

Pretrained Models

Tested on the SD 1.4, SD 1.5, and SD 1.5 inpainting models.

References