New Linux Installation Method
To help ease the difficulties of downloading, installing, and updating enfugue, a new installation and execution method has been developed: enfugue.sh, a one-and-done shell script that will prompt you for any options you need to set.
You will be prompted when a new version of enfugue is available, and it will be automatically downloaded for you. Execute enfugue.sh -h to see command-line options. Open the file with a text editor to view configuration options and additional instructions.
New Features
1. LCM - Latent Consistency Models
An image and animation made with LCM, taking 1 and 14 seconds to generate respectively.
Latent Consistency Models are a method for performing inference in only a small handful of steps, with minimal reduction in quality.
To use LCM in Enfugue, take the following steps:
In More Model Configuration, add the appropriate LoRA for your currently selected checkpoint. This is recommended to be set at exactly 1.0 weight.
Change your scheduler to LCM Scheduler.
Reduce your guidance scale to between 1.1 and 1.4 - 1.2 is a good start.
Reduce your inference steps to between 3 and 8 - 4 is a good start.
Disable tiled diffusion and tiled VAE; tiling performs poorly with the LCM scheduler.
If you're using animation, disable frame attention slicing, or switch to a different scheduler like Euler Discrete - you can use other schedulers with LCM, too!
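The recipe above can be summarized as a small settings sketch. The key names here are illustrative stand-ins, not Enfugue's actual configuration keys:

```python
# Illustrative summary of the recommended LCM starting values above.
# These key names are hypothetical, not Enfugue's internal settings.
lcm_recipe = {
    "lcm_lora_weight": 1.0,        # LoRA matched to your checkpoint, at exactly 1.0
    "scheduler": "LCM Scheduler",
    "guidance_scale": 1.2,         # recommended range: 1.1 to 1.4
    "num_inference_steps": 4,      # recommended range: 3 to 8
    "tiled_diffusion": False,      # tiling performs poorly with LCM
    "tiled_vae": False,
    "frame_attention_slicing": False,  # disable for animation (or switch scheduler)
}

# Sanity-check the recommended ranges
assert 1.1 <= lcm_recipe["guidance_scale"] <= 1.4
assert 3 <= lcm_recipe["num_inference_steps"] <= 8
```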
You may find LCM does not do well with fine structures like faces and hands. To help address this, you can either upscale as I have here, or use the next new feature.
2. Detailer
Left to right: base image, with face fix, with face fix and inpaint.
Enfugue now has a version of Automatic1111's ADetailer (After Detailer). This allows you to configure a detailing pass after each image generation that can:
Use face restoration to make large modifications to faces to make them appear more natural.
In addition to (or instead of) the above, you can automatically perform an inpainting pass over faces on the image. This will give Stable Diffusion a chance to add detail back to faces and make them blend in better with the rest of the image style. This is best used in conjunction with the above.
In addition to the above, you can also identify and inpaint hands. This can fix human hands that are broken or inaccurate.
Finally, you can perform a final denoising pass over the whole image. This can help make the final fixed image more coherent.
This works very well when combined with LCM, which can perform the inpainting and final denoising passes in a single step, offsetting the difficulty that LCM sometimes has with these subjects.
3. Themes
The included themes.
Enfugue now has themes. These are always available from the menu.
Select from the original enfugue theme, five different colored themes, two monochrome themes, and the ability to set your own custom theme.
4. Opacity Slider, Simpler Visibility Options
Stacking two denoised images on top of one another, and the resulting animation.
An opacity slider has been added to the layer options menu. When used, this will make the image or video partially transparent in the UI. In addition, if the image is in the visible input layer, it will be made transparent when merged there, as well.
To make it clearer which images are and are not visible to Stable Diffusion, the "Denoising" image role has been replaced with a "Visibility" dropdown. This has three options:
Invisible - The image is not visible to Stable Diffusion. It may still be used for IP Adapter and/or ControlNet.
Visible - The image is visible to Stable Diffusion. The alpha channel of the image is not added to the painting mask.
Denoised - The image is visible to Stable Diffusion. The alpha channel of the image is added to the painting mask.
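These modes can be sketched as mask logic. This is one plausible reading of the semantics, assumed for illustration only; it is not Enfugue's actual implementation:

```python
# Hypothetical sketch of the three visibility modes. Assumes per-pixel
# values 0-255, where mask 255 means "denoise this pixel" and alpha 0
# means fully transparent. Not Enfugue's actual code.
def merge_mask(mode, alpha, mask):
    if mode == "invisible":
        return mask  # image hidden from Stable Diffusion; mask unchanged
    if mode == "visible":
        return mask  # image shown, but its alpha does not affect the mask
    if mode == "denoised":
        # Transparent pixels join the painting mask, so the model
        # regenerates them (inpainting/outpainting).
        return [max(m, 255 - a) for m, a in zip(mask, alpha)]
    raise ValueError("unknown mode: %s" % mode)

# A half-transparent pixel contributes to the mask only in "denoised" mode
assert merge_mask("denoised", [128], [0]) == [127]
assert merge_mask("visible", [128], [0]) == [0]
```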
To help illustrate these options and how inpainting/outpainting work, consider the following examples.
5. Generic Model Downloader
The Download Model UI.
To help bridge the gap when it comes to external service integrations, there is now a generic "Download Models" menu in Enfugue. This will allow you to enter a URL to a model hosted anywhere on the internet, and have Enfugue download it to the right location for that model type.
6. Model Metadata Viewer
The metadata viewer showing a result from CivitAI.
When using any field that allows selecting from different AI models, there is now a magnifying glass icon. When clicked, this will present you with a window containing the CivitAI metadata for that model.
This does not require the metadata to be saved prior to viewing. If the model does not exist in CivitAI's database, no metadata will be available.
7. More Scheduler Configuration
The more scheduler configuration UI.
Next to the scheduler selector is a small gear icon. When clicked, this will present you with a window allowing for advanced scheduler configuration.
These values should not need to be tweaked in general. However, some new animation modules are trained using different values for these configurations, so they have been exposed to allow using these models effectively in Enfugue.
How-To Guide
If you're on Linux, it's recommended to use the new automated installer. See the top of this document for those instructions. For Windows users or anyone not using the automated installer, read below.
First decide how you'd like to install, either a portable distribution, or through conda.
Conda will install all enfugue dependencies in an isolated environment. This is the recommended installation method, as it ensures the highest compatibility with your hardware and makes for easy, fast updates.
A portable distribution comes with all dependencies in one directory, with an executable binary.
Installing and Running: Portable Distributions
enfugue-server-0.3.1-win-cuda-x86_64.zip.002
enfugue-server-0.3.1-manylinux-cuda-x86_64.tar.gz.1
enfugue-server-0.3.1-manylinux-cuda-x86_64.tar.gz.2
Linux
Download the three files above that make up the entire archive, then extract them. To extract these files, you must concatenate them. Rather than taking up space in your file system, you can simply stream them together to tar. A console command to do that is:
cat enfugue-server-0.3.1* | tar -xvz
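This works because a compressed stream split at arbitrary byte offsets is restored exactly by concatenating the parts in order, which is all cat does. A small, self-contained demonstration of that property:

```python
# Demonstrate that concatenating the split pieces of a compressed
# stream restores it byte-for-byte, which is why `cat parts | tar -xvz`
# works on the multi-part release archives.
import gzip

original = b"example archive contents " * 20
stream = gzip.compress(original)

# Split the stream at an arbitrary offset, like .tar.gz.1 and .tar.gz.2
parts = [stream[:40], stream[40:]]

# Joining the parts in order restores the original stream exactly
rejoined = b"".join(parts)
assert rejoined == stream
assert gzip.decompress(rejoined) == original
```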
You are now ready to run the server with:
./enfugue-server/enfugue.sh
Press Ctrl+C to exit.
Windows
Download the win64 files here, and extract them using a program which allows extracting from multiple archives such as 7-Zip.
If you are using 7-Zip, you should not extract both files independently. If they are in the same directory when you unzip the first, 7-Zip will automatically unzip the second. The second file cannot be extracted on its own.
Locate the file enfugue-server.exe and double-click it to run it. To exit, locate the icon in the bottom-right corner of your screen (the system tray), right-click it, then select Quit.
Installing and Running: Conda
To install with the provided Conda environments, you need to install a version of Conda.
After installing Conda and configuring it so it is available to your shell or command-line, download one of the environment files depending on your platform and graphics API.
First, choose windows-, linux- or macos- based on your platform.
Then, choose your graphics API:
If you are on MacOS, you only have access to MPS.
If you have an Nvidia GPU or other CUDA-compatible device, select cuda.
Additional graphics APIs (rocm and directml) are being added and will be made available as they are developed. Please voice your interest in these to help prioritize their development.
Finally, using the file you downloaded, create your Conda environment:
conda env create -f <downloaded_file.yml>
You've now installed Enfugue and all dependencies. To run it, activate the environment, then start Enfugue:
conda activate enfugue
python -m enfugue run
Optional: DWPose Support
To install DW Pose support (a better, faster pose and face detection model), after installing Enfugue, execute the following (MacOS, Linux or Windows):
Optional: GPU-Accelerated Interpolation
To install dependencies for GPU-accelerated frame interpolation, execute the following command (Linux, Windows):
Installing and Running: Self-Managed Environment
If you would like to manage dependencies yourself, or want to install Enfugue into an environment shared with another Stable Diffusion UI, you can install enfugue via pip. This is the only method available for AMD GPUs at present.
pip install enfugue
If you are on Linux and want TensorRT support, execute:
pip install enfugue[tensorrt]
If you are on Windows and want TensorRT support, follow the steps detailed here.
Full Changelog: 0.3.0...0.3.1
Thank you!
This discussion was created from the release ENFUGUE Web UI v0.3.1.