[bug]: Unable to Load FLUX Models in InvokeAI #7737

Open
greyrabbit2003 opened this issue Mar 5, 2025 · 1 comment

Labels
bug Something isn't working

Comments

@greyrabbit2003

Is there an existing issue for this problem?

  • I have searched the existing issues

Operating system

Linux

GPU vendor

Nvidia (CUDA)

GPU model

RTX 3060

GPU VRAM

12 GB

Version number

v5.7.1

Browser

Firefox

Python dependencies

I installed InvokeAI through Pinokio Browser. SD and SDXL models work fine, but FLUX generations fail: denoising completes, then the VAE decode step errors out referencing stable-diffusion/v1-inference.yaml.

Steps to Reproduce:

  1. Installed InvokeAI via Pinokio Browser.
  2. Loaded SD and SDXL models successfully.
  3. Tried loading FLUX models, but they fail.
  4. Error appears in logs (attached below).

```text
<<PINOKIO_SHELL>>eval "$(conda shell.bash hook)" ; conda deactivate ; conda deactivate ; conda deactivate ; conda activate base ; source /home/ubuntu/pinokio/api/invoke.git/app/env/bin/activate /home/ubuntu/pinokio/api/invoke.git/app/env ; invokeai-web

patchmatch.patch_match: INFO - Compiling and loading c extensions from "/home/ubuntu/pinokio/api/invoke.git/app/env/lib/python3.10/site-packages/patchmatch".
patchmatch.patch_match: ERROR - patchmatch failed to load or compile (Command 'make clean && make' returned non-zero exit status 2.).
patchmatch.patch_match: INFO - Refer to https://invoke-ai.github.io/InvokeAI/installation/060_INSTALL_PATCHMATCH/ for installation instructions.
[2025-03-05 20:07:02,024]::[InvokeAI]::INFO --> Patchmatch not loaded (nonfatal)
[2025-03-05 20:07:02,696]::[InvokeAI]::INFO --> Using torch device: NVIDIA GeForce RTX 3060
[2025-03-05 20:07:03,013]::[InvokeAI]::INFO --> cuDNN version: 90100
[2025-03-05 20:07:03,022]::[InvokeAI]::INFO --> InvokeAI version 5.7.1
[2025-03-05 20:07:03,022]::[InvokeAI]::INFO --> Root directory = /home/ubuntu/pinokio/api/invoke.git/app
[2025-03-05 20:07:03,023]::[InvokeAI]::INFO --> Initializing database at /home/ubuntu/pinokio/drive/drives/peers/d1741151917640/databases/invokeai.db
[2025-03-05 20:07:03,125]::[ModelManagerService]::INFO --> [MODEL CACHE] Calculated model RAM cache size: 8849.00 MB. Heuristics applied: [1, 2].
[2025-03-05 20:07:03,144]::[ModelInstallService]::WARNING --> Missing model file: Shakker-Labs-FLUX.1-dev-ControlNet-Union-Pro at /home/ubuntu/pinokio/drive/drives/peers/d1739623879987/controlnet/Shakker-Labs-FLUX.1-dev-ControlNet-Union-Pro.safetensors
[2025-03-05 20:07:03,144]::[ModelInstallService]::WARNING --> Missing model file: hunyuan_720_vae_t2v_fp8 at /home/ubuntu/pinokio/drive/drives/peers/d1739623879987/vae/hunyuan_720_vae_t2v_fp8.pt
[2025-03-05 20:07:03,144]::[ModelInstallService]::WARNING --> Missing model file: hunyuan_video_vae_bf16 at /home/ubuntu/pinokio/drive/drives/peers/d1739623879987/vae/hunyuan_video_vae_bf16.safetensors
[2025-03-05 20:07:03,144]::[ModelInstallService]::WARNING --> Missing model file: vae-THUDM-CogVideoX-2b at /home/ubuntu/pinokio/drive/drives/peers/d1739623879987/vae/vae-THUDM-CogVideoX-2b.safetensors
[2025-03-05 20:07:03,144]::[ModelInstallService]::WARNING --> Missing model file: vae_Lightricks-LTX-Video at /home/ubuntu/pinokio/drive/drives/peers/d1739623879987/vae/vae_Lightricks-LTX-Video.safetensors
[2025-03-05 20:07:03,282]::[InvokeAI]::INFO --> Invoke running on http://127.0.0.1:9090 (Press CTRL+C to quit)
[2025-03-05 20:07:48,878]::[InvokeAI]::INFO --> Deleted model: 8bb5e3d3-cb56-4d36-8262-bddff6f6fba6
[2025-03-05 20:07:51,749]::[InvokeAI]::INFO --> Deleted model: dfc76c0f-ba2c-4f10-bd3d-4052e8066e91
[2025-03-05 20:07:54,482]::[InvokeAI]::INFO --> Deleted model: 109ac880-3fed-4587-a3f8-6b0c9a1fbf79
[2025-03-05 20:08:05,773]::[InvokeAI]::INFO --> Deleted model: 67ef1e87-2e2d-4ba2-b993-b509c8e225f1
[2025-03-05 20:08:15,377]::[InvokeAI]::INFO --> Updated model: 3376cdc0-e63e-401e-9cf0-696a857589d4
[2025-03-05 20:08:27,199]::[InvokeAI]::INFO --> Updated model: 4485a72c-9c71-4af0-8156-8842cc84408a
[2025-03-05 20:08:54,745]::[InvokeAI]::INFO --> Deleted model: 6be625a1-039f-4818-9d71-a120d00aee49
[2025-03-05 20:09:16,417]::[InvokeAI]::INFO --> Started installation of /home/ubuntu/pinokio/drive/drives/peers/d1739623879987/controlnet/FLUX/Shakker-Labs-FLUX.1-dev-ControlNet-Union-Pro.safetensors
[2025-03-05 20:09:16,417]::[ModelInstallService]::INFO --> Model install started: /home/ubuntu/pinokio/drive/drives/peers/d1739623879987/controlnet/FLUX/Shakker-Labs-FLUX.1-dev-ControlNet-Union-Pro.safetensors
Hashing Shakker-Labs-FLUX.1-dev-ControlNet-Union-Pro.safetensors: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:06<00:00, 6.56s/file]
[2025-03-05 20:09:23,251]::[ModelInstallService]::INFO --> Model install complete: /home/ubuntu/pinokio/drive/drives/peers/d1739623879987/controlnet/FLUX/Shakker-Labs-FLUX.1-dev-ControlNet-Union-Pro.safetensors
[2025-03-05 20:09:58,381]::[InvokeAI]::INFO --> Executing queue item 225, session c1746ee2-9901-4cf0-9786-946e5fa50d09
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.97it/s]
[2025-03-05 20:10:14,123]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '57198c6b-794c-42ce-aa90-bf32f081ff17:text_encoder_2' (T5EncoderModel) onto cuda device in 14.77s. Total model size: 9083.39MB, VRAM: 9083.39MB (100.0%)
[2025-03-05 20:10:14,235]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '57198c6b-794c-42ce-aa90-bf32f081ff17:tokenizer_2' (T5Tokenizer) onto cuda device in 0.00s. Total model size: 0.03MB, VRAM: 0.00MB (0.0%)
[2025-03-05 20:10:15,282]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'c1b60061-0709-4367-ba7a-03a88ec2f770:text_encoder' (CLIPTextModel) onto cuda device in 0.07s. Total model size: 469.44MB, VRAM: 469.44MB (100.0%)
[2025-03-05 20:10:15,341]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'c1b60061-0709-4367-ba7a-03a88ec2f770:tokenizer' (CLIPTokenizer) onto cuda device in 0.00s. Total model size: 0.00MB, VRAM: 0.00MB (0.0%)
[2025-03-05 20:10:16,479]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '600dcc16-38ba-4a3b-87b7-7054c6aac570:transformer' (Flux) onto cuda device in 0.95s. Total model size: 5679.45MB, VRAM: 5679.45MB (100.0%)
33%|█████████████████████████████████████████████████████████████▎ | 10/30 [00:39<01:19, 3.99s/it]
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 30/30 [01:59<00:00, 3.99s/it]
[2025-03-05 20:12:16,556]::[InvokeAI]::ERROR --> Error while invoking session c1746ee2-9901-4cf0-9786-946e5fa50d09, invocation d811d337-44c7-4afd-8fea-84a3027df098 (flux_vae_decode): 'stable-diffusion/v1-inference.yaml'
[2025-03-05 20:12:16,556]::[InvokeAI]::ERROR --> Traceback (most recent call last):
File "/home/ubuntu/pinokio/api/invoke.git/app/env/lib/python3.10/site-packages/invokeai/app/services/session_processor/session_processor_default.py", line 129, in run_node
output = invocation.invoke_internal(context=context, services=self._services)
File "/home/ubuntu/pinokio/api/invoke.git/app/env/lib/python3.10/site-packages/invokeai/app/invocations/baseinvocation.py", line 303, in invoke_internal
output = self.invoke(context)
File "/home/ubuntu/pinokio/api/invoke.git/app/env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/ubuntu/pinokio/api/invoke.git/app/env/lib/python3.10/site-packages/invokeai/app/invocations/flux_vae_decode.py", line 72, in invoke
vae_info = context.models.load(self.vae.vae)
File "/home/ubuntu/pinokio/api/invoke.git/app/env/lib/python3.10/site-packages/invokeai/app/services/shared/invocation_context.py", line 397, in load
return self._services.model_manager.load.load_model(model, submodel_type)
File "/home/ubuntu/pinokio/api/invoke.git/app/env/lib/python3.10/site-packages/invokeai/app/services/model_load/model_load_default.py", line 70, in load_model
).load_model(model_config, submodel_type)
File "/home/ubuntu/pinokio/api/invoke.git/app/env/lib/python3.10/site-packages/invokeai/backend/model_manager/load/load_default.py", line 58, in load_model
cache_record = self._load_and_cache(model_config, submodel_type)
File "/home/ubuntu/pinokio/api/invoke.git/app/env/lib/python3.10/site-packages/invokeai/backend/model_manager/load/load_default.py", line 79, in _load_and_cache
loaded_model = self._load_model(config, submodel_type)
File "/home/ubuntu/pinokio/api/invoke.git/app/env/lib/python3.10/site-packages/invokeai/backend/model_manager/load/model_loaders/flux.py", line 84, in _load_model
model = AutoEncoder(ae_params[config.config_path])
KeyError: 'stable-diffusion/v1-inference.yaml'

[2025-03-05 20:12:16,606]::[InvokeAI]::INFO --> Graph stats: c1746ee2-9901-4cf0-9786-946e5fa50d09
Node                         Calls  Seconds   VRAM Used
flux_model_loader            1      0.013s    0.000G
lora_selector                1      0.002s    0.000G
collect                      2      0.002s    0.472G
flux_lora_collection_loader  1      0.001s    0.000G
flux_text_encoder            1      16.994s   9.118G
flux_denoise                 1      121.123s  7.427G
flux_vae_decode              1      0.022s    6.844G
TOTAL GRAPH EXECUTION TIME: 138.157s
TOTAL GRAPH WALL TIME: 138.161s
RAM used by InvokeAI process: 8.31G (+7.398G)
RAM used to load models: 14.89G
VRAM in use: 6.844G
RAM cache statistics:
Model cache hits: 8
Model cache misses: 7
Models cached: 5
Models cleared from cache: 1
Cache high water mark: 8.89/0.00G

[2025-03-05 20:12:27,772]::[InvokeAI]::INFO --> Executing queue item 226, session f4820122-acc7-4e19-94ee-a395020de121
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 5.68it/s]
[2025-03-05 20:12:30,154]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '57198c6b-794c-42ce-aa90-bf32f081ff17:text_encoder_2' (T5EncoderModel) onto cuda device in 1.22s. Total model size: 9083.39MB, VRAM: 9083.39MB (100.0%)
[2025-03-05 20:12:30,251]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '57198c6b-794c-42ce-aa90-bf32f081ff17:tokenizer_2' (T5Tokenizer) onto cuda device in 0.00s. Total model size: 0.03MB, VRAM: 0.00MB (0.0%)
[2025-03-05 20:12:33,006]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'c1b60061-0709-4367-ba7a-03a88ec2f770:text_encoder' (CLIPTextModel) onto cuda device in 0.09s. Total model size: 469.44MB, VRAM: 469.44MB (100.0%)
[2025-03-05 20:12:33,059]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'c1b60061-0709-4367-ba7a-03a88ec2f770:tokenizer' (CLIPTokenizer) onto cuda device in 0.00s. Total model size: 0.00MB, VRAM: 0.00MB (0.0%)
[2025-03-05 20:12:34,459]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '600dcc16-38ba-4a3b-87b7-7054c6aac570:transformer' (Flux) onto cuda device in 1.20s. Total model size: 5679.45MB, VRAM: 5679.45MB (100.0%)
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 30/30 [02:00<00:00, 4.01s/it]
[2025-03-05 20:14:35,039]::[InvokeAI]::ERROR --> Error while invoking session f4820122-acc7-4e19-94ee-a395020de121, invocation 14db28d5-75ce-40fa-808c-72d58a63be19 (flux_vae_decode): 'stable-diffusion/v1-inference.yaml'
[2025-03-05 20:14:35,039]::[InvokeAI]::ERROR --> Traceback (most recent call last):
File "/home/ubuntu/pinokio/api/invoke.git/app/env/lib/python3.10/site-packages/invokeai/app/services/session_processor/session_processor_default.py", line 129, in run_node
output = invocation.invoke_internal(context=context, services=self._services)
File "/home/ubuntu/pinokio/api/invoke.git/app/env/lib/python3.10/site-packages/invokeai/app/invocations/baseinvocation.py", line 303, in invoke_internal
output = self.invoke(context)
File "/home/ubuntu/pinokio/api/invoke.git/app/env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/ubuntu/pinokio/api/invoke.git/app/env/lib/python3.10/site-packages/invokeai/app/invocations/flux_vae_decode.py", line 72, in invoke
vae_info = context.models.load(self.vae.vae)
File "/home/ubuntu/pinokio/api/invoke.git/app/env/lib/python3.10/site-packages/invokeai/app/services/shared/invocation_context.py", line 397, in load
return self._services.model_manager.load.load_model(model, submodel_type)
File "/home/ubuntu/pinokio/api/invoke.git/app/env/lib/python3.10/site-packages/invokeai/app/services/model_load/model_load_default.py", line 70, in load_model
).load_model(model_config, submodel_type)
File "/home/ubuntu/pinokio/api/invoke.git/app/env/lib/python3.10/site-packages/invokeai/backend/model_manager/load/load_default.py", line 58, in load_model
cache_record = self._load_and_cache(model_config, submodel_type)
File "/home/ubuntu/pinokio/api/invoke.git/app/env/lib/python3.10/site-packages/invokeai/backend/model_manager/load/load_default.py", line 79, in _load_and_cache
loaded_model = self._load_model(config, submodel_type)
File "/home/ubuntu/pinokio/api/invoke.git/app/env/lib/python3.10/site-packages/invokeai/backend/model_manager/load/model_loaders/flux.py", line 84, in _load_model
model = AutoEncoder(ae_params[config.config_path])
KeyError: 'stable-diffusion/v1-inference.yaml'

[2025-03-05 20:14:35,093]::[InvokeAI]::INFO --> Graph stats: f4820122-acc7-4e19-94ee-a395020de121
Node                         Calls  Seconds   VRAM Used
flux_model_loader            1      0.001s    6.844G
lora_selector                1      0.000s    6.844G
collect                      2      0.001s    6.844G
flux_lora_collection_loader  1      0.002s    6.844G
flux_text_encoder            1      5.305s    9.129G
flux_denoise                 1      121.917s  7.432G
flux_vae_decode              1      0.025s    6.850G
TOTAL GRAPH EXECUTION TIME: 127.251s
TOTAL GRAPH WALL TIME: 127.259s
RAM used by InvokeAI process: 8.46G (+0.148G)
RAM used to load models: 14.89G
VRAM in use: 6.850G
RAM cache statistics:
Model cache hits: 8
Model cache misses: 7
Models cached: 5
Models cleared from cache: 1
Cache high water mark: 8.89/0.00G
```
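The failing frame is the last one in the traceback: `AutoEncoder(ae_params[config.config_path])` indexes a table of FLUX autoencoder presets with the config path stored on the model record, and a record carrying a legacy SD config path has no entry there. A minimal sketch of that failure mode (the dict key and contents below are hypothetical stand-ins, not InvokeAI's real values):

```python
# Minimal sketch of the KeyError seen in the traceback above.
# ae_params stands in for InvokeAI's preset table of FLUX autoencoder
# parameters; the real keys/values differ, but the lookup shape is the same.
ae_params = {
    "flux/flux1-vae.yaml": {"z_channels": 16},  # hypothetical FLUX VAE entry
}

# What the broken model record apparently stores:
config_path = "stable-diffusion/v1-inference.yaml"

try:
    params = ae_params[config_path]  # plain dict indexing, no fallback
except KeyError as err:
    print(f"KeyError: {err}")  # -> KeyError: 'stable-diffusion/v1-inference.yaml'
```

So the KeyError is less about the FLUX weights themselves and more about the VAE record pointing at a Stable Diffusion legacy config.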

(Two screenshots of the error were attached to the original issue.)

What happened

The model fails to load with a KeyError mentioning stable-diffusion/v1-inference.yaml during VAE decode.
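
If it helps with triage: the offending config_path should be visible in the model record itself. A sketch for inspecting it, assuming InvokeAI's records live in a `models` table with a JSON `config` column in invokeai.db (schema unverified for this version; back up the file first):

```python
import json
import sqlite3

# Database path taken from the startup log above.
DB = "/home/ubuntu/pinokio/drive/drives/peers/d1741151917640/databases/invokeai.db"

con = sqlite3.connect(DB)
for (raw,) in con.execute("SELECT config FROM models"):
    cfg = json.loads(raw)
    # Flag any record whose stored config_path points at the legacy SD config.
    if "v1-inference" in str(cfg.get("config_path", "")):
        print(cfg.get("name"), "->", cfg.get("config_path"))
con.close()
```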

What you expected to happen

FLUX models should load successfully like SD and SDXL models.

How to reproduce the problem

No response

Additional context

No response

Discord username

No response

@greyrabbit2003 greyrabbit2003 added the bug Something isn't working label Mar 5, 2025
@psyins

psyins commented Mar 13, 2025

I have the exact same problem. It was working fine until yesterday, but suddenly this error occurred.
I don't know where the problem is coming from. Does anyone have a solution?
