I installed InvokeAI through Pinokio Browser, and while SD and SDXL models work fine, FLUX models fail to load. The error suggests an issue with model loading, specifically referencing stable-diffusion/v1-inference.yaml.
I have the exact same problem. It was working fine until yesterday, but suddenly this error occurred.
I don't know where the problem is coming from. Does anyone have a solution?
Is there an existing issue for this problem?
Operating system
Linux
GPU vendor
Nvidia (CUDA)
GPU model
RTX 3060
GPU VRAM
12GB
Version number
v5.7.1
Browser
Firefox
Python dependencies
Steps to Reproduce:
```
<<PINOKIO_SHELL>>eval "$(conda shell.bash hook)" ; conda deactivate ; conda deactivate ; conda deactivate ; conda activate base ; source /home/ubuntu/pinokio/api/invoke.git/app/env/bin/activate /home/ubuntu/pinokio/api/invoke.git/app/env ; invokeai-web
[2025-03-05 20:12:16,606]::[InvokeAI]::INFO --> Graph stats: c1746ee2-9901-4cf0-9786-946e5fa50d09
Node Calls Seconds VRAM Used
flux_model_loader 1 0.013s 0.000G
lora_selector 1 0.002s 0.000G
collect 2 0.002s 0.472G
flux_lora_collection_loader 1 0.001s 0.000G
flux_text_encoder 1 16.994s 9.118G
flux_denoise 1 121.123s 7.427G
flux_vae_decode 1 0.022s 6.844G
TOTAL GRAPH EXECUTION TIME: 138.157s
TOTAL GRAPH WALL TIME: 138.161s
RAM used by InvokeAI process: 8.31G (+7.398G)
RAM used to load models: 14.89G
VRAM in use: 6.844G
RAM cache statistics:
Model cache hits: 8
Model cache misses: 7
Models cached: 5
Models cleared from cache: 1
Cache high water mark: 8.89/0.00G
[2025-03-05 20:12:27,772]::[InvokeAI]::INFO --> Executing queue item 226, session f4820122-acc7-4e19-94ee-a395020de121
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 5.68it/s]
[2025-03-05 20:12:30,154]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '57198c6b-794c-42ce-aa90-bf32f081ff17:text_encoder_2' (T5EncoderModel) onto cuda device in 1.22s. Total model size: 9083.39MB, VRAM: 9083.39MB (100.0%)
[2025-03-05 20:12:30,251]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '57198c6b-794c-42ce-aa90-bf32f081ff17:tokenizer_2' (T5Tokenizer) onto cuda device in 0.00s. Total model size: 0.03MB, VRAM: 0.00MB (0.0%)
[2025-03-05 20:12:33,006]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'c1b60061-0709-4367-ba7a-03a88ec2f770:text_encoder' (CLIPTextModel) onto cuda device in 0.09s. Total model size: 469.44MB, VRAM: 469.44MB (100.0%)
[2025-03-05 20:12:33,059]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'c1b60061-0709-4367-ba7a-03a88ec2f770:tokenizer' (CLIPTokenizer) onto cuda device in 0.00s. Total model size: 0.00MB, VRAM: 0.00MB (0.0%)
[2025-03-05 20:12:34,459]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '600dcc16-38ba-4a3b-87b7-7054c6aac570:transformer' (Flux) onto cuda device in 1.20s. Total model size: 5679.45MB, VRAM: 5679.45MB (100.0%)
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 30/30 [02:00<00:00, 4.01s/it]
[2025-03-05 20:14:35,039]::[InvokeAI]::ERROR --> Error while invoking session f4820122-acc7-4e19-94ee-a395020de121, invocation 14db28d5-75ce-40fa-808c-72d58a63be19 (flux_vae_decode): 'stable-diffusion/v1-inference.yaml'
[2025-03-05 20:14:35,039]::[InvokeAI]::ERROR --> Traceback (most recent call last):
File "/home/ubuntu/pinokio/api/invoke.git/app/env/lib/python3.10/site-packages/invokeai/app/services/session_processor/session_processor_default.py", line 129, in run_node
output = invocation.invoke_internal(context=context, services=self._services)
File "/home/ubuntu/pinokio/api/invoke.git/app/env/lib/python3.10/site-packages/invokeai/app/invocations/baseinvocation.py", line 303, in invoke_internal
output = self.invoke(context)
File "/home/ubuntu/pinokio/api/invoke.git/app/env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/ubuntu/pinokio/api/invoke.git/app/env/lib/python3.10/site-packages/invokeai/app/invocations/flux_vae_decode.py", line 72, in invoke
vae_info = context.models.load(self.vae.vae)
File "/home/ubuntu/pinokio/api/invoke.git/app/env/lib/python3.10/site-packages/invokeai/app/services/shared/invocation_context.py", line 397, in load
return self._services.model_manager.load.load_model(model, submodel_type)
File "/home/ubuntu/pinokio/api/invoke.git/app/env/lib/python3.10/site-packages/invokeai/app/services/model_load/model_load_default.py", line 70, in load_model
).load_model(model_config, submodel_type)
File "/home/ubuntu/pinokio/api/invoke.git/app/env/lib/python3.10/site-packages/invokeai/backend/model_manager/load/load_default.py", line 58, in load_model
cache_record = self._load_and_cache(model_config, submodel_type)
File "/home/ubuntu/pinokio/api/invoke.git/app/env/lib/python3.10/site-packages/invokeai/backend/model_manager/load/load_default.py", line 79, in _load_and_cache
loaded_model = self._load_model(config, submodel_type)
File "/home/ubuntu/pinokio/api/invoke.git/app/env/lib/python3.10/site-packages/invokeai/backend/model_manager/load/model_loaders/flux.py", line 84, in _load_model
model = AutoEncoder(ae_params[config.config_path])
KeyError: 'stable-diffusion/v1-inference.yaml'
[2025-03-05 20:14:35,093]::[InvokeAI]::INFO --> Graph stats: f4820122-acc7-4e19-94ee-a395020de121
Node Calls Seconds VRAM Used
flux_model_loader 1 0.001s 6.844G
lora_selector 1 0.000s 6.844G
collect 2 0.001s 6.844G
flux_lora_collection_loader 1 0.002s 6.844G
flux_text_encoder 1 5.305s 9.129G
flux_denoise 1 121.917s 7.432G
flux_vae_decode 1 0.025s 6.850G
TOTAL GRAPH EXECUTION TIME: 127.251s
TOTAL GRAPH WALL TIME: 127.259s
RAM used by InvokeAI process: 8.46G (+0.148G)
RAM used to load models: 14.89G
VRAM in use: 6.850G
RAM cache statistics:
Model cache hits: 8
Model cache misses: 7
Models cached: 5
Models cleared from cache: 1
Cache high water mark: 8.89/0.00G
```
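For context, the final frame of the traceback (`flux.py`, line 84: `model = AutoEncoder(ae_params[config.config_path])`) shows the FLUX VAE loader indexing a parameter table with the model record's `config_path`. A minimal sketch of that failure mode follows; the table keys here are illustrative placeholders, not InvokeAI's actual values:

```python
# Hypothetical sketch of the lookup that fails in
# invokeai/backend/model_manager/load/model_loaders/flux.py.
# The keys below are placeholders; InvokeAI's real ae_params table differs.
ae_params = {
    "flux-dev": {"z_channels": 16},
    "flux-schnell": {"z_channels": 16},
}

def load_flux_vae_params(config_path: str) -> dict:
    """Look up FLUX autoencoder params by the model record's config_path.

    If the record was (mis)classified with an SD1 config path, the lookup
    raises the same KeyError seen in the log.
    """
    return ae_params[config_path]
```

Calling `load_flux_vae_params("stable-diffusion/v1-inference.yaml")` reproduces the reported `KeyError`, which suggests the VAE's stored record carries an SD1 `config_path` instead of a FLUX key, so re-installing or re-scanning the FLUX VAE model may correct the record.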
What happened
The FLUX VAE fails to load during decode: the `flux_vae_decode` node raises `KeyError: 'stable-diffusion/v1-inference.yaml'`.
What you expected to happen
FLUX models should load successfully like SD and SDXL models.
How to reproduce the problem
No response
Additional context
No response
Discord username
No response