
Applying IPAdapter to Flux: mat1 and mat2 shapes cannot be multiplied (1x1024 and 768x16384) #153

Open
stromyu520 opened this issue Nov 19, 2024 · 11 comments

Comments

@stromyu520

ip_adapter_workflow.json

ComfyUI Error Report

Error Details

  • Node Type: ApplyFluxIPAdapter
  • Exception Type: RuntimeError
  • Exception Message: mat1 and mat2 shapes cannot be multiplied (1x1024 and 768x16384)
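
The mismatch can be reproduced in isolation. The shapes in the message suggest the IP-Adapter projection layer expects 768-dim image embeds (the width of OpenAI CLIP ViT-L/14 image features), while the loaded vision encoder is producing 1024-dim embeds (e.g. ViT-H/14). A minimal sketch, assuming the projection is a plain `nn.Linear` (a stand-in for `self.proj` in `layers.py`, not the actual x-flux code):

```python
import torch

# Stand-in for the IP-Adapter projection: weight is 768x16384,
# so it only accepts 768-dim inputs.
proj = torch.nn.Linear(768, 16384)

# A 1024-dim embedding, as produced by a mismatched CLIP vision model.
embeds = torch.randn(1, 1024)

try:
    proj(embeds)
except RuntimeError as e:
    print(e)  # → mat1 and mat2 shapes cannot be multiplied (1x1024 and 768x16384)
```

If this is the cause, the fix would be selecting the CLIP vision checkpoint the adapter was trained against (a 768-dim ViT-L/14 model) rather than a ViT-H/14 one in the LoadFluxIPAdapter node.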

Stack Trace

  File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\x-flux-comfyui\nodes.py", line 615, in applymodel
    ip_projes = ip_adapter_flux['ip_adapter_proj_model'](out.to(ip_projes_dev, dtype=torch.bfloat16)).to(device, dtype=torch.bfloat16)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\x-flux-comfyui\layers.py", line 291, in forward
    clip_extra_context_tokens = self.proj(embeds).reshape(
                                ^^^^^^^^^^^^^^^^^

  File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\linear.py", line 125, in forward
    return F.linear(input, self.weight, self.bias)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

System Information

  • ComfyUI Version: v0.2.7-4-g5e29e7a
  • Arguments: ComfyUI\main.py --windows-standalone-build
  • OS: nt
  • Python Version: 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
  • Embedded Python: true
  • PyTorch Version: 2.5.1+cu124

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 4060 Laptop GPU : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 8585216000
    • VRAM Free: 2371184120
    • Torch VRAM Total: 5100273664
    • Torch VRAM Free: 82142712

Logs

2024-11-19 16:52:40,656 - root - INFO - Total VRAM 8188 MB, total RAM 32469 MB
2024-11-19 16:52:40,657 - root - INFO - pytorch version: 2.5.1+cu124
2024-11-19 16:52:40,659 - root - INFO - Set vram state to: NORMAL_VRAM
2024-11-19 16:52:40,659 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 4060 Laptop GPU : cudaMallocAsync
2024-11-19 16:52:46,809 - root - INFO - Using pytorch cross attention
2024-11-19 16:53:01,210 - root - INFO - [Prompt Server] web root: D:\AI\ComfyUI_windows_portable\ComfyUI\web
2024-11-19 16:53:01,260 - root - INFO - Adding extra search path checkpoints E:\dev\pythonProject\stable-diffusion-webui\models/Stable-diffusion
2024-11-19 16:53:01,260 - root - INFO - Adding extra search path configs E:\dev\pythonProject\stable-diffusion-webui\models/Stable-diffusion
2024-11-19 16:53:01,260 - root - INFO - Adding extra search path vae E:\dev\pythonProject\stable-diffusion-webui\models/VAE
2024-11-19 16:53:01,260 - root - INFO - Adding extra search path loras E:\dev\pythonProject\stable-diffusion-webui\models/Lora
2024-11-19 16:53:01,266 - root - INFO - Adding extra search path loras E:\dev\pythonProject\stable-diffusion-webui\models/LyCORIS
2024-11-19 16:53:01,267 - root - INFO - Adding extra search path upscale_models E:\dev\pythonProject\stable-diffusion-webui\models/ESRGAN
2024-11-19 16:53:01,268 - root - INFO - Adding extra search path upscale_models E:\dev\pythonProject\stable-diffusion-webui\models/RealESRGAN
2024-11-19 16:53:01,269 - root - INFO - Adding extra search path upscale_models E:\dev\pythonProject\stable-diffusion-webui\models/SwinIR
2024-11-19 16:53:01,270 - root - INFO - Adding extra search path embeddings E:\dev\pythonProject\stable-diffusion-webui\embeddings
2024-11-19 16:53:01,272 - root - INFO - Adding extra search path hypernetworks E:\dev\pythonProject\stable-diffusion-webui\models/hypernetworks
2024-11-19 16:53:01,274 - root - INFO - Adding extra search path controlnet E:\dev\pythonProject\stable-diffusion-webui\models/ControlNet
2024-11-19 16:53:13,666 - root - INFO - 
Import times for custom nodes:
2024-11-19 16:53:13,670 - root - INFO -    0.0 seconds: D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
2024-11-19 16:53:13,671 - root - INFO -    0.0 seconds: D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\AIGODLIKE-COMFYUI-TRANSLATION
2024-11-19 16:53:13,672 - root - INFO -    0.1 seconds: D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts
2024-11-19 16:53:13,673 - root - INFO -    0.4 seconds: D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus
2024-11-19 16:53:13,674 - root - INFO -    0.9 seconds: D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\x-flux-comfyui
2024-11-19 16:53:13,675 - root - INFO -    1.0 seconds: D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Crystools
2024-11-19 16:53:13,677 - root - INFO -    1.5 seconds: D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
2024-11-19 16:53:13,678 - root - INFO -    4.6 seconds: D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux
2024-11-19 16:53:13,679 - root - INFO - 
2024-11-19 16:53:13,737 - root - INFO - Starting server

2024-11-19 16:53:13,741 - root - INFO - To see the GUI go to: http://127.0.0.1:8188
2024-11-19 17:08:33,936 - root - INFO - got prompt
2024-11-19 17:08:34,165 - root - INFO - Using pytorch attention in VAE
2024-11-19 17:08:34,171 - root - INFO - Using pytorch attention in VAE
2024-11-19 17:08:40,742 - root - WARNING - clip missing: ['text_projection.weight']
2024-11-19 17:08:42,068 - root - INFO - Requested to load FluxClipModel_
2024-11-19 17:08:42,069 - root - INFO - Loading 1 new model
2024-11-19 17:08:45,134 - root - INFO - loaded completely 0.0 4777.53759765625 True
2024-11-19 17:09:11,396 - root - INFO - model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
2024-11-19 17:09:11,398 - root - INFO - model_type FLUX
2024-11-19 17:12:29,101 - root - ERROR - !!! Exception during processing !!! mat1 and mat2 shapes cannot be multiplied (1x1024 and 768x16384)
2024-11-19 17:12:29,362 - root - ERROR - Traceback (most recent call last):
  File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\x-flux-comfyui\nodes.py", line 615, in applymodel
    ip_projes = ip_adapter_flux['ip_adapter_proj_model'](out.to(ip_projes_dev, dtype=torch.bfloat16)).to(device, dtype=torch.bfloat16)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\x-flux-comfyui\layers.py", line 291, in forward
    clip_extra_context_tokens = self.proj(embeds).reshape(
                                ^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\linear.py", line 125, in forward
    return F.linear(input, self.weight, self.bias)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x1024 and 768x16384)

2024-11-19 17:12:29,378 - root - INFO - Prompt executed in 235.42 seconds
2024-11-19 17:13:18,823 - root - INFO - got prompt
2024-11-19 17:13:25,146 - root - ERROR - !!! Exception during processing !!! mat1 and mat2 shapes cannot be multiplied (1x1024 and 768x16384)
2024-11-19 17:13:25,149 - root - ERROR - Traceback (most recent call last):
  [traceback identical to the one above]
RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x1024 and 768x16384)

2024-11-19 17:13:25,152 - root - INFO - Prompt executed in 6.27 seconds
2024-11-19 17:13:56,456 - root - INFO - got prompt
2024-11-19 17:13:56,469 - root - ERROR - Failed to validate prompt for output 36:
2024-11-19 17:13:56,470 - root - ERROR - * LoadFluxIPAdapter 32:
2024-11-19 17:13:56,470 - root - ERROR -   - Value not in list: clip_vision: 'model.safetensors' not in ['CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors']
2024-11-19 17:13:56,471 - root - ERROR - * DualCLIPLoader 4:
2024-11-19 17:13:56,471 - root - ERROR -   - Value not in list: clip_name1: 't5xxl_fp16.safetensors' not in ['clip_g.safetensors', 'clip_l.safetensors', 't5xxl_fp8_e4m3fn.safetensors']
2024-11-19 17:13:56,471 - root - ERROR - Output will be ignored
2024-11-19 17:13:56,471 - root - WARNING - invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}
2024-11-19 17:14:07,508 - root - INFO - got prompt
2024-11-19 17:14:13,532 - root - ERROR - !!! Exception during processing !!! mat1 and mat2 shapes cannot be multiplied (1x1024 and 768x16384)
2024-11-19 17:14:13,536 - root - ERROR - Traceback (most recent call last):
  [traceback identical to the one above]
RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x1024 and 768x16384)

2024-11-19 17:14:13,536 - root - INFO - Prompt executed in 6.00 seconds
2024-11-19 17:27:59,119 - root - INFO - got prompt
2024-11-19 17:28:06,486 - root - INFO - model weight dtype torch.float16, manual cast: None
2024-11-19 17:28:06,498 - root - INFO - model_type EPS
2024-11-19 17:28:09,535 - root - INFO - Using pytorch attention in VAE
2024-11-19 17:28:09,541 - root - INFO - Using pytorch attention in VAE
2024-11-19 17:28:12,705 - root - INFO - Requested to load SD1ClipModel
2024-11-19 17:28:12,707 - root - INFO - Loading 1 new model
2024-11-19 17:28:13,196 - root - INFO - loaded completely 0.0 235.84423828125 True
2024-11-19 17:28:17,235 - root - INFO - Requested to load CLIPVisionModelProjection
2024-11-19 17:28:17,235 - root - INFO - Loading 1 new model
2024-11-19 17:28:18,190 - root - INFO - loaded completely 0.0 1208.09814453125 True
2024-11-19 17:28:20,793 - root - INFO - Requested to load BaseModel
2024-11-19 17:28:20,795 - root - INFO - Loading 1 new model
2024-11-19 17:28:22,227 - root - INFO - loaded completely 0.0 1639.406135559082 True
2024-11-19 17:28:34,361 - root - INFO - Requested to load AutoencoderKL
2024-11-19 17:28:34,364 - root - INFO - Loading 1 new model
2024-11-19 17:28:34,661 - root - INFO - loaded completely 0.0 159.55708122253418 True
2024-11-19 17:28:35,957 - root - INFO - Prompt executed in 36.81 seconds
2024-11-19 17:29:50,315 - root - INFO - got prompt
2024-11-19 17:29:54,943 - root - INFO - Using pytorch attention in VAE
2024-11-19 17:29:54,943 - root - INFO - Using pytorch attention in VAE
2024-11-19 17:30:00,935 - root - WARNING - clip missing: ['text_projection.weight']
2024-11-19 17:30:02,082 - root - INFO - Requested to load FluxClipModel_
2024-11-19 17:30:02,083 - root - INFO - Loading 1 new model
2024-11-19 17:30:05,503 - root - INFO - loaded completely 0.0 4777.53759765625 True
2024-11-19 17:30:20,536 - root - INFO - model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
2024-11-19 17:30:20,537 - root - INFO - model_type FLUX
2024-11-19 17:34:08,288 - root - ERROR - !!! Exception during processing !!! mat1 and mat2 shapes cannot be multiplied (1x1280 and 768x16384)
2024-11-19 17:34:08,315 - root - ERROR - Traceback (most recent call last):
  [traceback identical to the one above]
RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x1280 and 768x16384)

2024-11-19 17:34:08,341 - root - INFO - Prompt executed in 258.01 seconds
2024-11-19 17:36:23,430 - root - INFO - got prompt
2024-11-19 17:36:44,745 - root - ERROR - !!! Exception during processing !!! mat1 and mat2 shapes cannot be multiplied (1x1024 and 768x16384)
2024-11-19 17:36:44,745 - root - ERROR - Traceback (most recent call last):
  [traceback identical to the one above]
RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x1024 and 768x16384)

2024-11-19 17:36:44,753 - root - INFO - Prompt executed in 21.28 seconds
2024-11-19 17:37:21,280 - root - INFO - got prompt
2024-11-19 17:37:40,581 - root - ERROR - !!! Exception during processing !!! mat1 and mat2 shapes cannot be multiplied (1x1024 and 768x65536)
2024-11-19 17:37:40,594 - root - ERROR - Traceback (most recent call last):
  [traceback identical to the one above]
RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x1024 and 768x65536)

2024-11-19 17:37:40,599 - root - INFO - Prompt executed in 19.29 seconds
2024-11-19 17:37:55,525 - root - INFO - got prompt
2024-11-19 17:38:02,704 - root - ERROR - !!! Exception during processing !!! mat1 and mat2 shapes cannot be multiplied (1x1024 and 768x65536)
2024-11-19 17:38:02,705 - root - ERROR - Traceback (most recent call last):
  [traceback identical to the one above]
RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x1024 and 768x65536)

2024-11-19 17:38:02,708 - root - INFO - Prompt executed in 7.14 seconds
2024-11-19 17:41:11,853 - root - INFO - got prompt
2024-11-19 17:41:29,595 - root - ERROR - !!! Exception during processing !!! mat1 and mat2 shapes cannot be multiplied (1x1024 and 768x16384)
2024-11-19 17:41:29,595 - root - ERROR - Traceback (most recent call last):
  File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\x-flux-comfyui\nodes.py", line 615, in applymodel
    ip_projes = ip_adapter_flux['ip_adapter_proj_model'](out.to(ip_projes_dev, dtype=torch.bfloat16)).to(device, dtype=torch.bfloat16)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\x-flux-comfyui\layers.py", line 291, in forward
    clip_extra_context_tokens = self.proj(embeds).reshape(
                                ^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\linear.py", line 125, in forward
    return F.linear(input, self.weight, self.bias)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x1024 and 768x16384)

2024-11-19 17:41:29,601 - root - INFO - Prompt executed in 17.72 seconds
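For anyone debugging this: the shape mismatch can be reproduced in isolation. A minimal sketch below, with the layer sizes read off the error message itself; the projection layer is the `self.proj` called at `layers.py` line 291 in the traceback, and everything else (variable names, encoder attribution) is an assumption. The error means the image encoder produced a 1024-dim embedding (a ViT-H model such as CLIP-ViT-H-14-laion2B-s32B-b79K), while the IP adapter's projection layer was built for 768-dim embeddings (OpenAI CLIP ViT-L/14).

```python
import torch
import torch.nn as nn

# Stand-in for the IP adapter's projection layer: it expects 768-dim image
# embeddings, as read off the error message "768x16384".
proj = nn.Linear(768, 16384)        # weight shape: (16384, 768)

# A ViT-H image encoder emits 1024-dim embeddings: wrong input size.
vit_h_embed = torch.randn(1, 1024)
try:
    proj(vit_h_embed)
except RuntimeError as e:
    print(e)  # matches the error in the log above

# A ViT-L/14 image encoder emits 768-dim embeddings: the expected size.
vit_l_embed = torch.randn(1, 768)
out = proj(vit_l_embed)
print(out.shape)  # torch.Size([1, 16384])
```

So the fix is not in the workflow wiring but in which CLIP vision checkpoint is loaded by LoadFluxIPAdapter.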

Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":36,"last_link_id":76,"nodes":[{"id":6,"type":"EmptyLatentImage","pos":{"0":553,"1":475},"size":{"0":315,"1":106},"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"LATENT","type":"LATENT","links":[75],"slot_index":0,"shape":3,"label":"Latent"}],"properties":{"Node name for S&R":"EmptyLatentImage"},"widgets_values":[1024,1024,1]},{"id":19,"type":"CLIPTextEncodeFlux","pos":{"0":142,"1":288},"size":{"0":400,"1":200},"flags":{},"order":10,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":27,"slot_index":0,"label":"CLIP"}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[26],"slot_index":0,"shape":3,"label":"条件"}],"properties":{"Node name for S&R":"CLIPTextEncodeFlux"},"widgets_values":["","",4]},{"id":8,"type":"VAELoader","pos":{"0":1048,"1":347},"size":{"0":315,"1":58},"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"VAE","type":"VAE","links":[59],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"VAELoader"},"widgets_values":["ae.safetensors"]},{"id":35,"type":"FluxLoraLoader","pos":{"0":1020,"1":-158},"size":{"0":315,"1":82},"flags":{},"order":2,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":null,"label":"模型"}],"outputs":[{"name":"MODEL","type":"MODEL","links":null,"shape":3,"label":"模型"}],"properties":{"Node name for S&R":"FluxLoraLoader"},"widgets_values":["anime_lora.safetensors",1]},{"id":10,"type":"UNETLoader","pos":{"0":149,"1":589},"size":{"0":315,"1":82},"flags":{},"order":3,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[61],"slot_index":0,"shape":3,"label":"模型"}],"properties":{"Node name for 
S&R":"UNETLoader"},"widgets_values":["flux1-dev-fp8.safetensors","fp8_e4m3fn"]},{"id":3,"type":"XlabsSampler","pos":{"0":887,"1":57},"size":{"0":342.5999755859375,"1":282},"flags":{},"order":12,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":62,"slot_index":0,"label":"模型"},{"name":"conditioning","type":"CONDITIONING","link":18,"label":"正面条件"},{"name":"neg_conditioning","type":"CONDITIONING","link":26,"label":"负面条件"},{"name":"latent_image","type":"LATENT","link":75,"shape":7,"label":"Latent"},{"name":"controlnet_condition","type":"ControlNetCondition","link":null,"shape":7,"label":"ControlNet条件"}],"outputs":[{"name":"latent","type":"LATENT","links":[6],"shape":3,"label":"Latent"}],"properties":{"Node name for S&R":"XlabsSampler"},"widgets_values":[4,"fixed",50,1,3.5,0,1]},{"id":5,"type":"CLIPTextEncodeFlux","pos":{"0":428,"1":-169},"size":{"0":400,"1":200},"flags":{},"order":9,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":2,"slot_index":0,"label":"CLIP"}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[18],"slot_index":0,"shape":3,"label":"条件"}],"properties":{"Node name for S&R":"CLIPTextEncodeFlux"},"widgets_values":["holding sign with glowing green text \"X-LABS IP Adapter\"","holding sign with glowing green text \"X-LABS IP Adapter\"",4]},{"id":27,"type":"ApplyFluxIPAdapter","pos":{"0":642,"1":248},"size":{"0":210,"1":98},"flags":{},"order":11,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":61,"slot_index":0,"label":"模型"},{"name":"ip_adapter_flux","type":"IP_ADAPTER_FLUX","link":65,"label":"IPAdapter_Flux"},{"name":"image","type":"IMAGE","link":73,"slot_index":2,"label":"图像"}],"outputs":[{"name":"MODEL","type":"MODEL","links":[62],"slot_index":0,"shape":3,"label":"模型"}],"properties":{"Node name for 
S&R":"ApplyFluxIPAdapter"},"widgets_values":[0.92]},{"id":29,"type":"ImageCrop","pos":{"0":-54,"1":53},"size":{"0":315,"1":130},"flags":{},"order":7,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":55,"slot_index":0,"label":"图像"}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[],"slot_index":0,"shape":3,"label":"图像"}],"properties":{"Node name for S&R":"ImageCrop"},"widgets_values":[1024,512,4,4]},{"id":33,"type":"ImageScale","pos":{"0":-80,"1":-148},"size":{"0":315,"1":130},"flags":{},"order":8,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":72,"slot_index":0,"label":"图像"}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[73],"slot_index":0,"shape":3,"label":"图像"}],"properties":{"Node name for S&R":"ImageScale"},"widgets_values":["nearest-exact",1024,1024,"disabled"]},{"id":16,"type":"LoadImage","pos":{"0":-446,"1":-191},"size":{"0":315,"1":314},"flags":{},"order":4,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[55,72],"slot_index":0,"shape":3,"label":"图像"},{"name":"MASK","type":"MASK","links":null,"shape":3,"label":"遮罩"}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["statue.jpg","image"]},{"id":36,"type":"PreviewImage","pos":{"0":1663,"1":-228},"size":{"0":865.8053588867188,"1":863.5560913085938},"flags":{},"order":14,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":76,"slot_index":0,"label":"图像"}],"outputs":[],"properties":{"Node name for S&R":"PreviewImage"},"widgets_values":[]},{"id":7,"type":"VAEDecode","pos":{"0":1346,"1":-128},"size":{"0":210,"1":46},"flags":{},"order":13,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":6,"slot_index":0,"label":"Latent"},{"name":"vae","type":"VAE","link":59,"label":"VAE"}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[76],"slot_index":0,"shape":3,"label":"图像"}],"properties":{"Node name for 
S&R":"VAEDecode"},"widgets_values":[]},{"id":32,"type":"LoadFluxIPAdapter","pos":{"0":313,"1":147},"size":{"0":315,"1":106},"flags":{},"order":5,"mode":0,"inputs":[],"outputs":[{"name":"ipadapterFlux","type":"IP_ADAPTER_FLUX","links":[65],"slot_index":0,"shape":3,"label":"IPAdapter_Flux"}],"properties":{"Node name for S&R":"LoadFluxIPAdapter"},"widgets_values":["flux-ip-adapter.safetensors","CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors","CPU"]},{"id":4,"type":"DualCLIPLoader","pos":{"0":-275,"1":322},"size":{"0":315,"1":106},"flags":{},"order":6,"mode":0,"inputs":[],"outputs":[{"name":"CLIP","type":"CLIP","links":[2,27],"slot_index":0,"shape":3,"label":"CLIP"}],"properties":{"Node name for S&R":"DualCLIPLoader"},"widgets_values":["t5xxl_fp8_e4m3fn.safetensors","clip_l.safetensors","flux"]}],"links":[[2,4,0,5,0,"CLIP"],[6,3,0,7,0,"LATENT"],[18,5,0,3,1,"CONDITIONING"],[26,19,0,3,2,"CONDITIONING"],[27,4,0,19,0,"CLIP"],[55,16,0,29,0,"IMAGE"],[59,8,0,7,1,"VAE"],[61,10,0,27,0,"MODEL"],[62,27,0,3,0,"MODEL"],[65,32,0,27,1,"IP_ADAPTER_FLUX"],[72,16,0,33,0,"IMAGE"],[73,33,0,27,2,"IMAGE"],[75,6,0,3,3,"LATENT"],[76,7,0,36,0,"IMAGE"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.6830134553650705,"offset":[469.14572083740313,252.49887402343762]}},"version":0.4}

Additional Context

(Please add any additional context or steps to reproduce the error here)

@yitao2020
Copy link

I also encountered the same problem

@qiwang1996

Me too ;(

@qiwang1996

qiwang1996 commented Nov 20, 2024

I got the point. You need to use this clip vision model.
[screenshot]

@yitao2020

> i got the point. need to use this clip vision. [screenshot]

Is it working?

@yitao2020

> i got the point. need to use this clip vision. [screenshot]
>
> Is it working?

I checked, but I can't find it.

@qiwang1996

> i got the point. need to use this clip vision. [screenshot]
>
> Is it working?
>
> I checked, but I can't find it.

You need to download it from Hugging Face first.

@yitao2020

> i got the point. need to use this clip vision. [screenshot]
>
> Is it working?
>
> I checked, but I can't find it.
>
> You need to download it from Hugging Face first.

I downloaded it and tried it.

@stromyu520
Author

> i got the point. need to use this clip vision. [screenshot]

[screenshot]

I used that one and still get the error.

@bagelbig

In case the solution above did not work: I noticed this problem crops up when you use two different base models (Flux and SDXL), so make sure to use Flux everywhere.

@Rochger

Rochger commented Dec 3, 2024

> i got the point. need to use this clip vision. [screenshot]

Yep, it worked.

@htn1985

htn1985 commented Dec 16, 2024

The model does not match; try another one.
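If you are unsure whether a checkpoint matches, you can read the expected image-embedding width straight out of the IP adapter's state dict: the `in_features` of its projection Linear is the dimension the CLIP vision encoder must produce. A hypothetical sketch below; the key name is illustrative (a real checkpoint would be loaded with `torch.load(...)` or safetensors, and you should list its keys first to find the projection weight).

```python
import torch

# Stand-in for a loaded IP adapter state dict; the key name is illustrative.
state = {"ip_adapter_proj_model.proj.weight": torch.empty(16384, 768)}

w = state["ip_adapter_proj_model.proj.weight"]
expected_dim = w.shape[1]  # second dim of a Linear weight = in_features
print("expected image-embedding dim:", expected_dim)
# 768 corresponds to OpenAI CLIP ViT-L/14; 1024 to a ViT-H encoder
```

Pick the CLIP vision checkpoint in LoadFluxIPAdapter whose output width matches this number.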
