pulid_v1.1 model cannot work #33

Open
Amazon90 opened this issue Nov 24, 2024 · 1 comment

Comments

@Amazon90

https://huggingface.co/guozinan/PuLID/blob/main/pulid_v1.1.safetensors

ComfyUI Error Report

Error Details

  • Node ID: 8
  • Node Type: ApplyEcomIDAdvanced
  • Exception Type: RuntimeError
  • Exception Message: Error(s) in loading state_dict for IDEncoder:
    Missing key(s) in state_dict: "body.0.weight", "body.0.bias", "body.1.weight", "body.1.bias", "body.3.weight", "body.3.bias", "body.4.weight", "body.4.bias", "body.6.weight", "body.6.bias", "mapping_0.0.weight", "mapping_0.0.bias", "mapping_0.1.weight", "mapping_0.1.bias", "mapping_0.3.weight", "mapping_0.3.bias", "mapping_0.4.weight", "mapping_0.4.bias", "mapping_0.6.weight", "mapping_0.6.bias", "mapping_patch_0.0.weight", "mapping_patch_0.0.bias", "mapping_patch_0.1.weight", "mapping_patch_0.1.bias", "mapping_patch_0.3.weight", "mapping_patch_0.3.bias", "mapping_patch_0.4.weight", "mapping_patch_0.4.bias", "mapping_patch_0.6.weight", "mapping_patch_0.6.bias", "mapping_1.0.weight", "mapping_1.0.bias", "mapping_1.1.weight", "mapping_1.1.bias", "mapping_1.3.weight", "mapping_1.3.bias", "mapping_1.4.weight", "mapping_1.4.bias", "mapping_1.6.weight", "mapping_1.6.bias", "mapping_patch_1.0.weight", "mapping_patch_1.0.bias", "mapping_patch_1.1.weight", "mapping_patch_1.1.bias", "mapping_patch_1.3.weight", "mapping_patch_1.3.bias", "mapping_patch_1.4.weight", "mapping_patch_1.4.bias", "mapping_patch_1.6.weight", "mapping_patch_1.6.bias", "mapping_2.0.weight", "mapping_2.0.bias", "mapping_2.1.weight", "mapping_2.1.bias", "mapping_2.3.weight", "mapping_2.3.bias", "mapping_2.4.weight", "mapping_2.4.bias", "mapping_2.6.weight", "mapping_2.6.bias", "mapping_patch_2.0.weight", "mapping_patch_2.0.bias", "mapping_patch_2.1.weight", "mapping_patch_2.1.bias", "mapping_patch_2.3.weight", "mapping_patch_2.3.bias", "mapping_patch_2.4.weight", "mapping_patch_2.4.bias", "mapping_patch_2.6.weight", "mapping_patch_2.6.bias", "mapping_3.0.weight", "mapping_3.0.bias", "mapping_3.1.weight", "mapping_3.1.bias", "mapping_3.3.weight", "mapping_3.3.bias", "mapping_3.4.weight", "mapping_3.4.bias", "mapping_3.6.weight", "mapping_3.6.bias", "mapping_patch_3.0.weight", "mapping_patch_3.0.bias", "mapping_patch_3.1.weight", "mapping_patch_3.1.bias", "mapping_patch_3.3.weight", "mapping_patch_3.3.bias", "mapping_patch_3.4.weight", "mapping_patch_3.4.bias", "mapping_patch_3.6.weight", "mapping_patch_3.6.bias", "mapping_4.0.weight", "mapping_4.0.bias", "mapping_4.1.weight", "mapping_4.1.bias", "mapping_4.3.weight", "mapping_4.3.bias", "mapping_4.4.weight", "mapping_4.4.bias", "mapping_4.6.weight", "mapping_4.6.bias", "mapping_patch_4.0.weight", "mapping_patch_4.0.bias", "mapping_patch_4.1.weight", "mapping_patch_4.1.bias", "mapping_patch_4.3.weight", "mapping_patch_4.3.bias", "mapping_patch_4.4.weight", "mapping_patch_4.4.bias", "mapping_patch_4.6.weight", "mapping_patch_4.6.bias".

Stack Trace

  File "D:\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "D:\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI\custom_nodes\SDXL_EcomID_ComfyUI\EcomID.py", line 453, in apply_EcomID
    pulid_model = PulidModel(pulid).to(device, dtype=dtype)
                  ^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI\custom_nodes\SDXL_EcomID_ComfyUI\EcomID.py", line 54, in __init__
    self.image_proj_model.load_state_dict(model["image_proj"])

  File "D:\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 2584, in load_state_dict
    raise RuntimeError(

System Information

  • ComfyUI Version: v0.3.4
  • Arguments: D:\ComfyUI\main.py --auto-launch --preview-method auto --disable-cuda-malloc --fast
  • OS: nt
  • Python Version: 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
  • Embedded Python: false
  • PyTorch Version: 2.5.1+cu124

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 4070 Ti : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 12878086144
    • VRAM Free: 5204307086
    • Torch VRAM Total: 5301600256
    • Torch VRAM Free: 66284686
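
The missing keys listed above ("body.*", "mapping_*", "mapping_patch_*") are the parameters of the IDEncoder that EcomID.py builds for its image_proj model, so the error suggests the pulid_v1.1 checkpoint stores its image-projection weights under a different layout. A minimal sketch for verifying this, assuming the safetensors package is installed and using a placeholder path for wherever the file was downloaded:

```python
# Sketch: list the parameter names stored in the downloaded checkpoint so they
# can be compared against the IDEncoder keys the node expects.
from safetensors import safe_open

ckpt_path = "models/pulid/pulid_v1.1.safetensors"  # hypothetical local path

with safe_open(ckpt_path, framework="pt", device="cpu") as f:
    keys = sorted(f.keys())

# Group keys by their first component to see the overall layout at a glance.
top_level = sorted({k.split(".")[0] for k in keys})
print(top_level)
```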
@xuhongming251

https://huggingface.co/Runzy/ip-adapter_pulid/blob/main/ip-adapter_pulidv1.1_sdxl_fp16.safetensors

Error Details

  • Node Type: ApplyEcomIDAdvanced
  • Exception Type: RuntimeError
  • Exception Message: Error(s) in loading state_dict for IDEncoder:
    Missing key(s) in state_dict: "body.0.weight", "body.0.bias", "body.1.weight", "body.1.bias", "body.3.weight", "body.3.bias", "body.4.weight", "body.4.bias", "body.6.weight", "body.6.bias", "mapping_patch_0.0.weight", "mapping_patch_0.0.bias", "mapping_patch_0.1.weight", "mapping_patch_0.1.bias", "mapping_patch_0.3.weight", "mapping_patch_0.3.bias", "mapping_patch_0.4.weight", "mapping_patch_0.4.bias", "mapping_patch_0.6.weight", "mapping_patch_0.6.bias", "mapping_patch_1.0.weight", "mapping_patch_1.0.bias", "mapping_patch_1.1.weight", "mapping_patch_1.1.bias", "mapping_patch_1.3.weight", "mapping_patch_1.3.bias", "mapping_patch_1.4.weight", "mapping_patch_1.4.bias", "mapping_patch_1.6.weight", "mapping_patch_1.6.bias", "mapping_patch_2.0.weight", "mapping_patch_2.0.bias", "mapping_patch_2.1.weight", "mapping_patch_2.1.bias", "mapping_patch_2.3.weight", "mapping_patch_2.3.bias", "mapping_patch_2.4.weight", "mapping_patch_2.4.bias", "mapping_patch_2.6.weight", "mapping_patch_2.6.bias", "mapping_patch_3.0.weight", "mapping_patch_3.0.bias", "mapping_patch_3.1.weight", "mapping_patch_3.1.bias", "mapping_patch_3.3.weight", "mapping_patch_3.3.bias", "mapping_patch_3.4.weight", "mapping_patch_3.4.bias", "mapping_patch_3.6.weight", "mapping_patch_3.6.bias", "mapping_patch_4.0.weight", "mapping_patch_4.0.bias", "mapping_patch_4.1.weight", "mapping_patch_4.1.bias", "mapping_patch_4.3.weight", "mapping_patch_4.3.bias", "mapping_patch_4.4.weight", "mapping_patch_4.4.bias", "mapping_patch_4.6.weight", "mapping_patch_4.6.bias".
    Unexpected key(s) in state_dict: "id_embedding_mapping.0.bias", "id_embedding_mapping.0.weight", "id_embedding_mapping.1.bias", "id_embedding_mapping.1.weight", "id_embedding_mapping.3.bias", "id_embedding_mapping.3.weight", "id_embedding_mapping.4.bias", "id_embedding_mapping.4.weight", "id_embedding_mapping.6.bias", "id_embedding_mapping.6.weight", "latents", "layers.0.0.norm1.bias", "layers.0.0.norm1.weight", "layers.0.0.norm2.bias", "layers.0.0.norm2.weight", "layers.0.0.to_kv.weight", "layers.0.0.to_out.weight", "layers.0.0.to_q.weight", "layers.0.1.0.bias", "layers.0.1.0.weight", "layers.0.1.1.weight", "layers.0.1.3.weight", "layers.1.0.norm1.bias", "layers.1.0.norm1.weight", "layers.1.0.norm2.bias", "layers.1.0.norm2.weight", "layers.1.0.to_kv.weight", "layers.1.0.to_out.weight", "layers.1.0.to_q.weight", "layers.1.1.0.bias", "layers.1.1.0.weight", "layers.1.1.1.weight", "layers.1.1.3.weight", "layers.2.0.norm1.bias", "layers.2.0.norm1.weight", "layers.2.0.norm2.bias", "layers.2.0.norm2.weight", "layers.2.0.to_kv.weight", "layers.2.0.to_out.weight", "layers.2.0.to_q.weight", "layers.2.1.0.bias", "layers.2.1.0.weight", "layers.2.1.1.weight", "layers.2.1.3.weight", "layers.3.0.norm1.bias", "layers.3.0.norm1.weight", "layers.3.0.norm2.bias", "layers.3.0.norm2.weight", "layers.3.0.to_kv.weight", "layers.3.0.to_out.weight", "layers.3.0.to_q.weight", "layers.3.1.0.bias", "layers.3.1.0.weight", "layers.3.1.1.weight", "layers.3.1.3.weight", "layers.4.0.norm1.bias", "layers.4.0.norm1.weight", "layers.4.0.norm2.bias", "layers.4.0.norm2.weight", "layers.4.0.to_kv.weight", "layers.4.0.to_out.weight", "layers.4.0.to_q.weight", "layers.4.1.0.bias", "layers.4.1.0.weight", "layers.4.1.1.weight", "layers.4.1.3.weight", "layers.5.0.norm1.bias", "layers.5.0.norm1.weight", "layers.5.0.norm2.bias", "layers.5.0.norm2.weight", "layers.5.0.to_kv.weight", "layers.5.0.to_out.weight", "layers.5.0.to_q.weight", "layers.5.1.0.bias", "layers.5.1.0.weight", "layers.5.1.1.weight", "layers.5.1.3.weight", "layers.6.0.norm1.bias", "layers.6.0.norm1.weight", "layers.6.0.norm2.bias", "layers.6.0.norm2.weight", "layers.6.0.to_kv.weight", "layers.6.0.to_out.weight", "layers.6.0.to_q.weight", "layers.6.1.0.bias", "layers.6.1.0.weight", "layers.6.1.1.weight", "layers.6.1.3.weight", "layers.7.0.norm1.bias", "layers.7.0.norm1.weight", "layers.7.0.norm2.bias", "layers.7.0.norm2.weight", "layers.7.0.to_kv.weight", "layers.7.0.to_out.weight", "layers.7.0.to_q.weight", "layers.7.1.0.bias", "layers.7.1.0.weight", "layers.7.1.1.weight", "layers.7.1.3.weight", "layers.8.0.norm1.bias", "layers.8.0.norm1.weight", "layers.8.0.norm2.bias", "layers.8.0.norm2.weight", "layers.8.0.to_kv.weight", "layers.8.0.to_out.weight", "layers.8.0.to_q.weight", "layers.8.1.0.bias", "layers.8.1.0.weight", "layers.8.1.1.weight", "layers.8.1.3.weight", "layers.9.0.norm1.bias", "layers.9.0.norm1.weight", "layers.9.0.norm2.bias", "layers.9.0.norm2.weight", "layers.9.0.to_kv.weight", "layers.9.0.to_out.weight", "layers.9.0.to_q.weight", "layers.9.1.0.bias", "layers.9.1.0.weight", "layers.9.1.1.weight", "layers.9.1.3.weight", "proj_out".
    size mismatch for mapping_0.6.weight: copying a param with shape torch.Size([1024, 1024]) from checkpoint, the shape in current model is torch.Size([2048, 1024]).
    size mismatch for mapping_0.6.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([2048]).
    size mismatch for mapping_1.6.weight: copying a param with shape torch.Size([1024, 1024]) from checkpoint, the shape in current model is torch.Size([2048, 1024]).
    size mismatch for mapping_1.6.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([2048]).
    size mismatch for mapping_2.6.weight: copying a param with shape torch.Size([1024, 1024]) from checkpoint, the shape in current model is torch.Size([2048, 1024]).
    size mismatch for mapping_2.6.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([2048]).
    size mismatch for mapping_3.6.weight: copying a param with shape torch.Size([1024, 1024]) from checkpoint, the shape in current model is torch.Size([2048, 1024]).
    size mismatch for mapping_3.6.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([2048]).
    size mismatch for mapping_4.6.weight: copying a param with shape torch.Size([1024, 1024]) from checkpoint, the shape in current model is torch.Size([2048, 1024]).
    size mismatch for mapping_4.6.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([2048]).

Stack Trace

  File "/modeldata/sd/ComfyUI/execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/modeldata/sd/ComfyUI/execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/modeldata/sd/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "/modeldata/sd/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/modeldata/sd/ComfyUI/custom_nodes/SDXL_EcomID_ComfyUI/EcomID.py", line 453, in apply_EcomID
    pulid_model = PulidModel(pulid).to(device, dtype=dtype)
                  ^^^^^^^^^^^^^^^^^

  File "/modeldata/sd/ComfyUI/custom_nodes/SDXL_EcomID_ComfyUI/EcomID.py", line 54, in __init__
    self.image_proj_model.load_state_dict(model["image_proj"])

  File "/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 2215, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
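
This second report makes the mismatch explicit: the checkpoint provides "latents", "layers.*", "proj_out", and "id_embedding_mapping.*" keys (which look like a resampler-style projection), while the node's IDEncoder expects the "body.*" / "mapping_*" / "mapping_patch_*" layout, and even the shared "mapping_*.6" keys differ in shape (1024 vs. 2048). A sketch of a compatibility check, assuming the checkpoint has already been split into a model["image_proj"] dict as in EcomID.py (the names below are placeholders, not the node's actual API), that would surface the problem up front rather than deep inside load_state_dict:

```python
# Sketch only: compare the keys of the checkpoint's image_proj weights with the
# keys of the instantiated IDEncoder before attempting a strict load.
# "id_encoder" and "image_proj_sd" stand in for the objects EcomID.py constructs.
def check_image_proj_compat(id_encoder, image_proj_sd):
    expected = set(id_encoder.state_dict().keys())
    provided = set(image_proj_sd.keys())
    missing = expected - provided
    unexpected = provided - expected
    if missing or unexpected:
        raise ValueError(
            f"Checkpoint image_proj layout does not match IDEncoder: "
            f"{len(missing)} missing / {len(unexpected)} unexpected keys. "
            f"This PuLID file likely requires a different projection model."
        )
```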
