got prompt
model weight dtype torch.float16, manual cast: None
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
Requested to load SDXLClipModel
Loading 1 new model
loaded completely 0.0 1560.802734375 True
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: D:\ComfyUI_windows_portable\ComfyUI\models\insightface\models\antelopev2\1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: D:\ComfyUI_windows_portable\ComfyUI\models\insightface\models\antelopev2\2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: D:\ComfyUI_windows_portable\ComfyUI\models\insightface\models\antelopev2\genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: D:\ComfyUI_windows_portable\ComfyUI\models\insightface\models\antelopev2\glintr100.onnx recognition ['None', 3, 112, 112] 127.5 127.5
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: D:\ComfyUI_windows_portable\ComfyUI\models\insightface\models\antelopev2\scrfd_10g_bnkps.onnx detection [1, 3, '?', '?'] 127.5 128.0
set det-size: (640, 640)
Loaded EVA02-CLIP-L-14-336 model config.
Shape of rope freq: torch.Size([576, 64])
!!! Exception during processing !!! No module named 'fused_layer_norm_cuda'
Traceback (most recent call last):
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 165, in _map_node_over_list
process_inputs({})
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\PuLID_ComfyUI\pulid.py", line 259, in load_eva_clip
model, _, _ = create_model_and_transforms('EVA02-CLIP-L-14-336', 'eva_clip', force_custom_clip=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\PuLID_ComfyUI\eva_clip\factory.py", line 377, in create_model_and_transforms
model = create_model(
^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\PuLID_ComfyUI\eva_clip\factory.py", line 270, in create_model
model = CustomCLIP(**model_cfg, cast_dtype=cast_dtype)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\PuLID_ComfyUI\eva_clip\model.py", line 281, in init
self.visual = _build_vision_tower(embed_dim, vision_cfg, quick_gelu, cast_dtype)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\PuLID_ComfyUI\eva_clip\model.py", line 110, in build_vision_tower
visual = EVAVisionTransformer(
^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\PuLID_ComfyUI\eva_clip\eva_vit_model.py", line 418, in init
Block(
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\PuLID_ComfyUI\eva_clip\eva_vit_model.py", line 253, in init
self.norm1 = norm_layer(dim)
^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\apex\normalization\fused_layer_norm.py", line 294, in init
fused_layer_norm_cuda = importlib.import_module("fused_layer_norm_cuda")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "importlib_init.py", line 90, in import_module
File "", line 1387, in _gcd_import
File "", line 1360, in _find_and_load
File "", line 1324, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'fused_layer_norm_cuda'
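The traceback bottoms out in apex's `fused_layer_norm.py`, which imports the compiled extension `fused_layer_norm_cuda`. That extension only exists when apex is built from source with its CUDA kernels; the `apex` package that a plain `pip install apex` pulls in does not ship it. PuLID's bundled eva_clip appears to prefer apex's `FusedLayerNorm` whenever `apex` is importable at all, so an apex install without the compiled extension produces exactly this failure. Below is a minimal diagnostic sketch, under two assumptions: it mirrors the `importlib.import_module` call apex makes at line 294, and eva_clip falls back to `torch.nn.LayerNorm` when apex cannot be imported.

```python
import importlib

# Hypothetical diagnostic: repeat the import that apex performs in
# apex/normalization/fused_layer_norm.py (line 294 in the traceback above).
try:
    importlib.import_module("fused_layer_norm_cuda")
    print("apex CUDA extension found; FusedLayerNorm should work")
except ModuleNotFoundError:
    # Assumption: eva_clip guards its apex import with try/except and
    # substitutes torch.nn.LayerNorm when apex is absent, so removing the
    # broken apex package avoids this code path entirely.
    print("apex is installed without its compiled CUDA extension; "
          "uninstall apex so eva_clip can fall back to torch.nn.LayerNorm")
```

If the check fails, the commonly reported workaround for this node is to uninstall apex from the embedded interpreter (`python_embeded\python.exe -m pip uninstall apex`), since the LayerNorm fallback needs no compiled extension.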