KeyError: "ImageFaceFusionPipeline: KeyError('image-face-fusion is already registered in models[image-face-fusion]')" #429

Open

Network-Sec opened this issue Jun 23, 2024 · 0 comments
Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Is EasyPhoto the latest version?

  • I have updated EasyPhoto to the latest version and the bug still exists.

What happened?

2024-06-23 13:17:55,114 - EasyPhoto - ControlNet unit number: 5
2024-06-23 13:17:55,114 - EasyPhoto - Found 1 user id(s), but only 0 image prompt(s) for IP-Adapter Control. Use the reference image corresponding to the user instead.
Cleanup completed.
Traceback (most recent call last):
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\models\builder.py", line 35, in build_model
    model = build_from_cfg(
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\utils\registry.py", line 184, in build_from_cfg
    LazyImportModule.import_module(sig)
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\utils\import_utils.py", line 463, in import_module
    importlib.import_module(module_name)
  File "C:\Users\occide\miniconda3\envs\stable-diffusion-webui\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\models\cv\image_face_fusion\image_face_fusion.py", line 36, in <module>
    class ImageFaceFusion(TorchModel):
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\utils\registry.py", line 125, in _register
    self._register_module(
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\utils\registry.py", line 75, in _register_module
    raise KeyError(f'{module_name} is already registered in '
KeyError: 'image-face-fusion is already registered in models[image-face-fusion]'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\utils\registry.py", line 212, in build_from_cfg
    return obj_cls(**args)
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\pipelines\cv\image_face_fusion_pipeline.py", line 43, in __init__
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\pipelines\base.py", line 99, in __init__
    self.model = self.initiate_single_model(model)
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\pipelines\base.py", line 53, in initiate_single_model
    return Model.from_pretrained(
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\models\base\base_model.py", line 179, in from_pretrained
    model = build_model(model_cfg, task_name=task_name)
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\models\builder.py", line 43, in build_model
    raise KeyError(e)
KeyError: KeyError('image-face-fusion is already registered in models[image-face-fusion]')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "C:\Users\occide\miniconda3\envs\stable-diffusion-webui\lib\contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "C:\Users\occide\miniconda3\envs\stable-diffusion-webui\lib\contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "D:\AI-Tools\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\scripts\easyphoto_infer.py", line 2255, in easyphoto_video_infer_forward
    image_face_fusion = pipeline(Tasks.image_face_fusion, model="damo/cv_unet-image-face-fusion_damo", model_revision="v1.3")
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\pipelines\builder.py", line 163, in pipeline
    return build_pipeline(cfg, task_name=task)
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\pipelines\builder.py", line 67, in build_pipeline
    return build_from_cfg(
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\utils\registry.py", line 215, in build_from_cfg
    raise type(e)(f'{obj_cls.__name__}: {e}')
KeyError: "ImageFaceFusionPipeline: KeyError('image-face-fusion is already registered in models[image-face-fusion]')"

Steps to reproduce the problem

  1. Run any inference in EasyPhoto.

What should have happened?

The image should have been generated.

Commit where the problem happens

webui:
EasyPhoto:

What browsers do you use to access the UI?

Google Chrome

Command Line Arguments

No

List of enabled extensions

Stable-Diffusion-WebUI-TensorRT
facechain
sd-webui-EasyPhoto
sd-webui-controlnet
sd-webui-faceswaplab
stable-diffusion-webui-images-browser

Console logs

Python 3.10.6 | packaged by conda-forge | (main, Oct 24 2022, 16:02:16) [MSC v.1916 64 bit (AMD64)]
Version: v1.9.4
Commit hash: feee37d75f1b168768014e4634dcb156ee649c05
Installing sd-webui-controlnet requirement: changing opencv-python version from 4.10.0.84 to 4.8.0
is_installed check for tensorflow-cpu failed as 'spec is None'
Installing requirements for easyphoto-webui
Installing requirements for tensorflow
Faceswaplab : Use GPU requirements
Checking faceswaplab requirements
0.007708102000009376
removing nvidia-cudnn-cu11
Launching Web UI with arguments: --clip-models-path D:\AI-Tools\stable-diffusion-webui\model_cache --clip-models-path D:\AI-Tools\stable-diffusion-webui\model_cache
2024-06-23 12:28:49.432480: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-06-23 12:28:51.877007: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
2024-06-23 12:29:06,139 - modelscope - INFO - PyTorch version 2.1.2+cu121 Found.
2024-06-23 12:29:06,143 - modelscope - INFO - TensorFlow version 2.16.1 Found.
2024-06-23 12:29:06,143 - modelscope - INFO - Loading ast index from C:\Users\username\.cache\modelscope\ast_indexer
2024-06-23 12:29:06,482 - modelscope - INFO - Loading done! Current index file version is 1.9.3, with md5 6d48453e156509617a799fa1de297b0f and a total number of 943 components indexed
D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\transformers\transformer_2d.py:34: FutureWarning: `Transformer2DModelOutput` is deprecated and will be removed in version 1.0.0. Importing `Transformer2DModelOutput` from `diffusers.models.transformer_2d` is deprecated and this will be removed in a future version. Please use `from diffusers.models.modeling_outputs import Transformer2DModelOutput`, instead.
  deprecate("Transformer2DModelOutput", "1.0.0", deprecation_message)
ControlNet preprocessor location: D:\AI-Tools\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2024-06-23 12:29:09,185 - ControlNet - INFO - ControlNet v1.1.449
Loading weights [010be7341c] from D:\AI-Tools\stable-diffusion-webui\models\Stable-diffusion\Juggernaut_X_RunDiffusion_Hyper.safetensors
AnimateDiffScript init
D:\AI-Tools\stable-diffusion-webui\modules\gradio_extensons.py:25: GradioDeprecationWarning: `optional` parameter is deprecated, and it has no effect
  res = original_IOComponent_init(self, *args, **kwargs)
AnimateDiffScript init
2024-06-23 12:29:11,308 - ControlNet - INFO - ControlNet UI callback registered.
No config file found for FilmVelvia3. You can generate it in the LoRA tab.
No config file found for haydeen. You can generate it in the LoRA tab.
No config file found for Hayden. You can generate it in the LoRA tab.
Creating model from config: D:\AI-Tools\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
秋日胡杨风(Autumn populus euphratica style)
萧瑟秋天风(Bleak autumn style)
漫画风(Cartoon)
旗袍风(Cheongsam)
中国新年风(Chinese New Year Style)
冬季汉服(Chinese winter hanfu)
圣诞风(Christmas)
Applying attention optimization: Doggettx... done.
Model loaded in 16.2s (load weights from disk: 1.1s, create model: 1.0s, apply weights to model: 12.6s, move model to device: 0.2s, load textual inversion embeddings: 0.2s, calculate empty prompt: 1.0s).
炫彩少女风(Colorful rainbow style)
自然清冷风(Cool tones)
西部牛仔风(Cowboy style)
林中鹿女风(Deer girl)
主题乐园风(Disneyland)
海洋风(Ocean)
敦煌风(Dunhuang)
多巴胺风格(Colourful Style)
中华刺绣风(Embroidery)
欧式田野风(European fields)
仙女风(Fairy style)
时尚墨镜风(Fashion glasses)
火红少女风(Flame Red Style)
花园风(Flowers)
绅士风(Gentleman style)
国风(GuoFeng Style)
嘻哈风(Hiphop style)
夜景港风(Hong Kong night)
印度风(India)
雪山羽绒服风(Jacket in Snow Mountain)
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 248/248 [00:00<?, ?B/s]
日系和服风(Kimono Style)
哥特洛丽塔(Gothic Lolita)
洛丽塔(Lolita)
花环洛丽塔(Flora Lolita)
女仆风(Maid style)
机械风(Mechanical)
男士西装风(Men's Suit)
苗族服装风(Miao style)
模特风(Model style)
蒙古草原风(Mongolian)
机车风(Motorcycle race style)
夏日海滩风(Summer Ocean Vibe)
京剧名旦风(Female role in Peking opera)
拍立得风(Polaroid style)
贵族公主风(Princess costum)
雨夜(Rainy night)
红发礼服风(Red Style)
复古风(Retro Style)
漫游宇航员(Roaming Astronaut)
校服风(School uniform)
科幻风(Science fiction style)
绿茵球场风(Soccer Field)
街拍风(Street style)
藏族服饰风(Tibetan clothing style)
古风(Traditional chinese style)
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 302/302 [00:00<?, ?B/s]
丁达尔风(Tyndall Light)
梦幻深海风(Sea World)
婚纱风(Wedding dress)
婚纱风-2(Wedding dress 2)
西部牛仔风(West cowboy)
西部风(Wild west style)
女巫风(Witch style)
绿野仙踪(Wizard of Oz)
藏族风(ZangZu Style)
壮族服装风(Zhuang style)
盔甲风(Armor)
芭比娃娃(Barbie Doll)
休闲生活风(Casual Lifestyle)
凤冠霞帔(Chinese traditional gorgeous suit)
赛博朋克(Cybernetics punk)
优雅公主(Elegant Princess)
女士晚礼服(Gown)
汉服风(Hanfu)
白月光(Innocent Girl in White Dress)
鬼马少女(Pixy Girl)
白雪公主(Snow White)
T恤衫(T-shirt)
工作服(Working suit)
2024-06-23 12:31:23,722 - mmcv - INFO - initialize PAFPN with init_cfg {'type': 'Xavier', 'layer': 'Conv2d', 'distribution': 'uniform'}
2024-06-23 12:31:23,724 - mmcv - INFO -
lateral_convs.0.conv.weight - torch.Size([16, 64, 1, 1]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:31:23,724 - mmcv - INFO -
lateral_convs.0.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:31:23,724 - mmcv - INFO -
lateral_convs.1.conv.weight - torch.Size([16, 120, 1, 1]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:31:23,724 - mmcv - INFO -
lateral_convs.1.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:31:23,724 - mmcv - INFO -
lateral_convs.2.conv.weight - torch.Size([16, 160, 1, 1]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:31:23,724 - mmcv - INFO -
lateral_convs.2.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:31:23,724 - mmcv - INFO -
fpn_convs.0.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:31:23,725 - mmcv - INFO -
fpn_convs.0.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:31:23,725 - mmcv - INFO -
fpn_convs.1.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:31:23,725 - mmcv - INFO -
fpn_convs.1.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:31:23,725 - mmcv - INFO -
fpn_convs.2.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:31:23,725 - mmcv - INFO -
fpn_convs.2.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:31:23,725 - mmcv - INFO -
downsample_convs.0.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:31:23,725 - mmcv - INFO -
downsample_convs.0.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:31:23,725 - mmcv - INFO -
downsample_convs.1.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:31:23,725 - mmcv - INFO -
downsample_convs.1.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:31:23,726 - mmcv - INFO -
pafpn_convs.0.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:31:23,726 - mmcv - INFO -
pafpn_convs.0.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:31:23,726 - mmcv - INFO -
pafpn_convs.1.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:31:23,726 - mmcv - INFO -
pafpn_convs.1.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

load checkpoint from local path: C:\Users\username\.cache\modelscope\hub\damo\cv_ddsar_face-detection_iclr23-damofd\pytorch_model.pt
2024-06-23 12:31:39,010 - mmcv - INFO - initialize PAFPN with init_cfg {'type': 'Xavier', 'layer': 'Conv2d', 'distribution': 'uniform'}
2024-06-23 12:31:39,012 - mmcv - INFO -
lateral_convs.0.conv.weight - torch.Size([16, 64, 1, 1]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:31:39,012 - mmcv - INFO -
lateral_convs.0.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:31:39,012 - mmcv - INFO -
lateral_convs.1.conv.weight - torch.Size([16, 120, 1, 1]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:31:39,012 - mmcv - INFO -
lateral_convs.1.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:31:39,013 - mmcv - INFO -
lateral_convs.2.conv.weight - torch.Size([16, 160, 1, 1]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:31:39,013 - mmcv - INFO -
lateral_convs.2.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:31:39,013 - mmcv - INFO -
fpn_convs.0.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:31:39,013 - mmcv - INFO -
fpn_convs.0.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:31:39,013 - mmcv - INFO -
fpn_convs.1.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:31:39,013 - mmcv - INFO -
fpn_convs.1.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:31:39,013 - mmcv - INFO -
fpn_convs.2.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:31:39,013 - mmcv - INFO -
fpn_convs.2.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:31:39,013 - mmcv - INFO -
downsample_convs.0.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:31:39,014 - mmcv - INFO -
downsample_convs.0.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:31:39,014 - mmcv - INFO -
downsample_convs.1.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:31:39,014 - mmcv - INFO -
downsample_convs.1.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:31:39,014 - mmcv - INFO -
pafpn_convs.0.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:31:39,014 - mmcv - INFO -
pafpn_convs.0.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:31:39,014 - mmcv - INFO -
pafpn_convs.1.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:31:39,014 - mmcv - INFO -
pafpn_convs.1.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

load checkpoint from local path: C:\Users\username\.cache\modelscope\hub\damo\cv_ddsar_face-detection_iclr23-damofd\pytorch_model.pt
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 12.4k/12.4k [00:00<00:00, 1.52MB/s]
2024-06-23 12:32:16,426 - mmcv - INFO - initialize PAFPN with init_cfg {'type': 'Xavier', 'layer': 'Conv2d', 'distribution': 'uniform'}
2024-06-23 12:32:16,427 - mmcv - INFO -
lateral_convs.0.conv.weight - torch.Size([16, 64, 1, 1]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:32:16,427 - mmcv - INFO -
lateral_convs.0.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:32:16,427 - mmcv - INFO -
lateral_convs.1.conv.weight - torch.Size([16, 120, 1, 1]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:32:16,427 - mmcv - INFO -
lateral_convs.1.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:32:16,427 - mmcv - INFO -
lateral_convs.2.conv.weight - torch.Size([16, 160, 1, 1]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:32:16,428 - mmcv - INFO -
lateral_convs.2.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:32:16,428 - mmcv - INFO -
fpn_convs.0.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:32:16,428 - mmcv - INFO -
fpn_convs.0.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:32:16,428 - mmcv - INFO -
fpn_convs.1.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:32:16,428 - mmcv - INFO -
fpn_convs.1.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:32:16,428 - mmcv - INFO -
fpn_convs.2.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:32:16,428 - mmcv - INFO -
fpn_convs.2.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:32:16,428 - mmcv - INFO -
downsample_convs.0.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:32:16,428 - mmcv - INFO -
downsample_convs.0.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:32:16,429 - mmcv - INFO -
downsample_convs.1.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:32:16,429 - mmcv - INFO -
downsample_convs.1.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:32:16,429 - mmcv - INFO -
pafpn_convs.0.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:32:16,430 - mmcv - INFO -
pafpn_convs.0.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:32:16,430 - mmcv - INFO -
pafpn_convs.1.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:32:16,430 - mmcv - INFO -
pafpn_convs.1.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

load checkpoint from local path: C:\Users\username\.cache\modelscope\hub\damo\cv_ddsar_face-detection_iclr23-damofd\pytorch_model.pt
Loading pipeline components...:   0%|          | 0/6 [00:00<?, ?it/s]
An error occurred while trying to fetch C:\Users\username\.cache\modelscope\hub\YorickHe\majicmixRealistic_v6\realistic\vae: Error no file named diffusion_pytorch_model.safetensors found in directory C:\Users\username\.cache\modelscope\hub\YorickHe\majicmixRealistic_v6\realistic\vae.
Defaulting to unsafe serialization. Pass `allow_pickle=False` to raise an error instead.
Loading pipeline components...:  17%|█▋        | 1/6 [00:00<00:02,  2.10it/s]
An error occurred while trying to fetch C:\Users\username\.cache\modelscope\hub\YorickHe\majicmixRealistic_v6\realistic\unet: Error no file named diffusion_pytorch_model.safetensors found in directory C:\Users\username\.cache\modelscope\hub\YorickHe\majicmixRealistic_v6\realistic\unet.
Defaulting to unsafe serialization. Pass `allow_pickle=False` to raise an error instead.
Loading pipeline components...: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6/6 [00:05<00:00,  1.08it/s]
You have disabled the safety checker for <class 'diffusers.pipelines.controlnet.pipeline_controlnet.StableDiffusionControlNetPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
Loading pipeline components...:   0%|          | 0/6 [00:00<?, ?it/s]
An error occurred while trying to fetch C:\Users\username\.cache\modelscope\hub\ly261666\cv_portrait_model\film/film\vae: Error no file named diffusion_pytorch_model.safetensors found in directory C:\Users\username\.cache\modelscope\hub\ly261666\cv_portrait_model\film/film\vae.
Defaulting to unsafe serialization. Pass `allow_pickle=False` to raise an error instead.
Loading pipeline components...:  17%|█▋        | 1/6 [00:00<00:02,  1.94it/s]
An error occurred while trying to fetch C:\Users\username\.cache\modelscope\hub\ly261666\cv_portrait_model\film/film\unet: Error no file named diffusion_pytorch_model.safetensors found in directory C:\Users\username\.cache\modelscope\hub\ly261666\cv_portrait_model\film/film\unet.
Defaulting to unsafe serialization. Pass `allow_pickle=False` to raise an error instead.
Loading pipeline components...: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6/6 [00:05<00:00,  1.05it/s]
You have disabled the safety checker for <class 'diffusers.pipelines.controlnet.pipeline_controlnet.StableDiffusionControlNetPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
Pipelines loaded with `dtype=torch.float16` cannot run with `cpu` device. It is not recommended to move them to `cpu` as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for`float16` operations on this device in PyTorch. Please, remove the `torch_dtype=torch.float16` argument, or use another device for inference.
Pipelines loaded with `dtype=torch.float16` cannot run with `cpu` device. It is not recommended to move them to `cpu` as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for`float16` operations on this device in PyTorch. Please, remove the `torch_dtype=torch.float16` argument, or use another device for inference.
Pipelines loaded with `dtype=torch.float16` cannot run with `cpu` device. It is not recommended to move them to `cpu` as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for`float16` operations on this device in PyTorch. Please, remove the `torch_dtype=torch.float16` argument, or use another device for inference.
Pipelines loaded with `dtype=torch.float16` cannot run with `cpu` device. It is not recommended to move them to `cpu` as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for`float16` operations on this device in PyTorch. Please, remove the `torch_dtype=torch.float16` argument, or use another device for inference.
Pipelines loaded with `dtype=torch.float16` cannot run with `cpu` device. It is not recommended to move them to `cpu` as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for`float16` operations on this device in PyTorch. Please, remove the `torch_dtype=torch.float16` argument, or use another device for inference.
Pipelines loaded with `dtype=torch.float16` cannot run with `cpu` device. It is not recommended to move them to `cpu` as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for`float16` operations on this device in PyTorch. Please, remove the `torch_dtype=torch.float16` argument, or use another device for inference.
Pipelines loaded with `dtype=torch.float16` cannot run with `cpu` device. It is not recommended to move them to `cpu` as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for`float16` operations on this device in PyTorch. Please, remove the `torch_dtype=torch.float16` argument, or use another device for inference.
Pipelines loaded with `dtype=torch.float16` cannot run with `cpu` device. It is not recommended to move them to `cpu` as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for`float16` operations on this device in PyTorch. Please, remove the `torch_dtype=torch.float16` argument, or use another device for inference.
2024-06-23 12:32:51,210 - mmcv - INFO - initialize PAFPN with init_cfg {'type': 'Xavier', 'layer': 'Conv2d', 'distribution': 'uniform'}
2024-06-23 12:32:51,211 - mmcv - INFO -
lateral_convs.0.conv.weight - torch.Size([16, 64, 1, 1]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:32:51,211 - mmcv - INFO -
lateral_convs.0.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:32:51,211 - mmcv - INFO -
lateral_convs.1.conv.weight - torch.Size([16, 120, 1, 1]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:32:51,211 - mmcv - INFO -
lateral_convs.1.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:32:51,211 - mmcv - INFO -
lateral_convs.2.conv.weight - torch.Size([16, 160, 1, 1]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:32:51,211 - mmcv - INFO -
lateral_convs.2.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:32:51,211 - mmcv - INFO -
fpn_convs.0.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:32:51,211 - mmcv - INFO -
fpn_convs.0.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:32:51,212 - mmcv - INFO -
fpn_convs.1.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:32:51,212 - mmcv - INFO -
fpn_convs.1.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:32:51,212 - mmcv - INFO -
fpn_convs.2.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:32:51,212 - mmcv - INFO -
fpn_convs.2.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:32:51,212 - mmcv - INFO -
downsample_convs.0.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:32:51,212 - mmcv - INFO -
downsample_convs.0.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:32:51,212 - mmcv - INFO -
downsample_convs.1.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:32:51,212 - mmcv - INFO -
downsample_convs.1.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:32:51,212 - mmcv - INFO -
pafpn_convs.0.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:32:51,213 - mmcv - INFO -
pafpn_convs.0.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:32:51,213 - mmcv - INFO -
pafpn_convs.1.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:32:51,213 - mmcv - INFO -
pafpn_convs.1.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

load checkpoint from local path: C:\Users\username\.cache\modelscope\hub\damo\cv_ddsar_face-detection_iclr23-damofd\pytorch_model.pt
2024-06-23 12:33:06,684 - mmcv - INFO - initialize PAFPN with init_cfg {'type': 'Xavier', 'layer': 'Conv2d', 'distribution': 'uniform'}
2024-06-23 12:33:06,685 - mmcv - INFO -
lateral_convs.0.conv.weight - torch.Size([16, 64, 1, 1]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:06,685 - mmcv - INFO -
lateral_convs.0.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:33:06,686 - mmcv - INFO -
lateral_convs.1.conv.weight - torch.Size([16, 120, 1, 1]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:06,686 - mmcv - INFO -
lateral_convs.1.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:33:06,686 - mmcv - INFO -
lateral_convs.2.conv.weight - torch.Size([16, 160, 1, 1]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:06,686 - mmcv - INFO -
lateral_convs.2.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:33:06,686 - mmcv - INFO -
fpn_convs.0.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:06,686 - mmcv - INFO -
fpn_convs.0.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:33:06,686 - mmcv - INFO -
fpn_convs.1.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:06,686 - mmcv - INFO -
fpn_convs.1.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:33:06,686 - mmcv - INFO -
fpn_convs.2.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:06,687 - mmcv - INFO -
fpn_convs.2.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:33:06,687 - mmcv - INFO -
downsample_convs.0.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:06,687 - mmcv - INFO -
downsample_convs.0.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:33:06,687 - mmcv - INFO -
downsample_convs.1.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:06,687 - mmcv - INFO -
downsample_convs.1.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:33:06,687 - mmcv - INFO -
pafpn_convs.0.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:06,687 - mmcv - INFO -
pafpn_convs.0.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:33:06,687 - mmcv - INFO -
pafpn_convs.1.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:06,687 - mmcv - INFO -
pafpn_convs.1.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

load checkpoint from local path: C:\Users\username\.cache\modelscope\hub\damo\cv_ddsar_face-detection_iclr23-damofd\pytorch_model.pt
2024-06-23 12:33:09,403 - mmcv - INFO - initialize PAFPN with init_cfg {'type': 'Xavier', 'layer': 'Conv2d', 'distribution': 'uniform'}
2024-06-23 12:33:09,404 - mmcv - INFO -
lateral_convs.0.conv.weight - torch.Size([16, 64, 1, 1]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:09,404 - mmcv - INFO -
lateral_convs.0.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:33:09,404 - mmcv - INFO -
lateral_convs.1.conv.weight - torch.Size([16, 120, 1, 1]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:09,404 - mmcv - INFO -
lateral_convs.1.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:33:09,404 - mmcv - INFO -
lateral_convs.2.conv.weight - torch.Size([16, 160, 1, 1]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:09,404 - mmcv - INFO -
lateral_convs.2.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:33:09,405 - mmcv - INFO -
fpn_convs.0.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:09,405 - mmcv - INFO -
fpn_convs.0.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:33:09,405 - mmcv - INFO -
fpn_convs.1.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:09,405 - mmcv - INFO -
fpn_convs.1.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:33:09,405 - mmcv - INFO -
fpn_convs.2.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:09,405 - mmcv - INFO -
fpn_convs.2.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:33:09,405 - mmcv - INFO -
downsample_convs.0.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:09,405 - mmcv - INFO -
downsample_convs.0.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:33:09,405 - mmcv - INFO -
downsample_convs.1.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:09,405 - mmcv - INFO -
downsample_convs.1.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:33:09,406 - mmcv - INFO -
pafpn_convs.0.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:09,406 - mmcv - INFO -
pafpn_convs.0.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:33:09,406 - mmcv - INFO -
pafpn_convs.1.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:09,406 - mmcv - INFO -
pafpn_convs.1.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

load checkpoint from local path: C:\Users\username\.cache\modelscope\hub\damo\cv_ddsar_face-detection_iclr23-damofd\pytorch_model.pt
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 12.4k/12.4k [00:00<00:00, 1.44MB/s]
2024-06-23 12:33:33,956 - mmcv - INFO - initialize PAFPN with init_cfg {'type': 'Xavier', 'layer': 'Conv2d', 'distribution': 'uniform'}
2024-06-23 12:33:33,958 - mmcv - INFO -
lateral_convs.0.conv.weight - torch.Size([16, 64, 1, 1]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:33,958 - mmcv - INFO -
lateral_convs.0.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:33:33,958 - mmcv - INFO -
lateral_convs.1.conv.weight - torch.Size([16, 120, 1, 1]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:33,958 - mmcv - INFO -
lateral_convs.1.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:33:33,958 - mmcv - INFO -
lateral_convs.2.conv.weight - torch.Size([16, 160, 1, 1]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:33,958 - mmcv - INFO -
lateral_convs.2.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:33:33,958 - mmcv - INFO -
fpn_convs.0.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:33,959 - mmcv - INFO -
fpn_convs.0.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:33:33,959 - mmcv - INFO -
fpn_convs.1.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:33,959 - mmcv - INFO -
fpn_convs.1.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:33:33,959 - mmcv - INFO -
fpn_convs.2.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:33,959 - mmcv - INFO -
fpn_convs.2.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:33:33,959 - mmcv - INFO -
downsample_convs.0.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:33,959 - mmcv - INFO -
downsample_convs.0.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:33:33,959 - mmcv - INFO -
downsample_convs.1.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:33,959 - mmcv - INFO -
downsample_convs.1.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:33:33,959 - mmcv - INFO -
pafpn_convs.0.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:33,959 - mmcv - INFO -
pafpn_convs.0.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

2024-06-23 12:33:33,960 - mmcv - INFO -
pafpn_convs.1.conv.weight - torch.Size([16, 16, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0

2024-06-23 12:33:33,960 - mmcv - INFO -
pafpn_convs.1.conv.bias - torch.Size([16]):
The value is the same before and after calling `init_weights` of PAFPN

load checkpoint from local path: C:\Users\username\.cache\modelscope\hub\damo\cv_ddsar_face-detection_iclr23-damofd\pytorch_model.pt
Loading pipeline components...:   0%|          | 0/6 [00:00<?, ?it/s]
An error occurred while trying to fetch C:\Users\username\.cache\modelscope\hub\YorickHe\majicmixRealistic_v6\realistic\vae: Error no file named diffusion_pytorch_model.safetensors found in directory C:\Users\username\.cache\modelscope\hub\YorickHe\majicmixRealistic_v6\realistic\vae.
Defaulting to unsafe serialization. Pass `allow_pickle=False` to raise an error instead.
Loading pipeline components...:  17%|█▋        | 1/6 [00:00<00:01,  3.78it/s]
An error occurred while trying to fetch C:\Users\username\.cache\modelscope\hub\YorickHe\majicmixRealistic_v6\realistic\unet: Error no file named diffusion_pytorch_model.safetensors found in directory C:\Users\username\.cache\modelscope\hub\YorickHe\majicmixRealistic_v6\realistic\unet.
Defaulting to unsafe serialization. Pass `allow_pickle=False` to raise an error instead.
Loading pipeline components...: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6/6 [00:02<00:00,  2.43it/s]
You have disabled the safety checker for <class 'diffusers.pipelines.controlnet.pipeline_controlnet.StableDiffusionControlNetPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
Loading pipeline components...:   0%|          | 0/6 [00:00<?, ?it/s]
An error occurred while trying to fetch C:\Users\username\.cache\modelscope\hub\YorickHe\majicmixRealistic_v6\realistic\vae: Error no file named diffusion_pytorch_model.safetensors found in directory C:\Users\username\.cache\modelscope\hub\YorickHe\majicmixRealistic_v6\realistic\vae.
Defaulting to unsafe serialization. Pass `allow_pickle=False` to raise an error instead.
Loading pipeline components...:  17%|█▋        | 1/6 [00:00<00:00,  6.48it/s]
An error occurred while trying to fetch C:\Users\username\.cache\modelscope\hub\YorickHe\majicmixRealistic_v6\realistic\unet: Error no file named diffusion_pytorch_model.safetensors found in directory C:\Users\username\.cache\modelscope\hub\YorickHe\majicmixRealistic_v6\realistic\unet.
Defaulting to unsafe serialization. Pass `allow_pickle=False` to raise an error instead.
Loading pipeline components...: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6/6 [00:02<00:00,  2.95it/s]
You have disabled the safety checker for <class 'diffusers.pipelines.controlnet.pipeline_controlnet_inpaint.StableDiffusionControlNetInpaintPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
Pipelines loaded with `dtype=torch.float16` cannot run with `cpu` device. It is not recommended to move them to `cpu` as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for`float16` operations on this device in PyTorch. Please, remove the `torch_dtype=torch.float16` argument, or use another device for inference.
Pipelines loaded with `dtype=torch.float16` cannot run with `cpu` device. It is not recommended to move them to `cpu` as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for`float16` operations on this device in PyTorch. Please, remove the `torch_dtype=torch.float16` argument, or use another device for inference.
Pipelines loaded with `dtype=torch.float16` cannot run with `cpu` device. It is not recommended to move them to `cpu` as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for`float16` operations on this device in PyTorch. Please, remove the `torch_dtype=torch.float16` argument, or use another device for inference.
Pipelines loaded with `dtype=torch.float16` cannot run with `cpu` device. It is not recommended to move them to `cpu` as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for`float16` operations on this device in PyTorch. Please, remove the `torch_dtype=torch.float16` argument, or use another device for inference.
Pipelines loaded with `dtype=torch.float16` cannot run with `cpu` device. It is not recommended to move them to `cpu` as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for`float16` operations on this device in PyTorch. Please, remove the `torch_dtype=torch.float16` argument, or use another device for inference.
Pipelines loaded with `dtype=torch.float16` cannot run with `cpu` device. It is not recommended to move them to `cpu` as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for`float16` operations on this device in PyTorch. Please, remove the `torch_dtype=torch.float16` argument, or use another device for inference.
Pipelines loaded with `dtype=torch.float16` cannot run with `cpu` device. It is not recommended to move them to `cpu` as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for`float16` operations on this device in PyTorch. Please, remove the `torch_dtype=torch.float16` argument, or use another device for inference.
Pipelines loaded with `dtype=torch.float16` cannot run with `cpu` device. It is not recommended to move them to `cpu` as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for`float16` operations on this device in PyTorch. Please, remove the `torch_dtype=torch.float16` argument, or use another device for inference.
[['D:\\AI-Tools\\stable-diffusion-webui\\extensions\\facechain/inpaint_template\\00_20230125X0028177.jpg'], ['D:\\AI-Tools\\stable-diffusion-webui\\extensions\\facechain/inpaint_template\\00_20240126X0017139.jpg'], ['D:\\AI-Tools\\stable-diffusion-webui\\extensions\\facechain/inpaint_template\\00_20240126X0017503.jpg']]
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 331.4s (prepare environment: 19.5s, import torch: 5.7s, import gradio: 2.3s, setup paths: 14.0s, initialize shared: 0.8s, other imports: 1.8s, list SD models: 0.1s, load scripts: 7.5s, create ui: 279.1s, gradio launch: 0.6s).
Downloading VAEApprox model to: D:\AI-Tools\stable-diffusion-webui\models\VAE-approx\vaeapprox-sdxl.pt
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 209k/209k [00:00<00:00, 5.29MB/s]
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:06<00:00,  2.94it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:05<00:00,  3.79it/s]
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:05<00:00,  3.86it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:04<00:00,  4.30it/s]
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:05<00:00,  3.90it/s]
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:16<00:00,  1.67s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 30/30 [00:36<00:00,  1.21s/it]
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:05<00:00,  3.93it/s]
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:16<00:00,  1.66s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 30/30 [00:36<00:00,  1.20s/it]
2024-06-23 12:39:22,380 - EasyPhoto - Please choose a user id.
Cleanup completed.
2024-06-23 12:39:45,170 - EasyPhoto - D:\AI-Tools\stable-diffusion-webui\models\Stable-diffusion/Chilloutmix-Ni-pruned-fp16-fix.safetensors : Hash match
2024-06-23 12:39:49,156 - EasyPhoto - D:\AI-Tools\stable-diffusion-webui\models\ControlNet/control_v11p_sd15_canny.pth : Hash match
2024-06-23 12:39:52,897 - EasyPhoto - D:\AI-Tools\stable-diffusion-webui\models\ControlNet/control_sd15_random_color.pth : Hash match
2024-06-23 12:39:56,655 - EasyPhoto - D:\AI-Tools\stable-diffusion-webui\models\VAE/vae-ft-mse-840000-ema-pruned.ckpt : Hash match
2024-06-23 12:39:59,798 - EasyPhoto - D:\AI-Tools\stable-diffusion-webui\models\ControlNet/control_v11p_sd15_openpose.pth : Hash match
2024-06-23 12:40:03,600 - EasyPhoto - D:\AI-Tools\stable-diffusion-webui\models\ControlNet/control_v11f1e_sd15_tile.pth : Hash match
2024-06-23 12:40:07,337 - EasyPhoto - D:\AI-Tools\stable-diffusion-webui\models\Lora/FilmVelvia3.safetensors : Hash match
2024-06-23 12:40:10,522 - EasyPhoto - D:\AI-Tools\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator/downloads/openpose\body_pose_model.pth : Hash match
2024-06-23 12:40:14,403 - EasyPhoto - D:\AI-Tools\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator/downloads/openpose\facenet.pth : Hash match
2024-06-23 12:40:18,576 - EasyPhoto - D:\AI-Tools\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator/downloads/openpose\hand_pose_model.pth : Hash match
2024-06-23 12:40:22,390 - EasyPhoto - D:\AI-Tools\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\models\face_skin.pth : Hash match
2024-06-23 12:40:25,505 - EasyPhoto - D:\AI-Tools\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\models\face_landmarks.pth : Hash match
2024-06-23 12:40:29,290 - EasyPhoto - D:\AI-Tools\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\models\makeup_transfer.pth : Hash match
2024-06-23 12:40:33,434 - EasyPhoto - D:\AI-Tools\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\models\training_templates\1.jpg : Hash match
2024-06-23 12:40:37,534 - EasyPhoto - D:\AI-Tools\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\models\training_templates\2.jpg : Hash match
2024-06-23 12:40:40,653 - EasyPhoto - D:\AI-Tools\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\models\training_templates\3.jpg : Hash match
2024-06-23 12:40:44,283 - EasyPhoto - D:\AI-Tools\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\models\training_templates\4.jpg : Hash match
Loading weights [59ffe2243a] from D:\AI-Tools\stable-diffusion-webui\models\Stable-diffusion\Chilloutmix-Ni-pruned-fp16-fix.safetensors
2024-06-23 12:40:50,693 - EasyPhoto - D:\AI-Tools\stable-diffusion-webui\models\ControlNet/ip-adapter-full-face_sd15.pth : Hash match
2024-06-23 12:40:53,809 - EasyPhoto - D:\AI-Tools\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator/downloads/clip_vision\clip_h.pth : Hash match
2024-06-23 12:40:53,823 - EasyPhoto - ControlNet unit number: 5
2024-06-23 12:40:53,823 - EasyPhoto - Display score is forced to be true when IP-Adapter Control is enabled.
Cleanup completed.
Traceback (most recent call last):
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\models\builder.py", line 35, in build_model
    model = build_from_cfg(
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\utils\registry.py", line 184, in build_from_cfg
    LazyImportModule.import_module(sig)
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\utils\import_utils.py", line 463, in import_module
    importlib.import_module(module_name)
  File "C:\Users\username\miniconda3\envs\stable-diffusion-webui\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\models\cv\image_face_fusion\image_face_fusion.py", line 36, in <module>
    class ImageFaceFusion(TorchModel):
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\utils\registry.py", line 125, in _register
    self._register_module(
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\utils\registry.py", line 75, in _register_module
    raise KeyError(f'{module_name} is already registered in '
KeyError: 'image-face-fusion is already registered in models[image-face-fusion]'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\utils\registry.py", line 212, in build_from_cfg
    return obj_cls(**args)
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\pipelines\cv\image_face_fusion_pipeline.py", line 43, in __init__
    super().__init__(model=model, **kwargs)
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\pipelines\base.py", line 99, in __init__
    self.model = self.initiate_single_model(model)
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\pipelines\base.py", line 53, in initiate_single_model
    return Model.from_pretrained(
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\models\base\base_model.py", line 179, in from_pretrained
    model = build_model(model_cfg, task_name=task_name)
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\models\builder.py", line 43, in build_model
    raise KeyError(e)
KeyError: KeyError('image-face-fusion is already registered in models[image-face-fusion]')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "C:\Users\username\miniconda3\envs\stable-diffusion-webui\lib\contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "C:\Users\username\miniconda3\envs\stable-diffusion-webui\lib\contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "D:\AI-Tools\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\scripts\easyphoto_infer.py", line 760, in easyphoto_infer_forward
    image_face_fusion = pipeline(Tasks.image_face_fusion, model="damo/cv_unet-image-face-fusion_damo", model_revision="v1.3")
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\pipelines\builder.py", line 163, in pipeline
    return build_pipeline(cfg, task_name=task)
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\pipelines\builder.py", line 67, in build_pipeline
    return build_from_cfg(
  File "D:\AI-Tools\stable-diffusion-webui\venv\lib\site-packages\modelscope\utils\registry.py", line 215, in build_from_cfg
    raise type(e)(f'{obj_cls.__name__}: {e}')
KeyError: "ImageFaceFusionPipeline: KeyError('image-face-fusion is already registered in models[image-face-fusion]')"

Additional information

No response
