AnimateLCM: Assertion failed, number < max_number in function 'icvExtractPattern' #19

Closed
aiac opened this issue Apr 23, 2024 · 3 comments

aiac commented Apr 23, 2024

Describe the bug

[ERROR:[email protected]] global cap.cpp:643 open VIDEOIO(CV_IMAGES): raised OpenCV exception:

OpenCV(4.9.0) /io/opencv/modules/videoio/src/cap_images.cpp:274: error: (-215:Assertion failed) number < max_number in function 'icvExtractPattern'

To Reproduce
Steps to reproduce the behavior:

  1. Build image with CUDA support
  2. Run container with docker run -it --gpus=all --restart=always -p 7860:7860 -v //e/AI/biniou/outputs:/home/biniou/biniou/outputs -v //e/AI/biniou/models:/home/biniou/biniou/models -v //e/AI/biniou/cache//e//home/biniou/.cache/huggingface -v //e/AI/biniou/gfpgan:/home/biniou/biniou/gfpgan biniou:latest
  3. Choose Video -> AnimateLCM (didn't change any settings)
  4. Type in prompt & hit Execute
  5. Wait until the error occurs

Expected behavior
Video output.

Console log

>>>[AnimateLCM 📼 ]: starting module
Keyword arguments {'safety_checker': StableDiffusionSafetyChecker(
  (vision_model): CLIPVisionModel(
    (vision_model): CLIPVisionTransformer(
      (embeddings): CLIPVisionEmbeddings(
        (patch_embedding): Conv2d(3, 1024, kernel_size=(14, 14), stride=(14, 14), bias=False)
        (position_embedding): Embedding(257, 1024)
      )
      (pre_layrnorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
      (encoder): CLIPEncoder(
        (layers): ModuleList(
          (0-23): 24 x CLIPEncoderLayer(
            (self_attn): CLIPAttention(
              (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
              (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
              (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
              (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
            )
            (layer_norm1): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
            (mlp): CLIPMLP(
              (activation_fn): QuickGELUActivation()
              (fc1): Linear(in_features=1024, out_features=4096, bias=True)
              (fc2): Linear(in_features=4096, out_features=1024, bias=True)
            )
            (layer_norm2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          )
        )
      )
      (post_layernorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
    )
  )
  (visual_projection): Linear(in_features=1024, out_features=768, bias=False)
)} are not expected by AnimateDiffPipeline and will be ignored.
The config attributes {'center_input_sample': False, 'flip_sin_to_cos': True, 'freq_shift': 0, 'mid_block_type': 'UNetMidBlock2DCrossAttn', 'only_cross_attention': False, 'attention_head_dim': 8, 'dual_cross_attention': False, 'class_embed_type': None, 'addition_embed_type': None, 
'num_class_embeds': None, 'upcast_attention': False, 'resnet_time_scale_shift': 'default', 'resnet_skip_time_act': False, 'resnet_out_scale_factor': 1.0, 'time_embedding_type': 'positional', 'time_embedding_dim': None, 'time_embedding_act_fn': None, 'timestep_post_act': None, 'conv_in_kernel': 3, 'conv_out_kernel': 3, 'projection_class_embeddings_input_dim': None, 'class_embeddings_concat': False, 'mid_block_only_cross_attention': None, 'cross_attention_norm': None, 'addition_embed_type_num_heads': 64} were passed to UNetMotionModel, but are not expected and will be ignored. Please verify your config.json configuration file.
[ERROR:[email protected]] global cap.cpp:643 open VIDEOIO(CV_IMAGES): raised OpenCV exception:

OpenCV(4.9.0) /io/opencv/modules/videoio/src/cap_images.cpp:274: error: (-215:Assertion failed) number < max_number in function 'icvExtractPattern'


>>>[AnimateLCM 📼 ]: generated 1 batch(es) of 1
>>>[AnimateLCM 📼 ]: Settings : Model=emilianJR/epiCRealism | Adapter=wangfuyun/AnimateLCM | Sampler=LCM | Steps=4 | CFG scale=2 | Video length=16 frames | Size=512x512 | GFPGAN=True | Token merging=0 | nsfw_filter=True | Prompt=breathing heart | Negative prompt= | Seed List=9893499623
>>>[Text2Video-Zero 📼 ]: leaving module
>>>[biniou 🧠]: Generation finished in 00:01:33 (93 seconds)
Traceback (most recent call last):
  File "/home/biniou/biniou/env/lib/python3.11/site-packages/gradio/queueing.py", line 407, in call_prediction
    output = await route_utils.call_process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/biniou/biniou/env/lib/python3.11/site-packages/gradio/route_utils.py", line 226, in call_process_api
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/biniou/biniou/env/lib/python3.11/site-packages/gradio/blocks.py", line 1559, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/biniou/biniou/env/lib/python3.11/site-packages/gradio/blocks.py", line 1447, in postprocess_data
    prediction_value = block.postprocess(prediction_value)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/biniou/biniou/env/lib/python3.11/site-packages/gradio/components/video.py", line 273, in postprocess
    processed_files = (self._format_video(y), None)
                       ^^^^^^^^^^^^^^^^^^^^^
  File "/home/biniou/biniou/env/lib/python3.11/site-packages/gradio/components/video.py", line 350, in _format_video
    video = self.make_temp_copy_if_needed(video)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/biniou/biniou/env/lib/python3.11/site-packages/gradio/components/base.py", line 233, in make_temp_copy_if_needed
    temp_dir = self.hash_file(file_path)
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/biniou/biniou/env/lib/python3.11/site-packages/gradio/components/base.py", line 197, in hash_file
    with open(file_path, "rb") as f:
         ^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'outputs/1713882195_259951_9893499623.mp4'

Hardware

  • RAM size: 32GB
  • Processor physical core number or vcore number: 24
  • GPU: Nvidia GeForce RTX 3080

Desktop

  • latest NVIDIA Studio drivers
  • Windows 10 22H2
  • Docker for Windows with WSL2 / CUDA support
Woolverine94 (Owner) commented:

Hello @aiac,

Thanks for your feedback and your interest in biniou.

I can't reproduce this, either with a basic Linux (CPU) install or with a Docker (without CUDA) install.

Can you confirm that:

  • The file outputs/1713882195_259951_9893499623.mp4 has not been created?
  • You can generate other content (like images) that appears in the outputs directory?

I suspect that this could be linked to this issue (which is supposed to be "fixed") in your specific context (Windows / CUDA / Docker).

If that's the case, it should be pretty easy to solve, as it's only related to the name of the generated output file. I will run some tests later today and try to propose a commit ASAP.
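
A minimal sketch of the suspected failure mode (an assumption on my side, not something confirmed in the biniou code): when the target file is missing, OpenCV can fall back to the CV_IMAGES backend, which parses the trailing digit run of the path as an image-sequence pattern, and a number as long as the 10-digit seed can trip the "number < max_number" assertion shown above.

# Illustrative probe only; the path is the one from the report.
import cv2

cap = cv2.VideoCapture("outputs/1713882195_259951_9893499623.mp4")
print("opened:", cap.isOpened())  # expected: False, with the CV_IMAGES error printed
cap.release()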

Woolverine94 self-assigned this Apr 24, 2024
Woolverine94 added a commit that referenced this issue Apr 24, 2024
Woolverine94 (Owner) commented:

@aiac,

Commit bcbc1f9 introduces a possible workaround for the behavior you encountered, by using a temporary name during the video creation step, then renaming the file to its final name.

Please confirm whether it has any effect on your issue.
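
For reference, a minimal sketch of that approach (illustrative only, with hypothetical names; not the actual code from commit bcbc1f9):

import os
import cv2

def save_video_via_temp_name(frames, final_path, fps=8):
    # Write the video under a short temporary name first, so the long
    # timestamp/seed numbers never appear in the path OpenCV has to parse,
    # then rename the finished file to its final name.
    out_dir = os.path.dirname(final_path) or "."
    os.makedirs(out_dir, exist_ok=True)
    tmp_path = os.path.join(out_dir, "temp_video.mp4")

    height, width = frames[0].shape[:2]
    writer = cv2.VideoWriter(tmp_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))
    for frame in frames:
        writer.write(frame)  # expects BGR uint8 numpy frames
    writer.release()

    os.replace(tmp_path, final_path)  # rename to the final timestamp/seed-based name
    return final_path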

Woolverine94 pinned this issue Apr 25, 2024
Woolverine94 unpinned this issue Apr 25, 2024
Woolverine94 (Owner) commented:

Closing this issue, as it has been inactive for more than a month. Please don't hesitate to re-open it if required.

Also, re-reading your initial log, I notice there was a typo in your command line:
-v //e/AI/biniou/cache//e//home/biniou/.cache/huggingface
should be
-v //e/AI/biniou/cache:/home/biniou/.cache/huggingface

It's unlikely to be related to this issue, but you should correct it to save some disk space.
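
For completeness, the full command from the original report with that mount corrected would be:

docker run -it --gpus=all --restart=always -p 7860:7860 -v //e/AI/biniou/outputs:/home/biniou/biniou/outputs -v //e/AI/biniou/models:/home/biniou/biniou/models -v //e/AI/biniou/cache:/home/biniou/.cache/huggingface -v //e/AI/biniou/gfpgan:/home/biniou/biniou/gfpgan biniou:latest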
