Flux: mat1 and mat2 shapes cannot be multiplied #159

Closed
LiJT opened this issue Nov 13, 2024 · 4 comments

Comments

@LiJT

LiJT commented Nov 13, 2024

Hello TinyTerra!

I really like your nodes a lot! After trying Flux with the ttN KSampler, this error appeared. May I ask why it happened?
Problem
Workflow:
TinyBug.json

Log:

got prompt
model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
model_type FLUX
Requested to load Flux
Loading 1 new model
loaded completely 0.0 11350.048889160156 True
!!! Exception during processing !!! mat1 and mat2 shapes cannot be multiplied (4096x16 and 64x3072)
Traceback (most recent call last):
  File "E:\ComfyUI-aki-v1.3\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "E:\ComfyUI-aki-v1.3\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "E:\ComfyUI-aki-v1.3\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "E:\ComfyUI-aki-v1.3\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "E:\ComfyUI-aki-v1.3\custom_nodes\ComfyUI_tinyterraNodes\ttNpy\tinyterraNodes.py", line 2472, in sample
    return process_sample_state(model, input_image_override, clip, latent, vae, seed, positive, negative, lora_name, lora_strength, lora_strength,
  File "E:\ComfyUI-aki-v1.3\custom_nodes\ComfyUI_tinyterraNodes\ttNpy\tinyterraNodes.py", line 2424, in process_sample_state
    samples = sampler.common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, samples, denoise=denoise, preview_latent=preview_latent, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, disable_noise=disable_noise)
  File "E:\ComfyUI-aki-v1.3\custom_nodes\ComfyUI_tinyterraNodes\ttNpy\tinyterraNodes.py", line 426, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "E:\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 22, in informative_sample
    raise e
  File "E:\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs)  # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
  File "E:\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 420, in motion_sample
    return orig_comfy_sample(model, noise, *args, **kwargs)
  File "E:\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\sampling.py", line 116, in acn_sample
    return orig_comfy_sample(model, *args, **kwargs)
  File "E:\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 117, in uncond_multiplier_check_cn_sample
    return orig_comfy_sample(model, *args, **kwargs)
  File "E:\ComfyUI-aki-v1.3\comfy\sample.py", line 43, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "E:\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-TiledDiffusion\utils.py", line 51, in KSampler_sample
    return orig_fn(*args, **kwargs)
  File "E:\ComfyUI-aki-v1.3\comfy\samplers.py", line 855, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "E:\ComfyUI-aki-v1.3\comfy\samplers.py", line 753, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "E:\ComfyUI-aki-v1.3\comfy\samplers.py", line 740, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "E:\ComfyUI-aki-v1.3\comfy\samplers.py", line 719, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "E:\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-TiledDiffusion\utils.py", line 34, in KSAMPLER_sample
    return orig_fn(*args, **kwargs)
  File "E:\ComfyUI-aki-v1.3\comfy\samplers.py", line 624, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "E:\ComfyUI-aki-v1.3\python\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "E:\ComfyUI-aki-v1.3\comfy\k_diffusion\sampling.py", line 155, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "E:\ComfyUI-aki-v1.3\comfy\samplers.py", line 299, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
  File "E:\ComfyUI-aki-v1.3\comfy\samplers.py", line 706, in __call__
    return self.predict_noise(*args, **kwargs)
  File "E:\ComfyUI-aki-v1.3\comfy\samplers.py", line 709, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
  File "E:\ComfyUI-aki-v1.3\comfy\samplers.py", line 279, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
  File "E:\ComfyUI-aki-v1.3\comfy\samplers.py", line 228, in calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "E:\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 69, in apply_model_uncond_cleanup_wrapper
    return orig_apply_model(self, *args, **kwargs)
  File "E:\ComfyUI-aki-v1.3\comfy\model_base.py", line 144, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "E:\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\ComfyUI-aki-v1.3\comfy\ldm\flux\model.py", line 181, in forward
    out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control, transformer_options)
  File "E:\ComfyUI-aki-v1.3\comfy\ldm\flux\model.py", line 106, in forward_orig
    img = self.img_in(img)
  File "E:\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\ComfyUI-aki-v1.3\comfy\ops.py", line 68, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)
  File "E:\ComfyUI-aki-v1.3\comfy\ops.py", line 64, in forward_comfy_cast_weights
    return torch.nn.functional.linear(input, weight, bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (4096x16 and 64x3072)
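For context, a minimal sketch of why these two shapes clash, assuming Flux patchifies the latent into 2x2 patches and feeds the resulting tokens to a Linear(64, 3072) img_in projection (a 1024x1024 image gives a 128x128 latent); this is shape arithmetic only, not the actual ComfyUI code:

```python
import torch

# Flux's img_in expects 16 latent channels * a 2x2 patch = 64 input features
# (assumption based on the 64x3072 weight shape in the error above).
img_in = torch.nn.Linear(64, 3072)

def patchify(latent):
    # (B, C, H, W) -> (B, H/2 * W/2, C*2*2): group each 2x2 patch of latent pixels into one token
    b, c, h, w = latent.shape
    x = latent.reshape(b, c, h // 2, 2, w // 2, 2)
    return x.permute(0, 2, 4, 1, 3, 5).reshape(b, (h // 2) * (w // 2), c * 4)

flux_latent = torch.randn(1, 16, 128, 128)  # 16-channel SD3/Flux latent -> 4096 tokens of dim 64
sd_latent   = torch.randn(1, 4, 128, 128)   # 4-channel SD1.5/SDXL latent -> 4096 tokens of dim 16

print(img_in(patchify(flux_latent)).shape)  # torch.Size([1, 4096, 3072]) -- what Flux expects
img_in(patchify(sd_latent))                 # RuntimeError: mat1 and mat2 shapes cannot be multiplied (4096x16 and 64x3072)
```
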
@meowkraft

I am having this issue as well.

@felrickson

Also having the same issue

@TinyTerra
Owner

TinyTerra commented Feb 6, 2025

Apologies for the delay on this.

The error is occurring because you are using a standard empty latent instead of an SD3 empty latent.
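
A minimal sketch of the difference, assuming the standard empty latent is 4-channel and the SD3/Flux empty latent is 16-channel, both at 1/8 of the image resolution (only the channel count matters for this error):

```python
import torch

width, height, batch = 1024, 1024, 1

# EmptyLatentImage-style latent: 4 channels -- Flux's img_in rejects this (see the traceback above)
standard_latent = torch.zeros([batch, 4, height // 8, width // 8])

# EmptySD3LatentImage-style latent: 16 channels -- the shape Flux/SD3 models expect
# (the real node's fill value may differ; only the shape is illustrated here)
sd3_latent = torch.zeros([batch, 16, height // 8, width // 8])

print(standard_latent.shape)  # torch.Size([1, 4, 128, 128])
print(sd3_latent.shape)       # torch.Size([1, 16, 128, 128])
```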

The ttN loaders automatically pick up the model type and switch in the background, but they don't currently have support for the 'standard' Flux settings (clip attention, weight_dtype, etc.) built in. If there is interest in a tinyFluxLoader node, let me know and I'd be happy to work on adding it.

A slightly cut-down version of your example workflow that doesn't error out:

TinyBugfix.json

@LiJT
Copy link
Author

LiJT commented Feb 7, 2025

Thank you so much Tiny!!!!!!!!!!!!!

LiJT closed this as completed Feb 7, 2025