
AssertionError: Timestep embedding is needed for a time-aware attention processor. #16

Open
Greatlid opened this issue Dec 16, 2024 · 2 comments


@Greatlid

Great work! However, when I fine-tuned on my dataset, I encountered the error: AssertionError: Timestep embedding is needed for a time-aware attention processor. Checking the training code, it seems the inputs to the UNet do not include the timestep embedding, which causes this error. How can I resolve it?

@Greatlid
Author


A more detailed description is as follows:
File "/home/pl/anaconda3/envs/instantir/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/pl/anaconda3/envs/instantir/lib/python3.9/site-packages/diffusers/models/attention.py", line 366, in forward
attn_output = self.attn2(
File "/home/pl/anaconda3/envs/instantir/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/pl/anaconda3/envs/instantir/lib/python3.9/site-packages/diffusers/models/attention_processor.py", line 549, in forward
return self.processor(
File "/home/pl/HSITask/HSISythesis/InstantIR/module/ip_adapter/attention_processor.py", line 1102, in call
assert temb is not None, "Timestep embedding is needed for a time-aware attention processor."
AssertionError: Timestep embedding is needed for a time-aware attention processor.
Steps: 0%| | 0/2610800 [00:01<?, ?it/s]
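
For context, diffusers forwards any extra entries in cross_attention_kwargs as keyword arguments to each attention processor's __call__ (the traceback above shows exactly that path: attention.py -> attn2 -> processor). A time-aware processor therefore only receives temb if the caller passes it explicitly. A minimal sketch of that shape (hypothetical class, not the repo's actual processor):

class TimeAwareAttnProcessorSketch:
    def __call__(self, attn, hidden_states, encoder_hidden_states=None,
                 attention_mask=None, temb=None):
        # `temb` stays None unless unet(...) is called with
        # cross_attention_kwargs={"temb": ...}, which is why the
        # assertion fires during training.
        assert temb is not None, "Timestep embedding is needed for a time-aware attention processor."
        # A real processor would use `temb` to modulate the attention
        # computation here; this sketch stops at the failing check.
        return hidden_states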

@Greatlid
Author

Well, I added the following code and modified the inputs of the UNet. The code executes successfully, but I'm uncertain about its correctness.

code:
# Recompute the UNet's internal time embedding so it can be handed to the
# time-aware attention processors via cross_attention_kwargs.
cross_attention_t_emb = unet.get_time_embed(
    sample=noisy_model_input, timestep=start_timesteps
)
timestep_cond = None
cross_attention_emb = unet.time_embedding(cross_attention_t_emb, timestep_cond)
# Add the augmented conditioning embedding (e.g. SDXL's added text/time
# embeddings), as the UNet forward pass does internally.
cross_attention_aug_emb = unet.get_aug_embed(
    emb=cross_attention_emb,
    encoder_hidden_states=prompt_embeds,
    added_cond_kwargs=uncond_encoded_text,
)
cross_attention_emb = (
    cross_attention_emb + cross_attention_aug_emb
    if cross_attention_aug_emb is not None
    else cross_attention_emb
)
if unet.time_embed_act is not None:
    cross_attention_emb = unet.time_embed_act(cross_attention_emb)
# Forward the embedding to the attention processors as `temb`.
current_cross_attention_kwargs = {"temb": cross_attention_emb}

noise_pred = unet(
    noisy_model_input,
    start_timesteps,
    encoder_hidden_states=uncond_prompt_embeds,
    added_cond_kwargs=uncond_encoded_text,
    cross_attention_kwargs=current_cross_attention_kwargs,
).sample
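
For what it's worth, this sequence (get_time_embed -> time_embedding -> get_aug_embed -> time_embed_act) appears to mirror the embedding computation inside UNet2DConditionModel.forward in recent diffusers versions, so the temb handed to the processors should match what the UNet computes internally. The part worth double-checking is whether prompt_embeds and uncond_encoded_text are the right conditional vs. unconditional inputs for this training setup, since the unet(...) call itself uses uncond_prompt_embeds.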
