
Fixes the inconsistency of the optionality of attention_mask #37153

Merged: 8 commits into huggingface:main on Apr 1, 2025

Conversation

@Zephyr271828 (Contributor) commented on Mar 31, 2025

What does this PR do?

This PR fixes Issue #37046.
This issue concerns the inconsistent optionality of the attention_mask parameter across functions. Specifically, attention_mask is typed Optional[torch.Tensor] = None in LlamaForCausalLM, LlamaModel, and LlamaDecoderLayer. In LlamaAttention (the class that wraps the attention interface), the type is Optional[torch.Tensor] without a default, because LlamaModel always passes attention_mask to it, whether the value is a torch.Tensor or None. As a result, attention_mask is always forwarded to the attention interface, Tensor or None alike.
The key problem lies in the type annotations for flash attention. In flash_attention_forward, attention_mask is typed Optional[torch.Tensor], whereas in _flash_attention_forward, a function called by flash_attention_forward, it is typed torch.Tensor. This is unreasonable because:

  1. it's inconsistent with the specification in flash_attention_forward.
  2. _flash_attention_forward is usable even if attention_mask is None.

Therefore, this PR fixes the type annotation of attention_mask in _flash_attention_forward, together with its docstring, to address the issue above.
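
To make the call relationship concrete, here is a minimal sketch of the pattern (simplified stand-in code, not the actual transformers implementation; PyTorch's scaled_dot_product_attention stands in for the flash-attention kernel purely to keep the example runnable). The outer function forwards attention_mask unconditionally, so the inner helper must be annotated Optional[torch.Tensor], which is what this PR changes:

```python
from typing import Optional

import torch
import torch.nn.functional as F


def flash_attention_forward(
    query: torch.Tensor,
    key: torch.Tensor,
    value: torch.Tensor,
    attention_mask: Optional[torch.Tensor] = None,
) -> torch.Tensor:
    # The mask is forwarded as-is, Tensor or None alike, so the inner
    # helper's annotation has to admit None as well.
    return _flash_attention_forward(query, key, value, attention_mask)


def _flash_attention_forward(
    query: torch.Tensor,
    key: torch.Tensor,
    value: torch.Tensor,
    attention_mask: Optional[torch.Tensor],  # previously annotated as torch.Tensor
) -> torch.Tensor:
    # Works with or without a mask, matching point 2 above.
    return F.scaled_dot_product_attention(query, key, value, attn_mask=attention_mask)
```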

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a Github issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@github-actions bot marked this pull request as draft on March 31, 2025 at 17:45

Hi 👋, thank you for opening this pull request! The pull request is converted to draft by default. The CI will be paused while the PR is in draft mode. When it is ready for review, please click the Ready for review button (at the bottom of the PR page). This will assign reviewers and trigger CI.

@Zephyr271828 changed the title from "Yufeng xu" to "Fixes the inconsistency of the optionality of attention_mask" on Mar 31, 2025
@Zephyr271828 marked this pull request as ready for review on March 31, 2025 at 17:51
```diff
@@ -280,7 +280,7 @@ def _flash_attention_forward(
     query_states: torch.Tensor,
     key_states: torch.Tensor,
     value_states: torch.Tensor,
-    attention_mask: torch.Tensor,
+    attention_mask: Optional[torch.Tensor],
```

You should then set it to None by default

@Zephyr271828 (Contributor, Author) commented on Mar 31, 2025

Hi @Godofnothing! Yes, I noticed this is different from your proposed solution in issue #37046. My point is that attention_mask will always be passed from flash_attention_forward to _flash_attention_forward, though the value can be either a torch.Tensor or None, so I think it is not necessary to explicitly set the default value to None here.

A similar case occurs in modeling_llama.py: LlamaModel uses Optional[torch.Tensor] = None for attention_mask, whereas LlamaAttention simply uses Optional[torch.Tensor], because LlamaModel is responsible for passing the value of attention_mask to LlamaAttention. Technically I could change the type of attention_mask in all involved functions to Optional[torch.Tensor] = None, but I did not do that for the sake of simplicity.

Please correct me if I'm wrong.
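
For illustration, a minimal sketch of this pattern with hypothetical names (public_entry and _inner are stand-ins, not the actual Llama classes): an argument annotated Optional[torch.Tensor] without a default still accepts None; it simply requires the caller to pass the value explicitly, which is what the outer model code does.

```python
from typing import Optional

import torch


def public_entry(x: torch.Tensor, attention_mask: Optional[torch.Tensor] = None) -> torch.Tensor:
    # Public API: callers may omit the mask, so a default of None is provided
    # (analogous to LlamaModel).
    return _inner(x, attention_mask)


def _inner(x: torch.Tensor, attention_mask: Optional[torch.Tensor]) -> torch.Tensor:
    # Internal layer: always receives an explicit value, Tensor or None,
    # so Optional without a default is enough (analogous to LlamaAttention).
    return x if attention_mask is None else x * attention_mask


x = torch.ones(2, 3)
print(public_entry(x))                     # mask omitted by the caller, None forwarded explicitly
print(public_entry(x, torch.zeros(2, 3)))  # mask supplied
```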

No, that's fine, thanks for your efforts!

@Rocketknight1 (Member) left a comment

LGTM! I think Optional[torch.Tensor] without a default is okay here.

@Rocketknight1 merged commit bf41e54 into huggingface:main on Apr 1, 2025
18 checks passed
dmdaksh pushed a commit to dmdaksh/transformers that referenced this pull request Apr 2, 2025
…face#37153)

* debugging issue 36758

* debugging issue 36758

* debugging issue 36758

* updated attn_mask type specification in _flash_attention_forward

* removed pdb

* added a blank line

* removed indentation
zucchini-nlp pushed a commit to BakerBunker/transformers that referenced this pull request Apr 2, 2025
…face#37153)

* debugging issue 36758

* debugging issue 36758

* debugging issue 36758

* updated attn_mask type specification in _flash_attention_forward

* removed pdb

* added a blank line

* removed indentation
zucchini-nlp pushed a commit to zucchini-nlp/transformers that referenced this pull request May 14, 2025
…face#37153)

* debugging issue 36758

* debugging issue 36758

* debugging issue 36758

* updated attn_mask type specification in _flash_attention_forward

* removed pdb

* added a blank line

* removed indentation