+        position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,  # will become mandatory in Transformers v4.45
     ):
         """Torch module forward method.
@@ -180,6 +181,7 @@ def forward(
             output_attentions (Optional[bool], optional): Whether or not to return the attention tensors of all attention layers. Defaults to False.
             use_cache (Optional[bool], optional): If set to `True`, `past_key_values` key value states are returned. Defaults to False.
             cache_position (Optional[torch.LongTensor], optional): Cache position, useful for static cache applications. Defaults to None.
+            position_embeddings (Optional[Tuple[torch.Tensor, torch.Tensor]], optional): If set to a tuple, the `sin` and `cos` rotary embeddings are computed once by the outer `LlamaModel` and passed in. Defaults to None.
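For context, the new `position_embeddings` argument mirrors upstream Hugging Face Transformers, where the outer `LlamaModel` computes the rotary `cos`/`sin` tables once per forward pass and shares the same tuple with every decoder layer. The snippet below is a minimal sketch of that calling pattern against upstream Transformers (assumed to be roughly v4.44, where `LlamaModel.rotary_emb` and the per-layer `position_embeddings` parameter exist); the tiny config and random tensors are placeholders for illustration only, not part of this repository.

```python
import torch
from transformers import LlamaConfig, LlamaModel

# Tiny, randomly initialized config purely for illustration; a real checkpoint
# would normally be loaded instead.
config = LlamaConfig(
    hidden_size=64,
    intermediate_size=128,
    num_hidden_layers=2,
    num_attention_heads=4,
    num_key_value_heads=4,
    vocab_size=1000,
)
model = LlamaModel(config).eval()

batch, seq_len = 1, 8
hidden_states = torch.randn(batch, seq_len, config.hidden_size)
position_ids = torch.arange(seq_len).unsqueeze(0)

with torch.no_grad():
    # The outer LlamaModel owns a single rotary-embedding module and computes
    # the (cos, sin) tables once for the given positions...
    cos, sin = model.rotary_emb(hidden_states, position_ids)

    # ...then hands the same tuple to each decoder layer, so individual layers
    # no longer recompute the rotary tables themselves.
    layer_output = model.layers[0](
        hidden_states,
        position_ids=position_ids,
        position_embeddings=(cos, sin),
    )

# Depending on the Transformers version, the layer returns a tuple or a tensor.
new_hidden = layer_output[0] if isinstance(layer_output, tuple) else layer_output
print(new_hidden.shape)  # torch.Size([1, 8, 64])
```

Computing the tables once in the outer model avoids repeating identical work in every layer, which is why the in-code comment notes that the per-layer argument is slated to become mandatory in Transformers v4.45.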