No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(24, 4352, 1, 128) (torch.bfloat16)
     key         : shape=(24, 4352, 1, 128) (torch.bfloat16)
     value       : shape=(24, 4352, 1, 128) (torch.bfloat16)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
`decoderF` is not supported because:
    attn_bias type is <class 'NoneType'>
    bf16 is only supported on A100+ GPUs
`[email protected]` is not supported because:
    requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
    bf16 is only supported on A100+ GPUs
`cutlassF` is not supported because:
    bf16 is only supported on A100+ GPUs
`smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    dtype=torch.bfloat16 (supported: {torch.float32})
    bf16 is only supported on A100+ GPUs
    unsupported embed per head: 128
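
Every candidate kernel rejects the call for the same underlying reason: the inputs are `torch.bfloat16`, and all of xformers' bf16 attention kernels require compute capability (8, 0) or newer (Ampere/A100+), while the GPU here reports capability (7, 5) (Turing, e.g. a T4 or RTX 20xx). A common workaround on such GPUs is to run the attention call in fp16 (or fp32) instead. Below is a minimal sketch of that idea; the helper name `attention_with_dtype_fallback` is hypothetical, not part of xformers:

```python
import torch
import xformers.ops as xops

def attention_with_dtype_fallback(q, k, v, attn_bias=None, p=0.0):
    # bf16 kernels are gated on compute capability >= (8, 0).
    # On older GPUs (e.g. capability (7, 5)), cast to fp16 before the
    # call and cast the output back, so a supported kernel can be picked.
    major, minor = torch.cuda.get_device_capability(q.device)
    if q.dtype == torch.bfloat16 and (major, minor) < (8, 0):
        out = xops.memory_efficient_attention(
            q.to(torch.float16),
            k.to(torch.float16),
            v.to(torch.float16),
            attn_bias=attn_bias,
            p=p,
        )
        return out.to(torch.bfloat16)
    return xops.memory_efficient_attention(q, k, v, attn_bias=attn_bias, p=p)

# Same layout as in the error: (batch, seq_len, n_heads, head_dim)
q = k = v = torch.randn(24, 4352, 1, 128, device="cuda", dtype=torch.bfloat16)
out = attention_with_dtype_fallback(q, k, v)
```

Note that fp16 can overflow where bf16 would not, so depending on the model it may be safer to cast to fp32 instead; the capability check above keeps the fast bf16 path intact on A100-class hardware.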