I encountered the issue shown in the screenshot, even though the script `run_deepscaler_1.5b_8k` already sets `export VLLM_ATTENTION_BACKEND=XFORMERS`.
I checked the `torch_dtype` of the loaded models and found that the actor model uses float32 while the reference model uses bfloat16, which is likely the cause of the problem above. Why are the two models not configured to use the same precision?
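For reference, here is a minimal sketch (not the repo's actual loading code) of how one might force both models to the same precision and verify it, assuming they are loaded through Hugging Face `transformers`; the checkpoint name below is a placeholder:

```python
# Sketch only: load the actor and reference models with an explicit, shared dtype.
import torch
from transformers import AutoModelForCausalLM

CHECKPOINT = "path/or/name-of-1.5b-checkpoint"  # placeholder, not the real path

actor = AutoModelForCausalLM.from_pretrained(CHECKPOINT, torch_dtype=torch.bfloat16)
ref = AutoModelForCausalLM.from_pretrained(CHECKPOINT, torch_dtype=torch.bfloat16)

# Both should now report torch.bfloat16 instead of one float32 / one bfloat16.
print(next(actor.parameters()).dtype)
print(next(ref.parameters()).dtype)
```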