Base models larger than 7/8B require multi-GPU parallelism on my hardware. I have already verified that instruct models (e.g. qwen2_5-72b-instruct) can be evaluated with multi-GPU data parallelism via the openai service method (--max-num-workers 8; I use this method because --hf-gpu-nums does not actually work). In short: how can a base model use the openai service method for PPL evaluation, e.g. mmlu_ppl?
Will you implement it?
I would like to implement this feature and create a PR!
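A minimal sketch of how PPL scoring over an OpenAI-compatible service could work. This assumes the backend (e.g. a vLLM server) supports `echo=True` together with `logprobs` on `/v1/completions`, so the prompt's own token logprobs come back without generating anything; the helper names (`ppl_from_logprobs`, `score_via_openai_api`) are hypothetical illustrations, not an existing OpenCompass API:

```python
import math

def ppl_from_logprobs(token_logprobs):
    """Perplexity = exp(-mean(log p(token))).

    The first prompt token usually has no logprob (returned as None),
    so None entries are skipped.
    """
    vals = [lp for lp in token_logprobs if lp is not None]
    return math.exp(-sum(vals) / len(vals))

def score_via_openai_api(client, model, text):
    # Hypothetical sketch: with echo=True and max_tokens=0 the server
    # scores `text` itself instead of continuing it, which is exactly
    # what PPL-style evaluation of a base model needs.
    resp = client.completions.create(
        model=model,
        prompt=text,
        max_tokens=0,
        echo=True,
        logprobs=0,
    )
    return ppl_from_logprobs(resp.choices[0].logprobs.token_logprobs)
```

For example, if every scored token had probability 0.5, `ppl_from_logprobs([None, math.log(0.5), math.log(0.5)])` gives a perplexity of 2.0. Whether a given serving backend honors `echo` + `logprobs` with `max_tokens=0` would need to be checked per backend.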
GenerallyCovetous changed the title from "[Feature] Can the base model be served by openai and then evaluated for ppl, e.g. mmlu_ppl?" to "[Feature] Can base model start the service via openai for ppl evaluation?" on Mar 19, 2025.