
[BUG] ValueError: Error from parse_expr with transformed code: '6' #584

Open
Leawnn opened this issue Feb 24, 2025 · 0 comments
Labels
bug Something isn't working

Leawnn commented Feb 24, 2025

When I use lighteval vllm to evaluate MATH-500, I get an error like this:

[Screenshot of the error traceback]

The script I'm using is provided below:

export VLLM_WORKER_MULTIPROC_METHOD=spawn
MODEL=Qwen_model/Qwen2.5-Math-1.5B-Instruct
MODEL_ARGS="pretrained=$MODEL,dtype=bfloat16,max_model_length=4096,gpu_memory_utilisation=0.8,tensor_parallel_size=2"
OUTPUT_DIR=data/evals/$MODEL
# AIME 2024
TASK=aime24
lighteval vllm $MODEL_ARGS "custom|$TASK|0|0" \
    --custom-tasks src/open_r1/evaluate.py \
    --use-chat-template \
    --output-dir data/evals/$TASK

# MATH-500
TASK=math_500
lighteval vllm $MODEL_ARGS "custom|$TASK|0|0" \
    --custom-tasks src/open_r1/evaluate.py \
    --use-chat-template \
    --output-dir data/evals/$TASK

# GPQA Diamond
TASK=gpqa:diamond
lighteval vllm $MODEL_ARGS "custom|$TASK|0|0" \
    --custom-tasks src/open_r1/evaluate.py \
    --use-chat-template \
    --output-dir data/evals/$TASK
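For context on the error message itself: in recent sympy versions, `parse_expr` evaluates the transformed code and, if that evaluation fails, re-raises the original exception chained to a `ValueError` quoting the transformed code. So the quoted `'6'` is the code sympy was evaluating, and the real failure is elsewhere in the exception chain (`__cause__`/`__context__` in the full traceback). A minimal sketch of that chaining mechanism (plain Python imitation, not the actual sympy or lighteval code; `parse_expr_like` and the `Integer (6 )` input are illustrative assumptions):

```python
# Imitation of sympy's error-chaining in parse_expr: evaluate the
# transformed code, and on failure re-raise the original exception
# chained to a ValueError that quotes the code string.
def parse_expr_like(code: str):
    try:
        # Evaluate with empty builtins, like sympy evaluates transformed code
        # against its own namespace rather than the caller's.
        return eval(code, {"__builtins__": {}}, {})
    except Exception as e:
        raise e from ValueError(
            f"Error from parse_expr with transformed code: {code!r}"
        )

try:
    # 'Integer' is undefined in this namespace, so eval raises NameError;
    # the ValueError with the quoted code becomes its chained cause.
    parse_expr_like("Integer (6 )")
except Exception as e:
    print(type(e).__name__)  # the real failure (NameError here)
    print(e.__cause__)       # the quoted-code ValueError from the chain
```

The practical takeaway: the "transformed code: '6'" line is the outermost wrapper, so the root cause (e.g. a timeout or namespace issue during answer verification) should be visible further up in the chained traceback.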

Thanks for looking into this issue.

@Leawnn Leawnn added the bug Something isn't working label Feb 24, 2025