Commit f0b1ee2

Merge pull request #206 from EvolvingLMMs-Lab/patch/fix_kwargs

fix: update from previous model_specific_prompt to current lmms_eval_kwargs to avoid warnings

2 parents: c2f73de + 22ed307

File tree: 17 files changed (+21, -19 lines)
docs/task_guide.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -40,7 +40,7 @@ metric_list:
   - metric: mme_cognition_score
     aggregation: !function utils.mme_aggregate_results
     higher_is_better: true
-model_specific_prompt_kwargs:
+lmms_eval_specific_kwargs:
   default:
     pre_prompt: ""
     post_prompt: "\nAnswer the question using a single word or phrase."
@@ -52,7 +52,7 @@ metadata:
 ```
 
 You can pay special attention to the `process_results` and `metric_list` fields, which are used to define how the model output is post-processed and scored.
-Also, the `model_specific_prompt_kwargs` field is used to define model-specific prompt configurations. The default is set to follow Llava.
+Also, the `lmms_eval_specific_kwargs` field is used to define model-specific prompt configurations. The default is set to follow Llava.
 
 PPL-based tasks:
 - Seedbench (`lmms_eval/tasks/seedbench/seedbench_ppl.yaml`)
````
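As the task guide above describes, the renamed `lmms_eval_specific_kwargs` block from the YAML is passed into the task's `doc_to_text` function. A minimal sketch of how such a function might consume it (the function name `example_doc_to_text` and the `doc` layout are illustrative, not from the commit):

```python
# Hypothetical sketch: a task doc_to_text that reads the renamed
# lmms_eval_specific_kwargs block; falls back to empty prompts when
# the block is absent.
def example_doc_to_text(doc, lmms_eval_specific_kwargs=None):
    kwargs = lmms_eval_specific_kwargs or {}
    pre_prompt = kwargs.get("pre_prompt", "")
    post_prompt = kwargs.get("post_prompt", "")
    return f"{pre_prompt}{doc['question']}{post_prompt}"
```

With the `default` block from the YAML above, this would wrap the question between `pre_prompt` and `post_prompt`.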

lmms_eval/tasks/ai2d/ai2d_lite.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -9,7 +9,7 @@ doc_to_visual: !function utils.ai2d_doc_to_visual
 doc_to_text: !function utils.ai2d_doc_to_text
 doc_to_target: !function utils.ai2d_doc_to_target
 
-model_specific_prompt_kwargs:
+lmms_eval_specific_kwargs:
   default:
     prompt_format: mcq
     pre_prompt: ""
```

lmms_eval/tasks/chartqa/chartqa_lite.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -25,7 +25,7 @@ metric_list:
     higher_is_better: true
 metadata:
   - version: 0.0
-model_specific_prompt_kwargs:
+lmms_eval_specific_kwargs:
   default:
     pre_prompt: ""
     post_prompt: "\nAnswer the question with a single word."
```

lmms_eval/tasks/docvqa/docvqa_val_lite.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -16,7 +16,7 @@ generation_kwargs:
   max_new_tokens: 32
   temperature: 0
   do_sample: False
-model_specific_prompt_kwargs:
+lmms_eval_specific_kwargs:
   default:
     pre_prompt: ""
     post_prompt: "\nAnswer the question using a single word or phrase."
```

lmms_eval/tasks/gqa/gqa_lite.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -23,7 +23,7 @@ metric_list:
 metadata:
   - version: 0.0
 
-model_specific_prompt_kwargs:
+lmms_eval_specific_kwargs:
   default:
     pre_prompt: ""
     post_prompt: "\nAnswer the question using a single word or phrase."
```

lmms_eval/tasks/infovqa/infovqa_val_lite.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -16,7 +16,7 @@ generation_kwargs:
 max_new_tokens: 32
 temperature: 0
 do_sample: False
-model_specific_prompt_kwargs:
+lmms_eval_specific_kwargs:
   default:
     pre_prompt: ""
     post_prompt: "\nAnswer the question using a single word or phrase."
```

lmms_eval/tasks/mirb/mirb.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -10,7 +10,7 @@ doc_to_text: !function utils.mirb_doc_to_text
 doc_to_target: !function utils.mirb_doc_to_target
 process_results: !function utils.mirb_process_results
 
-model_specific_prompt_kwargs:
+lmms_eval_specific_kwargs:
   default:
     pre_prompt: ""
     post_prompt: ""
```

lmms_eval/tasks/mirb/utils.py

Lines changed: 3 additions & 3 deletions

```diff
@@ -24,11 +24,11 @@ def get_task_instruction(dataset):
     return instr
 
 
-def mirb_doc_to_text(doc, model_specific_prompt_kwargs=None):
+def mirb_doc_to_text(doc, lmms_eval_specific_kwargs=None):
     subset, question = doc["subset"], doc["questions"]
     task_instruction = get_task_instruction(subset)
-    post_prompt = model_specific_prompt_kwargs["post_prompt"]
-    pre_prompt = model_specific_prompt_kwargs["pre_prompt"]
+    post_prompt = lmms_eval_specific_kwargs["post_prompt"]
+    pre_prompt = lmms_eval_specific_kwargs["pre_prompt"]
     return f"{pre_prompt}{task_instruction}{question}{post_prompt}"
 
 
```
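Note that the renamed `mirb_doc_to_text` indexes the kwargs dict directly, so it would raise a `TypeError` if called with the default `lmms_eval_specific_kwargs=None`. A defensive variant might look like the sketch below; the fallback behavior and the `get_task_instruction_stub` helper (a stand-in for the real instruction lookup in `utils.py`, which this diff does not show) are illustrative assumptions, not part of the commit:

```python
# Illustrative stand-in for get_task_instruction in
# lmms_eval/tasks/mirb/utils.py; the real instruction table is not
# shown in this diff.
def get_task_instruction_stub(subset):
    return f"Task {subset}: "


# Hedged sketch: same prompt assembly as mirb_doc_to_text, but
# tolerant of a missing or partial kwargs dict.
def mirb_doc_to_text_safe(doc, lmms_eval_specific_kwargs=None):
    kwargs = lmms_eval_specific_kwargs or {}
    subset, question = doc["subset"], doc["questions"]
    task_instruction = get_task_instruction_stub(subset)
    pre_prompt = kwargs.get("pre_prompt", "")
    post_prompt = kwargs.get("post_prompt", "")
    return f"{pre_prompt}{task_instruction}{question}{post_prompt}"
```

In practice the framework always passes the YAML's `lmms_eval_specific_kwargs` block, so the direct indexing in the commit works; the `.get` fallback only matters when the function is called outside that harness.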
lmms_eval/tasks/mmbench/mmbench_cn_dev_lite.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -22,7 +22,7 @@ generation_kwargs:
   num_beams: 1
   do_sample: false
 process_results: !function cn_utils.mmbench_process_results
-model_specific_prompt_kwargs:
+lmms_eval_specific_kwargs:
   default:
     pre_prompt: ""
     post_prompt: "\n请直接使用所提供的选项字母作为答案回答。"
```

lmms_eval/tasks/mmbench/mmbench_en_dev_lite.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -5,7 +5,7 @@ dataset_name: mmbench_en_dev
 dataset_kwargs:
   token: True
 doc_to_target: "answer"
-model_specific_prompt_kwargs:
+lmms_eval_specific_kwargs:
   default:
     pre_prompt: ""
     post_prompt: "\nAnswer with the option's letter from the given choices directly."
```
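The YAML changes in this commit are all the same mechanical key rename. For anyone carrying third-party task configs that still use the deprecated key, a rename like this can be applied with a small script; this sketch is illustrative (the path handling and in-place rewrite are assumptions, not part of the commit):

```python
# Hedged sketch: rewrite task YAMLs that still use the deprecated key.
import pathlib

OLD_KEY = "model_specific_prompt_kwargs"
NEW_KEY = "lmms_eval_specific_kwargs"


def migrate_yaml_text(text):
    # Plain textual rename; in the files this commit touches, the old
    # name only appears as a YAML mapping key, so a substring replace
    # is sufficient.
    return text.replace(OLD_KEY, NEW_KEY)


def migrate_tree(root):
    # Rewrite every task YAML under `root` that still uses the old key.
    for path in pathlib.Path(root).rglob("*.yaml"):
        text = path.read_text(encoding="utf-8")
        if OLD_KEY in text:
            path.write_text(migrate_yaml_text(text), encoding="utf-8")
```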
