Description
When I evaluated LLaVA-Video on the mlvu_test benchmark, the following error occurred:
```
Traceback (most recent call last):
  File "/home/hunterj/gys/xxx/lmms-eval/lmms_eval/main.py", line 353, in cli_evaluate
    results, samples = cli_evaluate_single(args)
  File "/home/hunterj/gys/xxx/lmms-eval/lmms_eval/main.py", line 488, in cli_evaluate_single
    results = evaluator.simple_evaluate(
  File "/home/hunterj/gys/xxx/lmms-eval/lmms_eval/utils.py", line 536, in _wrapper
    return fn(*args, **kwargs)
  File "/home/hunterj/gys/xxx/lmms-eval/lmms_eval/evaluator.py", line 204, in simple_evaluate
    task_dict = get_task_dict(tasks, task_manager, task_type)
  File "/home/hunterj/gys/xxx/lmms-eval/lmms_eval/tasks/init.py", line 565, in get_task_dict
    task_name_from_string_dict = task_manager.load_task_or_group(
  File "/home/hunterj/gys/xxx/lmms-eval/lmms_eval/tasks/init.py", line 378, in load_task_or_group
    all_loaded_tasks = dict(collections.ChainMap(*map(load_fn, task_list)))
  File "/home/hunterj/gys/xxx/lmms-eval/lmms_eval/tasks/init.py", line 297, in _load_individual_task_or_group
    return _load_task(task_config, task=name_or_config)
  File "/home/hunterj/gys/xxx/lmms-eval/lmms_eval/tasks/init.py", line 267, in _load_task
    task_object = TaskObj(config=config, model_name=self.model_name)
  File "/home/hunterj/gys/xxx/lmms-eval/lmms_eval/api/task.py", line 755, in init
    test_text = self.doc_to_text(test_doc)
  File "/home/hunterj/gys/xxx/lmms-eval/lmms_eval/api/task.py", line 1321, in doc_to_text
    else doc_to_text(
  File "/home/hunterj/gys/xxx/lmms-eval/lmms_eval/tasks/mlvu/utils.py", line 66, in mlvu_doc_to_text
    pre_prompt = lmms_eval_specific_kwargs.get("pre_prompt", "")
AttributeError: 'NoneType' object has no attribute 'get'
2025-10-09 09:38:08 | ERROR | main:cli_evaluate:375 - Error during evaluation: 'NoneType' object has no attribute 'get'. Please set --verbosity=DEBUG to get more information.
```
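From the traceback, `mlvu_doc_to_text` receives `lmms_eval_specific_kwargs=None` (presumably because the mlvu_test task config does not define that entry for this model) and then calls `.get` on it. A minimal sketch of a defensive workaround is below; the helper name mirrors `lmms_eval/tasks/mlvu/utils.py`, but the doc fields and prompt assembly shown here are simplified assumptions, not the repo's exact implementation.

```python
def mlvu_doc_to_text(doc, lmms_eval_specific_kwargs=None):
    # Guard: when the task YAML has no lmms_eval_specific_kwargs entry,
    # the framework may pass None instead of a dict, which crashes .get().
    if lmms_eval_specific_kwargs is None:
        lmms_eval_specific_kwargs = {}
    pre_prompt = lmms_eval_specific_kwargs.get("pre_prompt", "")
    post_prompt = lmms_eval_specific_kwargs.get("post_prompt", "")
    # Simplified prompt assembly for illustration; the real function
    # formats the MLVU question and options from the doc.
    return f"{pre_prompt}{doc.get('question', '')}{post_prompt}"
```

With this guard, calling the function without kwargs simply falls back to empty pre/post prompts instead of raising `AttributeError`. An alternative fix is to add an `lmms_eval_specific_kwargs` block (with `pre_prompt`/`post_prompt`) to the mlvu_test task YAML so a dict is always passed.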