[README] near public (#63)
* Refactor logging in lmms_eval package

* Refactor variable names in lmms_eval package

* Update README.md with new features and installation instructions

* Update supported models and datasets

* Delete otter.py file

* Fix capitalization in README.md

* Update image sizes and add new features

* Refactor README.md to improve readability and add new features

* Add description for lmms-eval in README.md

* Update accelerator support in README.md

* Update lmms-eval README with improved description and additional features

* Update README.md with improved task grouping description

* change `Otter-AI/MME` to `lmms-lab/MME`

* Update README.md

* Update README.md

* Remove unused code in mme.yaml

* Squashed commit of the following:

commit 2a45079
Author: Zhang Peiyuan <[email protected]>
Date:   Thu Feb 29 13:40:02 2024 +0800

    Dev/py add models (#57)

    * add instructblip

    * minicpm_v

    * remove <image> from qwen-vl

    * speed up postprocessing

    * Optimize build context speed

    ---------

    Co-authored-by: Pu Fanyi <[email protected]>
    Co-authored-by: kcz358 <[email protected]>

commit 7bdab7a
Author: Pu Fanyi <[email protected]>
Date:   Wed Feb 28 14:49:07 2024 +0800

    Pufanyi/flickr30k refractor (#56)

    * refactor vizwizvqa task

    * Delete vqav2_test and vqav2_val YAML files

    * Refactor vqav2_process_results functions

    * Add a pack for vqav2

    * refactor okvqa

    * roll back vizwiz_vqa

    * Fix exact_match calculation in ok_vqa_process_results

    * Update OKVQA dataset name in readme

    * add model_specific_prompt_kwargs

    * add model_specific_prompt_kwargs to vizwiz_vqa

    * add model_specific_prompt_kwargs for vqav2

    * lint

    * fix a small bug for eval_logger

    * Refactor make_table function to display points as "  -  " if value is None

    * Merge commit '90f42f0876a4914c5ac0d213b9dffbfb4797ff62'

    * Refactor ok_vqa_aggreate_submissions function

    * Merge commit '4afec3303a0a7ed27a8265565343bf2851b9e4c7'

    * Refactor VQA submission file saving

    * Update file utils

    * Merge commit 'c144b75f0c9145a625b2bbdef5123ed81e343a11'

    * Refactor file path handling and submission generation

    * OKVQA path

    * vizwizvqa file

    * pack cmmmu

    * fix a small metric bug for cmmmu

    * Add higher_is_better flag to submission metric

    * Add CMMMU dataset to README.md

    * Add logging and refactor submission file generation in docvqa utils.py

    * pack docvqa

    * add traceback to print detailed error

    * Refactor docvqa_test_aggregate_results to accept additional arguments

    * Add metric check in evaluator.py and update test.yaml and val.yaml

    * add common `EvalAIAnswerProcessor` for okvqa, textvqa, vizwizvqa and vqav2

    * merge textvqa

    * textvqa

    * Modify submission file generation for COCO test results

    * Update test result storage path

    * update coco cap file name

    * Update COCO 2017 Caption dataset name

    * ferret

    * Add Ferret dataset

    * Refactor hb_doc_to_text function to include model-specific prompts

    * Add IconQA and its subtasks

    * Refactor image list creation in doc_to_visual function

    * Add process_results function to default template

    * Update process_results function in iconqa utils.py

    * refactor flickr30k

    * change aggregation function

    * Fix formatting issues and update logging message

    * Fix llava cannot handle text-only questions (no visuals)

    * Fix qwen cannot handle questions with no image (no visuals)

    * Add fuyu prepare accelerator scripts

    * refactor mme

    * naming consistency

    * aggregation_submissions consistency

    * flickr30k naming consistency

    * remove submissions for mme

    * remove unused submission function

    * Refactor infovqa_test.yaml and infovqa_val.yaml

    * Refactor code for improved readability and maintainability

    * stvqa

    * rename sqa

    * Update lmms_eval textcaps files and utils.py

    * Update default prompt for text captions

    * Refactor textcaps_aggregation_result function

    * Add generate_submission_file function and update mathvista_aggregate_results signature

    * Update nocaps_test.yaml and nocaps_val.yaml

    * refactor internal_eval

    * Add internal evaluation datasets

    * pack multidocvqa

    * mmvet

    * Fix gpt eval timeout issue for hallubench, restore load from gpt to avoid re-evaluating

    * Refactor llava wild

    * Refactor llava-bench-coco

    * Add JSON file generation for gpt evaluation details

    * mmmu

    * Remove MMBench English and Chinese tasks

    * Remove unnecessary return statement in mmbench_aggregate_test_results function

    * Fix distributed process group initialization

    * Update dataset paths and group names in mmbench test configs

    * Update import statements in cc_utils.py, cn_utils.py, and en_utils.py

    * Add torch module import

    * lint

    * Remove IconQA dataset from README.md

    * Add Multi-DocVQA and its submodules

    * Add new datasets and update task names

    * Refactor flickr_aggregation_result function to accept additional arguments

    * Add timeout kwargs in Accelerator constructor (a sketch follows this commit entry)

    * Add encoding to be utf-8 for cmmmu

    * Fix llava try and catch, remove torch.distributed.init in main

    * Ds prepare script for llava

    ---------

    Co-authored-by: JvThunder <[email protected]>
    Co-authored-by: kcz358 <[email protected]>
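
The "Add timeout kwargs in Accelerator constructor" item above refers to Hugging Face Accelerate's kwargs handlers; a minimal sketch of the idea, assuming the default process-group timeout was simply raised (the concrete value is illustrative, not taken from the commit):

    # Sketch only: extend the distributed process-group timeout so that long
    # GPT-judged evaluations do not trip the default NCCL watchdog.
    from datetime import timedelta

    from accelerate import Accelerator
    from accelerate.utils import InitProcessGroupKwargs

    init_kwargs = InitProcessGroupKwargs(timeout=timedelta(weeks=52))  # illustrative timeout
    accelerator = Accelerator(kwargs_handlers=[init_kwargs])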

commit d3dfd94
Author: Li Bo <[email protected]>
Date:   Tue Feb 27 22:52:07 2024 +0800

    [Wandb Logger] add models, and args to wandb tables. (#55)

    * Refactor logging in lmms_eval package

    * Refactor variable names in lmms_eval package

* add llava main in pyproject

* Update README.md

* Remove unnecessary dependencies and add specific version for llava_repr

* Add dependencies for llava_repr***

* Update README.md

* add some docs on models and command line commands

* remove some lines

* typo

* Update model_guide.md

* Update model_guide.md

* Update README.md

* Update README.md

* Update README.md

* Fix refcocog dataset path

* Record gpt response in eval info

* Resolve conflict

* Fix hallusionbench gpt json saving path

* Rename hallubench gpt output path

* Change remove image to check by type instead of check by names

* More robust check by type

* Add timeout to API requests

* Remove unnecessary img in data

* Forcing an empty commit.

* Testing

* Delete unnecessary things

* Fix error logging in get_chat_response function

* Fix seedbench2 image issue in doc_to_text

* Add conditional exclude for internal eval

* Squashed commit of the following:

commit 2fbeafc882c80242a10381abc67629d5d8b7071a
Author: kcz358 <[email protected]>
Date:   Sat Mar 2 03:49:36 2024 +0000

    Add conditional exclude for internal eval

commit f188052450bed2f3a30ab6f9a6f7eb844a64cb33
Merge: a3cae8e ffb9eb2
Author: kcz358 <[email protected]>
Date:   Sat Mar 2 03:24:29 2024 +0000

    Merge branch 'dev/readme' into kc/final_fix

commit baef5905505892593fe783beb18a2de20991d6af
Author: kcz358 <[email protected]>
Date:   Sat Mar 2 02:47:31 2024 +0000

    Fix seedbench2 image issue in doc_to_text

commit 11b46f3b701b79b361dd5175a263e4d89bd07fb5
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 15:32:49 2024 +0000

    Delete unnecessary things

commit 0982de2e7a2310429e51ec7828886fd49953f716
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 15:31:42 2024 +0000

    Testing

commit f840ed80f4ae467fff62b61844854a3a9e8ec8a5
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 15:29:30 2024 +0000

    Forcing an empty commit.

commit 80db78f600d07011188983637c94da84b9475fbf
Merge: 786f2b5 1700786
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 15:24:56 2024 +0000

    Merge branch 'kc/final_fix' into dev/readme

commit 676229de870b8d465cef08867cd272a4b696e630
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 15:24:20 2024 +0000

    Remove unnecessary img in data

commit d293b96fb3537fea85f10f216d762abf35e05e8d
Merge: 4240785 888c1c1
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 13:41:24 2024 +0000

    Merge branch 'kc/final_fix' into dev/readme

commit 01bbd010590d6b7f105525580209191a1d6d5232
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 13:40:51 2024 +0000

    More robust check by type

commit 66595ebc073ff9431f2400006196c0645be58ea4
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 13:00:57 2024 +0000

    Change remove image to check by type instead of check by names

commit 08c2ebad1532fd6c34ac04efb94a268db9862d4f
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 12:33:02 2024 +0000

    Rename hallubench gpt output path

commit aefbd3c6856584135e2dcbe13381db0e0780f063
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 09:32:52 2024 +0000

    Fix hallusionbench gpt json saving path

commit b9aebc3ff3b122d6d4a81bd2f28e86b2c390c505
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 08:51:13 2024 +0000

    Resolve conflict

commit c9daa91f2576de69af73c80e263afb085ecd8288
Merge: 9cf86fa 93534dc
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 08:37:21 2024 +0000

    Merge branch 'kc/final_fix' into dev/readme

commit b1c4c88b9b36e02e9ed738ff9217d98a5ef2117b
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 07:55:03 2024 +0000

    Record gpt response in eval info

commit b35bc4a6c8fd6b4b2a68bb3054878807b8b92281
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 07:49:01 2024 +0000

    Fix refcocog dataset path

commit 2a45079
Author: Zhang Peiyuan <[email protected]>
Date:   Thu Feb 29 13:40:02 2024 +0800

    Dev/py add models (#57)

commit 7bdab7a
Author: Pu Fanyi <[email protected]>
Date:   Wed Feb 28 14:49:07 2024 +0800

    Pufanyi/flickr30k refractor (#56)

commit d3dfd94
Author: Li Bo <[email protected]>
Date:   Tue Feb 27 22:52:07 2024 +0800

    [Wandb Logger] add models, and args to wandb tables. (#55)

* Fix small bugs in list_with_num

* Revise list_with_num model args

* Dev/readme rm rolling (#60)

* remove loglikelihood_rolling

* Update time efficiency benchmark in README.md

* add task guide

---------

Co-authored-by: jzhang38 <[email protected]>
Co-authored-by: kcz358 <[email protected]>

* Remove unnecessary code and update dependencies

* Fix logging utils bug on wandb grouping

* Add reproduce envs

* Squashed commit of the following:

commit 556b12620379d79c9ed5ddba0856063b498f917c
Merge: 2475639 f89a736
Author: kcz358 <[email protected]>
Date:   Sun Mar 3 22:12:12 2024 +0800

    Merge branch 'main' into kc/final_fix

commit 9509a782c9e9824273cefb1dc9671c92b887697d
Author: kcz358 <[email protected]>
Date:   Sun Mar 3 22:11:04 2024 +0800

    Add reproduce envs

commit 0bff98b
Author: kcz358 <[email protected]>
Date:   Sun Mar 3 21:19:15 2024 +0800

    [Fix] wandb group logging missing columns (#61)

    Co-authored-by: Bo Li <[email protected]>
    Co-authored-by: Fanyi Pu <[email protected]>
    Co-authored-by: jzhang38 <[email protected]>

commit 7c4501a32bbb415ba7e62e93194b37ba9a435cf5
Merge: 83358a4 5e1c9c7
Author: kcz358 <[email protected]>
Date:   Sun Mar 3 07:25:48 2024 +0000

    Merge branch 'main' into kc/final_fix

commit 5c419f9fa23616a63a0bd584f18e509bb7704b50
Author: kcz358 <[email protected]>
Date:   Sun Mar 3 07:23:19 2024 +0000

    Fix logging utils bug on wandb grouping

commit 0010d0a
Author: kcz358 <[email protected]>
Date:   Sun Mar 3 13:01:11 2024 +0800

    [Fix] refcocog dataset path, record gpt prompt in internal eval, build context issue (#59)

    Co-authored-by: Bo Li <[email protected]>
    Co-authored-by: Fanyi Pu <[email protected]>
    Co-authored-by: jzhang38 <[email protected]>

commit b2ca65d1f12d84ae7a37ecc81f760901389a1af0
Author: kcz358 <[email protected]>
Date:   Sat Mar 2 05:58:08 2024 +0000

    Revise list_with_num model args

commit a262ea1720b2c02839d21dad2a7618bc80725f18
Author: kcz358 <[email protected]>
Date:   Sat Mar 2 05:09:15 2024 +0000

    Fix small bugs in list_with_num

commit 2a45079
Author: Zhang Peiyuan <[email protected]>
Date:   Thu Feb 29 13:40:02 2024 +0800

    Dev/py add models (#57)

commit 7bdab7a
Author: Pu Fanyi <[email protected]>
Date:   Wed Feb 28 14:49:07 2024 +0800

    Pufanyi/flickr30k refractor (#56)

commit d3dfd94
Author: Li Bo <[email protected]>
Date:   Tue Feb 27 22:52:07 2024 +0800

    [Wandb Logger] add models, and args to wandb tables. (#55)

* Update commands.md

* Add repr_scripts for reference

* Add timeout for gpt4V (see the request-timeout sketch at the end of these notes)

* Remove unnecessary dependencies

* Add reproduce into readme

* Revise seedbench process_result

* Fix hardcoded exclude-dc postprocess logic error

* Fix metric repeat issue

* Update dataset runtime and add environment info

* Revise val submission file saving path

* Put the correct query into the gpt extraction

* Update sleep time in utils.py

* update

---------

Co-authored-by: Fanyi Pu <[email protected]>
Co-authored-by: kcz358 <[email protected]>
Co-authored-by: jzhang38 <[email protected]>
Co-authored-by: kcz358 <[email protected]>
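
The "Add timeout to API requests" and "Add timeout for gpt4V" items above concern the GPT-based judging calls; a minimal sketch of the pattern, assuming a plain requests call to an OpenAI-style endpoint (the endpoint, retry count, and the signature of get_chat_response are illustrative, not the repository's exact code):

    import time
    import requests

    API_URL = "https://api.openai.com/v1/chat/completions"  # illustrative endpoint

    def get_chat_response(payload, headers, retries=3, timeout=60):
        # Bound every request and retry on failure instead of letting one
        # hung call stall the whole evaluation run.
        for attempt in range(retries):
            try:
                resp = requests.post(API_URL, headers=headers, json=payload, timeout=timeout)
                resp.raise_for_status()
                return resp.json()["choices"][0]["message"]["content"]
            except Exception as err:
                print(f"Attempt {attempt + 1} failed: {err}")
                time.sleep(5)  # back off before retrying
        return ""
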
5 people authored Mar 7, 2024
1 parent 9415396 commit 704dd69