[README] near public (EvolvingLMMs-Lab#63)

* Refactor logging in lmms_eval package

* Refactor variable names in lmms_eval package

* Update README.md with new features and installation instructions

* Update supported models and datasets

* Delete otter.py file

* Fix capitalization in README.md

* Update image sizes and add new features

* Refactor README.md to improve readability and add new features

* Add description for lmms-eval in README.md

* Update accelerator support in README.md

* Update lmms-eval README with improved description and additional features

* Update README.md with improved task grouping description

* change `Otter-AI/MME` to `lmms-lab/MME`

* Update README.md

* Update README.md

* Remove unused code in mme.yaml

* Squashed commit of the following:

commit 9c0bc58
Author: Zhang Peiyuan <[email protected]>
Date:   Thu Feb 29 13:40:02 2024 +0800

    Dev/py add models (EvolvingLMMs-Lab#57)

    * add instructblip

    * minicpm_v

    * remove <image> from qwen-vl

    * speed up postprocessing

    * Optimize build context speed

    ---------

    Co-authored-by: Pu Fanyi <[email protected]>
    Co-authored-by: kcz358 <[email protected]>
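
The "remove `<image>` from qwen-vl" bullet above concerns stripping the literal image placeholder from text prompts before tokenization, since Qwen-VL receives images through a separate input channel. A minimal sketch of the idea, using a hypothetical `strip_image_placeholder` helper (not the actual lmms-eval code):

```python
def strip_image_placeholder(prompt: str, placeholder: str = "<image>") -> str:
    """Remove a literal image placeholder token and tidy leftover whitespace.

    Models like Qwen-VL take images as a separate argument, so a stray
    "<image>" left in the text would otherwise be tokenized as plain text.
    """
    cleaned = prompt.replace(placeholder, "")
    # Collapse the whitespace the placeholder leaves behind.
    return " ".join(cleaned.split())
```
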

commit 30ab0ce
Author: Pu Fanyi <[email protected]>
Date:   Wed Feb 28 14:49:07 2024 +0800

    Pufanyi/flickr30k refractor (EvolvingLMMs-Lab#56)

    * refactor vizwizvqa task

    * Delete vqav2_test and vqav2_val YAML files

    * Refactor vqav2_process_results functions

    * Add a pack for vqav2

    * refactor okvqa

    * roll back vizwiz_vqa

    * Fix exact_match calculation in ok_vqa_process_results

    * Update OKVQA dataset name in readme

    * add model_specific_prompt_kwargs

    * add model_specific_prompt_kwargs to vizwiz_vqa

    * add model_specific_prompt_kwargs for vqav2

    * lint

    * fix a small bug for eval_logger

    * Refactor make_table function to display points as "  -  " if value is None

    * Merge commit '5e73e8b8a2408bd8193361788669ca80db19cb04'

    * Refactor ok_vqa_aggreate_submissions function

    * Merge commit '40099e8b8145bde513b9b7cef8461d8f13d1eafe'

    * Refactor VQA submission file saving

    * Update file utils

    * Merge commit 'a56fe11c00ad4a8b8967be88b93baef6649528c5'

    * Refactor file path handling and submission generation

    * OKVQA path

    * vizwizvqa file

    * pack cmmmu

    * fix a small metric bug for cmmmu

    * Add higher_is_better flag to submission metric

    * Add CMMMU dataset to README.md

    * Add logging and refactor submission file generation in docvqa utils.py

    * pack docvqa

    * add traceback to print detailed error

    * Refactor docvqa_test_aggregate_results to accept additional arguments

    * Add metric check in evaluator.py and update test.yaml and val.yaml

    * add common `EvalAIAnswerProcessor` for okvqa, textvqa, vizwizvqa and vqav2

    * merge textvqa

    * textvqa

    * Modify submission file generation for COCO test results

    * Update test result storage path

    * update coco cap file name

    * Update COCO 2017 Caption dataset name

    * ferret

    * Add Ferret dataset

    * Refactor hb_doc_to_text function to include model-specific prompts

    * Add IconQA and its subtasks

    * Refactor image list creation in doc_to_visual function

    * Add process_results function to default template

    * Update process_results function in iconqa utils.py

    * refactor flickr30k

    * change aggregation function

    * Fix formatting issues and update logging message

    * Fix llava cannot handle text-only questions (no visuals)

    * Fix qwen cannot handle questions with no image (no visuals)

    * Add fuyu prepare accelerator scripts

    * refactor mme

    * naming consistency

    * aggregation_submissions consistency

    * flickr30k naming consistency

    * remove submissions for mme

    * remove unused submission function

    * Refactor infovqa_test.yaml and infovqa_val.yaml

    * Refactor code for improved readability and maintainability

    * stvqa

    * rename sqa

    * Update lmms_eval textcaps files and utils.py

    * Update default prompt for text captions

    * Refactor textcaps_aggregation_result function

    * Add generate_submission_file function and update mathvista_aggregate_results signature

    * Update nocaps_test.yaml and nocaps_val.yaml

    * refactor internal_eval

    * Add internal evaluation datasets

    * pack multidocvqa

    * mmvet

    * Fix gpt eval timeout issue for hallubench; restore loading from gpt to avoid re-evaluating

    * Refactor llava wild

    * Refactor llava-bench-coco

    * Add JSON file generation for gpt evaluation details

    * mmmu

    * Remove MMBench English and Chinese tasks

    * Remove unnecessary return statement in mmbench_aggregate_test_results function

    * Fix distributed process group initialization

    * Update dataset paths and group names in mmbench test configs

    * Update import statements in cc_utils.py, cn_utils.py, and en_utils.py

    * Add torch module import

    * lint

    * Remove IconQA dataset from README.md

    * Add Multi-DocVQA and its submodules

    * Add new datasets and update task names

    * Refactor flickr_aggregation_result function to accept additional arguments

    * Add timeout kwargs in Accelerator constructor

    * Add encoding to be utf-8 for cmmmu

    * Fix llava try and catch, remove torch.distributed.init in main

    * Ds prepare script for llava

    ---------

    Co-authored-by: JvThunder <[email protected]>
    Co-authored-by: kcz358 <[email protected]>
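
Among the bullets above, the `make_table` change displays missing metric values as "  -  " rather than "None". A rough sketch of that rendering behavior, with hypothetical helper names (`format_cell`, `make_row`) that are not the real lmms-eval functions:

```python
def format_cell(value):
    """Render one metric cell; a missing value becomes the '  -  ' placeholder."""
    if value is None:
        return "  -  "
    if isinstance(value, float):
        return f"{value:.4f}"
    return str(value)

def make_row(cells):
    """Join formatted cells into a pipe-delimited table row."""
    return "|" + "|".join(format_cell(c) for c in cells) + "|"
```
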

commit a5b07ee
Author: Li Bo <[email protected]>
Date:   Tue Feb 27 22:52:07 2024 +0800

    [Wandb Logger] add models, and args to wandb tables. (EvolvingLMMs-Lab#55)

    * Refactor logging in lmms_eval package

    * Refactor variable names in lmms_eval package

* add llava main in pyproject

* Update README.md

* Remove unnecessary dependencies and add specific version for llava_repr

* Add dependencies for llava_repr***

* Update README.md

* add some docs on models and command line commands

* remove some lines

* typo

* Update model_guide.md

* Update model_guide.md

* Update README.md

* Update README.md

* Update README.md

* Fix refcocog dataset path

* Record gpt response in eval info

* Resolve conflict

* Fix hallusionbench gpt json saving path

* Rename hallubench gpt output path

* Change remove image to check by type instead of check by names

* More robust check by type

* Add timeout to API requests

* Remove unnecessary img in data

* Forcing an empty commit.

* Testing

* Delete unnecessary things

* Fix error logging in get_chat_response function

* Fix seedbench2 image issue in doc_to_text

* Add conditional exclude for internal eval
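
The "Add timeout to API requests" and "Fix error logging in get_chat_response function" bullets describe hardening the GPT-based judge calls. A sketch of that retry-with-timeout pattern, assuming a hypothetical wrapper (the real `get_chat_response` lives in the task utils and differs in detail):

```python
import time

def get_chat_response_with_retry(call, retries=3, timeout=30, backoff=2.0):
    """Hypothetical wrapper: pass `timeout` through to the API call and
    retry on failure instead of crashing the whole evaluation run."""
    last_err = None
    for attempt in range(retries):
        try:
            return call(timeout=timeout)
        except Exception as err:  # in lmms-eval this would go to eval_logger
            last_err = err
            time.sleep(backoff * (attempt + 1))
    raise last_err
```
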

* Squashed commit of the following:

commit 1cf38b3ad6c7799957901d836299243cc21718f5
Author: kcz358 <[email protected]>
Date:   Sat Mar 2 03:49:36 2024 +0000

    Add conditional exclude for internal eval

commit 62527c874431508b7731ad49ff1f1526104703cd
Merge: a3cae8e ffb9eb2
Author: kcz358 <[email protected]>
Date:   Sat Mar 2 03:24:29 2024 +0000

    Merge branch 'dev/readme' into kc/final_fix

commit 522f36aca8354f5efa7fff6d23bd90e885bcf1ab
Author: kcz358 <[email protected]>
Date:   Sat Mar 2 02:47:31 2024 +0000

    Fix seedbench2 image issue in doc_to_text

commit 4ee323a5b19382dbd9ba62f5002042d0746c374e
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 15:32:49 2024 +0000

    Delete unnecessary things

commit 3d3e164489cb4bd2db342ae085da9613ee7de660
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 15:31:42 2024 +0000

    Testing

commit 8a4f586d7232a4d89977cef140900728d4517b72
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 15:29:30 2024 +0000

    Forcing an empty commit.

commit 33dd5b0e0006882e735b7ea1908fdb6ad37c825a
Merge: 786f2b5 1700786
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 15:24:56 2024 +0000

    Merge branch 'kc/final_fix' into dev/readme

commit f19de3e7aaf5151d5ce9c63a2b9ee393c6282dfa
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 15:24:20 2024 +0000

    Remove unnecessary img in data

commit e1f8cad15ddc2e385a3f2a778a4af57e1072987c
Merge: 4240785 888c1c1
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 13:41:24 2024 +0000

    Merge branch 'kc/final_fix' into dev/readme

commit 472b6b1ed2d5bc10ff1d6df8e435f33dc821ad4b
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 13:40:51 2024 +0000

    More robust check by type

commit 367c021bd50068baf024bea3afde4ed58aa38b81
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 13:00:57 2024 +0000

    Change remove image to check by type instead of check by names

commit 0a466e16d983392cbf0580733500c0890521df93
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 12:33:02 2024 +0000

    Rename hallubench gpt output path

commit 6feceda2c1d631243c78fd7805dcdde4d0e8912f
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 09:32:52 2024 +0000

    Fix hallusionbench gpt json saving path

commit db1f731ee5aff4618edefed018e982f83add0c9a
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 08:51:13 2024 +0000

    Resolve conflict

commit c8a5e1129310ed1ce1fd86f43bb49da701140383
Merge: 9cf86fa 93534dc
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 08:37:21 2024 +0000

    Merge branch 'kc/final_fix' into dev/readme

commit de53ceaeff08dc7c01962c704e06d7b87f804ec7
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 07:55:03 2024 +0000

    Record gpt response in eval info

commit e372631e911f2e03cc4f579e291e1198c4c11298
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 07:49:01 2024 +0000

    Fix refcocog dataset path

* Fix small bugs in list_with_num

* Revise list_with_num model args

* Dev/readme rm rolling (EvolvingLMMs-Lab#60)

* remove log_likelyhood_rolling

* Update time efficiency benchmark in README.md

* add task guide

---------

Co-authored-by: jzhang38 <[email protected]>
Co-authored-by: kcz358 <[email protected]>

* Remove unnecessary code and update dependencies

* Fix logging utils bug on wandb grouping

* Add reproduce envs

* Squashed commit of the following:

commit cf18d7a1300311ffe1c9671fff7fa0c0d1cf2476
Merge: 2475639 f89a736
Author: kcz358 <[email protected]>
Date:   Sun Mar 3 22:12:12 2024 +0800

    Merge branch 'main' into kc/final_fix

commit 35e5a937bcf924d6b787ce37c6da9f0f54674da9
Author: kcz358 <[email protected]>
Date:   Sun Mar 3 22:11:04 2024 +0800

    Add reproduce envs

commit 13179f9
Author: kcz358 <[email protected]>
Date:   Sun Mar 3 21:19:15 2024 +0800

    [Fix] wandb group logging missing columns (EvolvingLMMs-Lab#61)

    * Refactor logging in lmms_eval package

    * Refactor variable names in lmms_eval package

    * Update README.md with new features and installation instructions

    * Update supported models and datasets

    * Delete otter.py file

    * Fix capitalization in README.md

    * Update image sizes and add new features

    * Refactor README.md to improve readability and add new features

    * Add description for lmms-eval in README.md

    * Update accelerator support in README.md

    * Update lmms-eval README with improved description and additional features

    * Update README.md with improved task grouping description

    * change `Otter-AI/MME` to `lmms-lab/MME`

    * Update README.md

    * Update README.md

    * Remove unused code in mme.yaml

    * add llava main in pyproject

    * Update README.md

    * Remove unnecessary dependencies and add specific version for llava_repr

    * Add dependencies for llava_repr***

    * Update README.md

    * add some docs on models and command line commands

    * remove some lines

    * typo

    * Update model_guide.md

    * Update model_guide.md

    * Update README.md

    * Update README.md

    * Update README.md

    * Fix refcocog dataset path

    * Record gpt response in eval info

    * Resolve conflict

    * Fix hallusionbench gpt json saving path

    * Rename hallubench gpt output path

    * Change remove image to check by type instead of check by names

    * More robust check by type

    * Remove unnecessary img in data

    * Forcing an empty commit.

    * Testing

    * Delete unnecessary things

    * Fix seedbench2 image issue in doc_to_text

    * Add conditional exclude for internal eval

    * Fix small bugs in list_with_num

    * Revise list_with_num model args

    * Fix logging utils bug on wandb grouping

    ---------

    Co-authored-by: Bo Li <[email protected]>
    Co-authored-by: Fanyi Pu <[email protected]>
    Co-authored-by: jzhang38 <[email protected]>

commit 39ce670fb1992c5e30d4b0eff9636a88a1ce83f5
Merge: 83358a4 5e1c9c7
Author: kcz358 <[email protected]>
Date:   Sun Mar 3 07:25:48 2024 +0000

    Merge branch 'main' into kc/final_fix

commit 36eeaa08730cd3e6a7e90e7000f61b4ebb075524
Author: kcz358 <[email protected]>
Date:   Sun Mar 3 07:23:19 2024 +0000

    Fix logging utils bug on wandb grouping

commit 9ac7212
Author: kcz358 <[email protected]>
Date:   Sun Mar 3 13:01:11 2024 +0800

    [Fix] refcocog dataset path, record gpt prompt in internal eval, build context issue (EvolvingLMMs-Lab#59)

    * Refactor logging in lmms_eval package

    * Refactor variable names in lmms_eval package

    * Update README.md with new features and installation instructions

    * Update supported models and datasets

    * Delete otter.py file

    * Fix capitalization in README.md

    * Update image sizes and add new features

    * Refactor README.md to improve readability and add new features

    * Add description for lmms-eval in README.md

    * Update accelerator support in README.md

    * Update lmms-eval README with improved description and additional features

    * Update README.md with improved task grouping description

    * change `Otter-AI/MME` to `lmms-lab/MME`

    * Update README.md

    * Update README.md

    * Remove unused code in mme.yaml

    * add llava main in pyproject

    * Update README.md

    * Remove unnecessary dependencies and add specific version for llava_repr

    * Add dependencies for llava_repr***

    * Update README.md

    * add some docs on models and command line commands

    * remove some lines

    * typo

    * Update model_guide.md

    * Update model_guide.md

    * Update README.md

    * Update README.md

    * Update README.md

    * Fix refcocog dataset path

    * Record gpt response in eval info

    * Resolve conflict

    * Fix hallusionbench gpt json saving path

    * Rename hallubench gpt output path

    * Change remove image to check by type instead of check by names

    * More robust check by type

    * Remove unnecessary img in data

    * Forcing an empty commit.

    * Testing

    * Delete unnecessary things

    * Fix seedbench2 image issue in doc_to_text

    * Add conditional exclude for internal eval

    * Fix small bugs in list_with_num

    * Revise list_with_num model args

    ---------

    Co-authored-by: Bo Li <[email protected]>
    Co-authored-by: Fanyi Pu <[email protected]>
    Co-authored-by: jzhang38 <[email protected]>

commit 22fda28d8aa2a53405f15d179ea9baaf53a19b0b
Author: kcz358 <[email protected]>
Date:   Sat Mar 2 05:58:08 2024 +0000

    Revise list_with_num model args

commit 48d92eb823b7929ea4c7b0da9f2284ec194c71cf
Author: kcz358 <[email protected]>
Date:   Sat Mar 2 05:09:15 2024 +0000

    Fix small bugs in list_with_num

Merge: 9cf86fa 93534dc
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 08:37:21 2024 +0000

    Merge branch 'kc/final_fix' into dev/readme

commit de53ceaeff08dc7c01962c704e06d7b87f804ec7
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 07:55:03 2024 +0000

    Record gpt response in eval info

commit e372631e911f2e03cc4f579e291e1198c4c11298
Author: kcz358 <[email protected]>
Date:   Fri Mar 1 07:49:01 2024 +0000

    Fix refcocog dataset path

commit 9c0bc58
Author: Zhang Peiyuan <[email protected]>
Date:   Thu Feb 29 13:40:02 2024 +0800

    Dev/py add models (EvolvingLMMs-Lab#57)

    * add instructblip

    * minicpm_v

    * remove <image> from qwen-vl

    * speed up postprocessing

    * Optimize build context speed

    ---------

    Co-authored-by: Pu Fanyi <[email protected]>
    Co-authored-by: kcz358 <[email protected]>

commit 30ab0ce
Author: Pu Fanyi <[email protected]>
Date:   Wed Feb 28 14:49:07 2024 +0800

    Pufanyi/flickr30k refractor (EvolvingLMMs-Lab#56)

    * refactor vizwizvqa task

    * Delete vqav2_test and vqav2_val YAML files

    * Refactor vqav2_process_results functions

    * Add a pack for vqav2

    * refactor okvqa

    * roll back vizwiz_vqa

    * Fix exact_match calculation in ok_vqa_process_results

    * Update OKVQA dataset name in readme

    * add model_specific_prompt_kwargs

    * add model_specific_prompt_kwargs to vizwiz_vqa

    * add model_specific_prompt_kwargs for vqav2

    * lint

    * fix a small bug for eval_logger

    * Refactor make_table function to display points as "  -  " if value is None

    * Merge commit '5e73e8b8a2408bd8193361788669ca80db19cb04'

    * Refactor ok_vqa_aggreate_submissions function

    * Merge commit '40099e8b8145bde513b9b7cef8461d8f13d1eafe'

    * Refactor VQA submission file saving

    * Update file utils

    * Merge commit 'a56fe11c00ad4a8b8967be88b93baef6649528c5'

    * Refactor file path handling and submission generation

    * OKVQA path

    * vizwizvqa file

    * pack cmmmu

    * fix a small metric bug for cmmmu

    * Add higher_is_better flag to submission metric

    * Add CMMMU dataset to README.md

    * Add logging and refactor submission file generation in docvqa utils.py

    * pack docvqa

    * add traceback to print detailed error

    * Refactor docvqa_test_aggregate_results to accept additional arguments

    * Add metric check in evaluator.py and update test.yaml and val.yaml

    * add common `EvalAIAnswerProcessor` for okvqa, textvqa, vizwizvqa and vqav2

    * merge textvqa

    * textvqa

    * Modify submission file generation for COCO test results

    * Update test result storage path

    * update coco cap file name

    * Update COCO 2017 Caption dataset name

    * ferret

    * Add Ferret dataset

    * Refactor hb_doc_to_text function to include model-specific prompts

    * Add IconQA and its subtasks

    * Refactor image list creation in doc_to_visual function

    * Add process_results function to default template

    * Update process_results function in iconqa utils.py

    * refactor flickr30k

    * change aggregation function

    * Fix formatting issues and update logging message

    * Fix llava can not handle only text question (no visuals)

    * Fix qwen can not handle no image question (no visuals)

    * Add fuyu prepare accelerator scripts

    * refactor mme

    * naming consistency

    * aggregation_submissions consistency

    * flickr30k naming consistency

    * remove submissions for mme

    * remove unused submission function

    * Refactor infovqa_test.yaml and infovqa_val.yaml

    * Refactor code for improved readability and maintainability

    * stvqa

    * remane sqa

    * Update lmms_eval textcaps files and utils.py

    * Update default prompt for text captions

    * Refactor textcaps_aggregation_result function

    * Add generate_submission_file function and update mathvista_aggregate_results signature

    * Update nocaps_test.yaml and nocaps_val.yaml

    * refractor internal_eval

    * Add internal evaluation datasets

    * pack multidocvqa

    * mmvet

    * Fix gpt eval timeout issue for hallubench, restore load from gpt to avoid re evaluating

    * Refractor llava wild

    * Refractor llava-bench-coco

    * Add JSON file generation for gpt evaluation details

    * mmmu

    * Remove MMBench English and Chinese tasks

    * Remove unnecessary return statement in mmbench_aggregate_test_results function

    * Fix distributed process group initialization

    * Update dataset paths and group names in mmbench test configs

    * Update import statements in cc_utils.py, cn_utils.py, and en_utils.py

    * Add torch module import

    * lint

    * Remove IconQA dataset from README.md

    * Add Multi-DocVQA and its submodules

    * Add new datasets and update task names

    * Refactor flickr_aggregation_result function to accept additional arguments

    * Add timeout kwargs in Accelerator constructor

    * Add encoding to be utf-8 for cmmmu

    * Fix llava try and catch, remove torch.distributed.init in main

    * Ds prepare script for llava

    ---------

    Co-authored-by: JvThunder <[email protected]>
    Co-authored-by: kcz358 <[email protected]>

commit a5b07ee
Author: Li Bo <[email protected]>
Date:   Tue Feb 27 22:52:07 2024 +0800

    [Wandb Logger] add models, and args to wandb tables. (EvolvingLMMs-Lab#55)

    * Refactor logging in lmms_eval package

    * Refactor variable names in lmms_eval package

* Update commands.md

* Add repr_scripts for reference

* Add timeout for gpt4V

* Remove unnecessary dependencies

* Add reproduce into readme

* Revise seedbench process_result

* Fix exclude dc hardcode postprocess logic error

* Fix metric repeat issue

* Update dataset runtime and add environment info

* Revise val submission file saving path

* Put the correct query into the gpt extraction

* Update sleep time in utils.py

* update

---------

Co-authored-by: Fanyi Pu <[email protected]>
Co-authored-by: kcz358 <[email protected]>
Co-authored-by: jzhang38 <[email protected]>
Co-authored-by: kcz358 <[email protected]>
5 people authored Mar 7, 2024
1 parent 537b5b0 commit 84260d3
Showing 0 changed files with 0 additions and 0 deletions.
