Harnessing the Power of MLLMs for Transferable Text-to-Image Person ReID (CVPR 2024)

Requirements

pytorch 1.9.0
torchvision 0.10.0
prettytable
easydict
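
To set up an environment, the dependencies above can be installed with pip, for example (version pins taken from the list above; pick the torch build that matches your CUDA version):

  pip install torch==1.9.0 torchvision==0.10.0 prettytable easydict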

1. Construct LUPerson-MLLM

  • Download the LUPerson images from here.
  • Use MLLMs to annotate the LUPerson images, taking Qwen as an example. The image-captioning code is provided in the captions folder, which contains 46 templates along with the static and dynamic instructions (see the sketch after this list). You can also download all the descriptions of the final LUPerson-MLLM from here.
  • Place the generated descriptions in the captions folder.
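
As a rough illustration of the annotation step above, here is a minimal captioning sketch. It assumes the Qwen-VL-Chat checkpoint and its Hugging Face transformers chat interface; the instruction string, image path, and output file are placeholders, and the actual 46 templates with static/dynamic instructions are the ones shipped in the captions folder.

  import os
  import json

  from transformers import AutoModelForCausalLM, AutoTokenizer

  # Load Qwen-VL-Chat (any captioning-capable MLLM can be used in the same way).
  tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
  model = AutoModelForCausalLM.from_pretrained(
      "Qwen/Qwen-VL-Chat", device_map="cuda", trust_remote_code=True).eval()

  # Placeholder instruction; the real templates and instructions live in captions/.
  instruction = "Describe the appearance and clothing of the person in the image."

  image_dir = "LUPerson/images"  # placeholder path to the downloaded LUPerson images
  captions = {}
  for name in sorted(os.listdir(image_dir)):
      query = tokenizer.from_list_format([
          {"image": os.path.join(image_dir, name)},
          {"text": instruction},
      ])
      response, _ = model.chat(tokenizer, query=query, history=None)
      captions[name] = response

  # Placeholder output file; match whatever caption format the pretraining code expects.
  with open("captions/qwen_captions.json", "w") as f:
      json.dump(captions, f, ensure_ascii=False, indent=2)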

2. Prepare Downstream Datasets

Download the CUHK-PEDES dataset from here, the ICFG-PEDES dataset from here, and the RSTPReid dataset from here.
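
The expected directory layout is not spelled out here; since this repo borrows from IRRA, an IRRA-style layout like the one below is a reasonable starting point (the annotation file names are assumptions; check the dataset loaders for the paths they actually read):

  data/
  ├── CUHK-PEDES/
  │   ├── imgs/
  │   └── reid_raw.json
  ├── ICFG-PEDES/
  │   ├── imgs/
  │   └── ICFG-PEDES.json
  └── RSTPReid/
      ├── imgs/
      └── data_captions.json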

3. Pretrain Model (direct transfer setting)

To pretrain the model, simply run sh run.sh. Once training completes, the script reports the performance under the direct-transfer setting.

4. Fine-tune the Pretrained Model on Downstream Datasets (fine-tune setting)

We release the pretrained model checkpoints here.
To fine-tune the model, simply run sh finetune.sh --finetune checkpoint.pth. Once training completes, the script reports the performance under the fine-tune setting.
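
If loading the downloaded weights fails, a quick sanity check like the one below can confirm the checkpoint file is intact. This assumes checkpoint.pth is an ordinary torch.save file; the "model" key is only an assumption about how the state dict might be nested.

  import torch

  # Load on CPU so this works on a machine without a GPU.
  ckpt = torch.load("checkpoint.pth", map_location="cpu")

  # Checkpoints are often either a bare state dict or a dict nesting one under "model".
  state_dict = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt
  print(f"{len(state_dict)} entries")
  for key in list(state_dict)[:5]:
      value = state_dict[key]
      print(key, tuple(value.shape) if hasattr(value, "shape") else type(value))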

Acknowledgments

This repo borrows partially from IRRA.

Citation

@inproceedings{tan2024harnessing,
  title={Harnessing the Power of MLLMs for Transferable Text-to-Image Person ReID},
  author={Tan, Wentao and Ding, Changxing and Jiang, Jiayu and Wang, Fei and Zhan, Yibing and Tao, Dapeng},
  booktitle={CVPR},
  year={2024},
}

Contact

Email: [email protected] or [email protected]

If possible, I would of course prefer to be contacted in Chinese!
