Person_reID_baseline_pytorch

Baseline code (with bottleneck) for person re-ID in PyTorch. It is consistent with the new baseline results in Beyond Part Models: Person Retrieval with Refined Part Pooling and Camera Style Adaptation for Person Re-identification.

We achieve Rank@1=88.24%, mAP=70.68% with only the softmax loss.

Here we provide the hyperparameters and architecture that were used to generate the result. Some of them (e.g. the learning rate) are far from optimal. Do not hesitate to change them and see the effect.

P.S. With a similar structure, we achieved Rank@1=86.85%, mAP=67.29% with MatConvNet (batchsize=8, dropout=0.75). Different frameworks need to be tuned in different ways.

What's new: we have added code to generate training curves. The figure is saved into the model folder during training.
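
The gist of curve generation is straightforward; a minimal sketch is below (illustration only, not the exact code in train.py; the file name train.jpg and the save directory are assumptions):

```python
import os
import matplotlib
matplotlib.use('Agg')              # render to file, no display needed
import matplotlib.pyplot as plt

def draw_curve(epochs, train_loss, train_err, save_dir='./model/ft_ResNet50'):
    """Plot loss and top-1 error per epoch and save the figure into the model folder."""
    fig, (ax0, ax1) = plt.subplots(1, 2, figsize=(10, 4))
    ax0.plot(epochs, train_loss, 'bo-', label='train loss')
    ax0.set_xlabel('epoch'); ax0.set_ylabel('loss'); ax0.legend()
    ax1.plot(epochs, train_err, 'ro-', label='train top-1 error')
    ax1.set_xlabel('epoch'); ax1.set_ylabel('error'); ax1.legend()
    fig.savefig(os.path.join(save_dir, 'train.jpg'))
    plt.close(fig)
```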

Model Structure

You may learn more from model.py. On top of the ResNet-50 backbone, we add one linear layer (the bottleneck), one batch-norm layer, and a ReLU.
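
A minimal sketch of this head is shown below. It is an illustration only, not the exact model.py: the class and argument names are made up here, and the 512-dim bottleneck / 0.5 dropout defaults are taken from the ablation tables further down.

```python
import torch.nn as nn
from torchvision import models

class BottleneckNet(nn.Module):
    """ResNet-50 backbone + bottleneck head: linear -> batchnorm -> ReLU -> dropout -> classifier."""
    def __init__(self, class_num, bottleneck=512, droprate=0.5):
        super().__init__()
        backbone = models.resnet50(pretrained=True)
        backbone.fc = nn.Sequential()            # strip the ImageNet classifier; keep the 2048-d pooled features
        self.backbone = backbone
        self.classifier = nn.Sequential(
            nn.Linear(2048, bottleneck),         # the bottleneck layer
            nn.BatchNorm1d(bottleneck),
            nn.ReLU(inplace=True),
            nn.Dropout(p=droprate),
            nn.Linear(bottleneck, class_num),    # class scores, trained with the softmax (cross-entropy) loss
        )

    def forward(self, x):
        f = self.backbone(x)                     # N x 2048
        return self.classifier(f)                # N x class_num
```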

Prerequisites

  • Python 3.6
  • GPU Memory >= 6G

Getting started

Installation

git clone https://github.com/pytorch/vision
cd vision
python setup.py install

We install torchvision from source because pytorch and torchvision are rapidly evolving projects.

Note that our code is tested with PyTorch 0.3.0 and Torchvision 0.2.0.

Dataset & Preparation

Download Market1501 Dataset

Preparation: put the images with the same ID into one folder. You may use

python prepare.py

Remember to change the dataset path to your own path.
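
If you want to see what this step does, here is a minimal sketch (illustration only, not the actual prepare.py; the paths and the pytorch/train_all folder name are assumptions). It relies on the Market1501 file-name convention, in which the characters before the first underscore are the person ID:

```python
import os
import shutil

download_path = '/path/to/Market-1501-v15.09.15'    # change to your own path
src = os.path.join(download_path, 'bounding_box_train')
dst = os.path.join(download_path, 'pytorch', 'train_all')

for name in os.listdir(src):
    if not name.endswith('.jpg'):
        continue
    pid = name.split('_')[0]                         # e.g. '0002' from '0002_c1s1_000451_03.jpg'
    id_dir = os.path.join(dst, pid)
    os.makedirs(id_dir, exist_ok=True)
    shutil.copyfile(os.path.join(src, name), os.path.join(id_dir, name))
```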

Furthermore, you can also test our code on the DukeMTMC-reID dataset. Our baseline result is not as high on DukeMTMC-reID (Rank@1=64.23%, mAP=43.92%); the hyperparameters still need to be tuned.

To save the trained model, make a directory first.

mkdir model 

Train

Train a model by

python train.py --gpu_ids 0 --name ft_ResNet50 --train_all --batchsize 32  --data_dir your_data_path

--gpu_ids which GPU to run on.

--name the name of the model.

--data_dir the path of the training data.

--train_all use all images for training.

--batchsize the batch size.

Test

Use the trained model to extract features by

python test.py --gpu_ids 0 --name ft_ResNet50 --test_dir your_data_path  --which_epoch 59

--gpu_ids which GPU to run on.

--name the directory name of the trained model.

--which_epoch select the i-th model.

--test_dir the path of the testing data.
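
The core of this step is just a forward pass with the classifier removed. The sketch below shows the general idea and is an assumption, not the exact test.py: it adds the features of the original and horizontally flipped image and L2-normalizes the result, a common re-ID practice.

```python
import torch
import torch.nn.functional as F

def extract_feature(model, loader, device='cuda'):
    """Return one L2-normalized descriptor per image in the loader.
    `model` is assumed to output features (classifier layer removed)."""
    model.eval()
    feats = []
    with torch.no_grad():
        for imgs, _ in loader:
            imgs = imgs.to(device)
            f = model(imgs) + model(torch.flip(imgs, dims=[3]))  # add the flipped view
            feats.append(F.normalize(f, p=2, dim=1).cpu())
    return torch.cat(feats, dim=0)
```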

Evaluation

python evaluation.py

It will output Rank@1, Rank@5, Rank@10 and mAP results.

For the mAP calculation, you can also refer to the C++ code for the Oxford Buildings dataset. We use the triangle mAP calculation, consistent with the original Market1501 evaluation code.
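
As a reference point, the triangle calculation averages the precision just before and just after each correct match (the trapezoidal rule) instead of using only the precision at the match. A minimal sketch for a single query is below (illustration only, not the exact evaluation.py; junk-image handling is omitted):

```python
def average_precision(ranked_good):
    """Triangle (trapezoidal) AP for one query, in the spirit of the original
    Market1501 code. ranked_good[i] is True if the i-th ranked gallery image
    matches the query."""
    ngood = sum(ranked_good)
    if ngood == 0:
        return 0.0
    ap, hits = 0.0, 0
    old_recall, old_precision = 0.0, 1.0
    for i, good in enumerate(ranked_good):
        if good:
            hits += 1
        recall = hits / ngood
        precision = hits / (i + 1)
        ap += (recall - old_recall) * (old_precision + precision) / 2
        old_recall, old_precision = recall, precision
        if hits == ngood:        # the remaining ranks cannot change the AP
            break
    return ap

# Matches at ranks 1 and 3 out of 2 relevant images:
print(average_precision([True, False, True, False]))  # ~0.79 (plain AP would give ~0.83)
```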

Ablation Study

The model is based on ResNet-50. Input images are resized to 256x128. Here we just show some results.

Note that the results may vary by around 1% between runs (for example, the 50th-epoch model can be better).

| BatchSize | Dropout | Rank@1 | mAP   | Note              |
| --------- | ------- | ------ | ----- | ----------------- |
| 16        | 0.5     | 86.67  | 68.19 |                   |
| 32        | 0.5     | 87.98  | 69.38 |                   |
| 32        | 0.5     | 88.24  | 70.68 | test with 288x144 |
| 32        | 0.5     | 87.14  | 68.90 | 0.1 color jitter  |
| 64        | 0.5     | 86.82  | 67.48 |                   |
| 64        | 0.5     | 85.78  | 65.97 | 0.1 color jitter  |
| 64        | 0.5     | 85.42  | 65.29 | 0.4 color jitter  |
| 64        | 0.75    | 84.86  | 66.06 |                   |
| 96        | 0.5     | 86.05  | 67.03 |                   |
| 96        | 0.75    | 85.66  | 66.44 |                   |

Bottleneck

Tested with 144x288 input; dropout rate is 0.5.

| BatchSize | Bottleneck | Rank@1 | mAP   |
| --------- | ---------- | ------ | ----- |
| 32        | 256        | 87.26  | 69.92 |
| 32        | 512        | 88.24  | 70.68 |
| 32        | 1024       | 84.29  | 64.00 |

Citation

As far as I know, the following papers may be the first two to use the bottleneck baseline. You may cite them in your paper.

@inproceedings{DBLP:journals/corr/SunZDW17,
  author    = {Yifan Sun and
               Liang Zheng and
               Weijian Deng and
               Shengjin Wang},
  title     = {SVDNet for Pedestrian Retrieval},
  booktitle = {ICCV},
  year      = {2017},
}

@article{hermans2017defense,
  title={In Defense of the Triplet Loss for Person Re-Identification},
  author={Hermans, Alexander and Beyer, Lucas and Leibe, Bastian},
  journal={arXiv preprint arXiv:1703.07737},
  year={2017}
}
