
Meta R-CNN : Towards General Solver for Instance-level Low-shot Learning.

Code for reproducing the results in the following paper. The code is built on top of jwyang/faster-rcnn.pytorch.

Meta R-CNN : Towards General Solver for Instance-level Low-shot Learning

Xiaopeng Yan*, Ziliang Chen*, Anni Xu, Xiaoxi Wang, Xiaodan Liang, Liang Lin

Sun Yat-Sen University. Presented at the IEEE International Conference on Computer Vision (ICCV 2019).

License

For Academic Research Use Only!

Requirements

  • python packages

    • PyTorch = 0.3.1

      This project does not support PyTorch 0.4; higher versions will not reproduce the results (see the install sketch after this list).

    • Torchvision >= 0.2.0

    • cython

    • pyyaml

    • easydict

    • opencv-python

    • matplotlib

    • numpy

    • scipy

    • tensorboardX

      You can install the above packages using pip:

      pip install Cython easydict matplotlib opencv-python pyyaml scipy
  • CUDA 8.0

  • gcc >= 4.9
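
Since the code requires the old PyTorch 0.3.1, a minimal install sketch might look like this (assuming CUDA 8.0 and Python 3.6 on Linux; pick the wheel that matches your Python version from the PyTorch previous-versions page):

pip install http://download.pytorch.org/whl/cu80/torch-0.3.1-cp36-cp36m-linux_x86_64.whl
pip install torchvision==0.2.0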

Misc

Tested on Ubuntu 14.04 with a Titan X GPU (12G) and Intel(R) Xeon(R) CPU E5-2623 v3 @ 3.00GHz.

Getting Started

Clone the repo:

git clone https://github.com/yanxp/MetaR-CNN.git

Compilation

Compile the CUDA dependencies:

cd {repo_root}/lib
sh make.sh

It will compile all the modules you need, including NMS, ROI_Pooling, ROI_Crop and ROI_Align.
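
If compilation fails for your GPU, check the CUDA settings near the top of lib/make.sh. In the upstream jwyang/faster-rcnn.pytorch the script defines variables roughly like the ones below (shown as an illustration of that repo's convention, not verified against this one); adjust them to your CUDA installation and GPU architecture:

CUDA_PATH=/usr/local/cuda/
CUDA_ARCH="-gencode arch=compute_52,code=sm_52"

For example, sm_52 matches a Maxwell Titan X and sm_61 a Pascal Titan X.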

Data Preparation

Create a data folder under the repo,

cd {repo_root}
mkdir data

PASCAL_VOC 07+12: Please follow the instructions in py-faster-rcnn to prepare the VOC datasets; any other Faster R-CNN data preparation guide works as well. After downloading the data, create softlinks in the folder data/.
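
For example, assuming the VOC2007 devkit was extracted to /path/to/VOCdevkit and the VOC2012 devkit to /path/to/VOCdevkit2012 (both placeholders), the softlinks could follow the upstream faster-rcnn.pytorch layout:

cd {repo_root}/data
ln -s /path/to/VOCdevkit VOCdevkit2007
ln -s /path/to/VOCdevkit2012 VOCdevkit2012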

Please download the three base-class splits and put them into the ImageSets/Main directories of VOC2007 and VOC2012.

Training

We used a ResNet-101 model pretrained on ImageNet in our experiments. Download it and put it into data/pretrained_model/.
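
A possible layout, assuming the file name used by the upstream faster-rcnn.pytorch repo (resnet101_caffe.pth; the exact name this code expects may differ):

cd {repo_root}
mkdir -p data/pretrained_model
mv /path/to/resnet101_caffe.pth data/pretrained_model/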

For example, if you want to train on the first split of base and novel classes with meta-learning, run:

The first phase:

$>CUDA_VISIBLE_DEVICES=0 python train_metarcnn.py --dataset pascal_voc_0712 --epochs 21 --bs 4 --nw 8 --log_dir checkpoint --save_dir models/meta/first --meta_type 1 --meta_train True --meta_loss True 

The second phase:

$>CUDA_VISIBLE_DEVICES=0 python train_metarcnn.py --dataset pascal_voc_0712 --epochs 30 --bs 4 --nw 8 --log_dir checkpoint --save_dir models/meta/first --r True --checksession 1 --checkepoch 20 --checkpoint 3081 --phase 2 --shots 10 --meta_train True --meta_loss True --meta_type 1
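
Here --r True resumes from the first-phase model selected by --checksession, --checkepoch and --checkpoint. Assuming the checkpoint naming convention of the upstream faster-rcnn.pytorch (not verified against this repo), these values would correspond to a first-phase file such as models/meta/first/faster_rcnn_1_20_3081.pth.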

Testing

If you want to evaluate the performance of the meta-trained model, simply run:

$>CUDA_VISIBLE_DEVICES=0 python test_metarcnn.py --dataset pascal_voc_0712 --net metarcnn --load_dir models/meta/first --checksession 10 --checkepoch 30 --checkpoint 111 --shots 10 --meta_type 1 --meta_test True --meta_loss True

We provide some of the models trained with and without meta-training: Meta Models and WoMeta Models.

Citation

@inproceedings{yanICCV19metarcnn,
    Author = {Yan, Xiaopeng and Chen, Ziliang and Xu, Anni and Wang, Xiaoxi and Liang, Xiaodan and Lin, Liang},
    Title = {Meta R-CNN : Towards General Solver for Instance-level Low-shot Learning.},
    Booktitle = {Proc. of IEEE International Conference on Computer Vision ({ICCV})},
    Year = {2019}
}

Contact

If you have any questions about this repo, please feel free to contact [email protected].
