ResMLP: Feedforward networks for image classification with data-efficient training, [arXiv](https://arxiv.org/abs/2105.03404)
PaddlePaddle training/validation code and pretrained models for ResMLP.
The official and third-party PyTorch implementations are here and here.
This implementation is developed by PPViT.
- Update (2021-09-27): Model FLOPs and # params are uploaded.
- Update (2021-09-24): Update new ResMLP weights.
- Update (2021-09-23): Add new ResMLP weights.
- Update (2021-08-11): Code is released and ported weights are uploaded.
The ported models and their ImageNet2012 validation results are listed below:
Model | Acc@1 | Acc@5 | #Params | FLOPs | Image Size | Crop_pct | Interpolation | Link |
---|---|---|---|---|---|---|---|---|
resmlp_24_224 | 79.38 | 94.55 | 30.0M | 6.0G | 224 | 0.875 | bicubic | google/baidu(jdcx) |
resmlp_36_224 | 79.77 | 94.89 | 44.7M | 9.0G | 224 | 0.875 | bicubic | google/baidu(33w3) |
resmlp_big_24_224 | 81.04 | 95.02 | 129.1M | 100.7G | 224 | 0.875 | bicubic | google/baidu(r9kb) |
resmlp_12_distilled_224 | 77.95 | 93.56 | 15.3M | 3.0G | 224 | 0.875 | bicubic | google/baidu(ghyp) |
resmlp_24_distilled_224 | 80.76 | 95.22 | 30.0M | 6.0G | 224 | 0.875 | bicubic | google/baidu(sxnx) |
resmlp_36_distilled_224 | 81.15 | 95.48 | 44.7M | 9.0G | 224 | 0.875 | bicubic | google/baidu(vt85) |
resmlp_big_24_distilled_224 | 83.59 | 96.65 | 129.1M | 100.7G | 224 | 0.875 | bicubic | google/baidu(4jk5) |
resmlp_big_24_22k_224 | 84.40 | 97.11 | 129.1M | 100.7G | 224 | 0.875 | bicubic | google/baidu(ve7i) |
*The results are evaluated on the ImageNet2012 validation set.
Note: ResMLP weights are ported from timm and facebookresearch.
We provide a few notebooks on AI Studio to help you get started:
*(coming soon)*
- Python>=3.6
- yaml>=0.2.5
- PaddlePaddle>=2.1.0
- yacs>=0.1.8
ImageNet2012 dataset is used in the following folder structure:
│imagenet/
├──train/
│ ├── n01440764
│ │ ├── n01440764_10026.JPEG
│ │ ├── n01440764_10027.JPEG
│ │ ├── ......
│ ├── ......
├──val/
│ ├── n01440764
│ │ ├── ILSVRC2012_val_00000293.JPEG
│ │ ├── ILSVRC2012_val_00002138.JPEG
│ │ ├── ......
│ ├── ......
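For reference, a dataset laid out this way can be loaded with `paddle.vision` utilities. The sketch below is illustrative only, not the repo's own data pipeline; it assumes the common ImageNet preprocessing implied by the table above (224x224 center crop at crop_pct 0.875, bicubic interpolation):

```python
# Illustrative sketch: build a validation dataset/loader from the folder
# structure above using paddle.vision. The repo's own pipeline may use
# different transforms; the values below are common ImageNet defaults.
import paddle
from paddle.vision import transforms
from paddle.vision.datasets import DatasetFolder

val_transforms = transforms.Compose([
    transforms.Resize(256, interpolation='bicubic'),  # 224 / 0.875 = 256
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

val_dataset = DatasetFolder('/path/to/dataset/imagenet/val',
                            transform=val_transforms)
val_loader = paddle.io.DataLoader(val_dataset, batch_size=16)
```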
To use the model with pretrained weights, download the `.pdparams` weight file and change the related file paths in the following Python scripts. The model config files are located in `./configs/`.

For example, assuming the downloaded weight file is stored in `./resmlp_24_224.pdparams`, the `resmlp_24_224` model can be used in Python as follows:
import paddle
from config import get_config
from resmlp import build_res_mlp as build_model
# config files in ./configs/
config = get_config('./configs/resmlp_24_224.yaml')
# build model
model = build_model(config)
# load pretrained weights
model_state_dict = paddle.load('./resmlp_24_224.pdparams')
model.set_dict(model_state_dict)
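As a quick sanity check (a sketch, not part of the repo's scripts), you can run a dummy forward pass on the model loaded above and count its parameters; for `resmlp_24_224` the count should be close to the 30.0M listed in the table:

```python
# Sketch: dummy forward pass and parameter count for the model built above.
import numpy as np
import paddle

model.eval()
x = paddle.randn([1, 3, 224, 224])         # one fake 224x224 RGB image
with paddle.no_grad():
    logits = model(x)
print(logits.shape)                        # [1, 1000] for ImageNet-1k

num_params = sum(int(np.prod(p.shape)) for p in model.parameters())
print(f'{num_params / 1e6:.1f}M parameters')  # ~30.0M for resmlp_24_224
```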
To evaluate ResMLP model performance on ImageNet2012 with a single GPU, run the following script from the command line:
sh run_eval.sh
or
CUDA_VISIBLE_DEVICES=0 \
python main_single_gpu.py \
-cfg=./configs/resmlp_24_224.yaml \
-dataset=imagenet2012 \
-batch_size=16 \
-data_path=/path/to/dataset/imagenet/val \
-eval \
-pretrained=/path/to/pretrained/model/resmlp_24_224 # .pdparams is NOT needed
Run evaluation using multiple GPUs:
sh run_eval_multi.sh
or
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python main_multi_gpu.py \
-cfg=./configs/resmlp_24_224.yaml \
-dataset=imagenet2012 \
-batch_size=16 \
-data_path=/path/to/dataset/imagenet/val \
-eval \
-pretrained=/path/to/pretrained/model/resmlp_24_224 # .pdparams is NOT needed
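Alternatively, top-1 accuracy can be computed in a few lines of Python. The sketch below is not the repo's evaluation script; it assumes `model` is loaded with pretrained weights as shown earlier and `val_loader` is built as in the data-preparation sketch above:

```python
# Sketch: manual top-1 accuracy over the validation loader.
import paddle

model.eval()
correct, total = 0, 0
with paddle.no_grad():
    for images, labels in val_loader:
        preds = paddle.argmax(model(images), axis=-1)   # predicted class ids
        correct += int((preds == labels).astype('int32').sum())
        total += labels.shape[0]
print(f'top-1 accuracy: {correct / total:.4f}')
```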
To train the ResMLP model on ImageNet2012 with a single GPU, run the following script from the command line:
sh run_train.sh
or
CUDA_VISIBLE_DEVICES=0 \
python main_single_gpu.py \
-cfg=./configs/resmlp_24_224.yaml \
-dataset=imagenet2012 \
-batch_size=32 \
-data_path=/path/to/dataset/imagenet/train
Run training using multiple GPUs:
sh run_train_multi.sh
or
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python main_multi_gpu.py \
-cfg=./configs/resmlp_24_224.yaml \
-dataset=imagenet2012 \
-batch_size=16 \
-data_path=/path/to/dataset/imagenet/train
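For orientation, one bare training step looks roughly like the sketch below. This is illustrative only: the repo's `main_single_gpu.py` / `main_multi_gpu.py` handle the full pipeline (learning-rate schedules, augmentation, AMP, checkpointing), and the hyperparameters here are placeholders, not the paper's settings. `train_loader` is assumed to be built like `val_loader` above, but from the `train/` split:

```python
# Sketch: one illustrative training step (hyperparameters are placeholders).
import paddle

model.train()
criterion = paddle.nn.CrossEntropyLoss()
optimizer = paddle.optimizer.AdamW(learning_rate=1e-3,
                                   parameters=model.parameters(),
                                   weight_decay=0.05)

for images, labels in train_loader:       # assumed loader over .../train
    logits = model(images)                # forward pass
    loss = criterion(logits, labels)
    loss.backward()                       # backpropagate
    optimizer.step()                      # update weights
    optimizer.clear_grad()                # reset gradients for next step
    break                                 # single step for illustration
```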
(coming soon)
@article{touvron2021resmlp,
  title={ResMLP: Feedforward networks for image classification with data-efficient training},
  author={Touvron, Hugo and Bojanowski, Piotr and Caron, Mathilde and Cord, Matthieu and El-Nouby, Alaaeldin and Grave, Edouard and Joulin, Armand and Synnaeve, Gabriel and Verbeek, Jakob and J{\'e}gou, Herv{\'e}},
  journal={arXiv preprint arXiv:2105.03404},
  year={2021}
}