
DeiT III: Revenge of the ViT

Abstract

A Vision Transformer (ViT) is a simple neural architecture amenable to serving several computer vision tasks. It has limited built-in architectural priors, in contrast to more recent architectures that incorporate priors either about the input data or about specific tasks. Recent works show that ViTs benefit from self-supervised pre-training, in particular BERT-like pre-training such as BEiT. In this paper, we revisit the supervised training of ViTs. Our procedure builds upon and simplifies a recipe introduced for training ResNet-50. It includes a new, simple data-augmentation procedure with only 3 augmentations, closer to the practice in self-supervised learning. Our evaluations on image classification (ImageNet-1k with and without pre-training on ImageNet-21k), transfer learning and semantic segmentation show that our procedure outperforms previous fully supervised training recipes for ViT by a large margin. It also reveals that the performance of our ViTs trained with supervision is comparable to that of more recent architectures. Our results could serve as better baselines for recent self-supervised approaches demonstrated on ViT.
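To illustrate the kind of pipeline the abstract refers to, below is a minimal sketch of a 3-Augment-style data augmentation written with torchvision. It assumes the three augmentations are grayscale, solarization and Gaussian blur, with a simple random crop and color jitter around them; the crop size and parameter values are illustrative assumptions, not the exact training hyper-parameters.

# Minimal sketch of a 3-Augment-style pipeline (torchvision); the parameter
# values below are illustrative assumptions, not the official training recipe.
from torchvision import transforms

three_augment = transforms.RandomChoice([
    transforms.Grayscale(num_output_channels=3),       # drop color information
    transforms.RandomSolarize(threshold=128, p=1.0),    # invert pixels above the threshold
    transforms.GaussianBlur(kernel_size=5),             # blur with a randomly sampled sigma
])

train_pipeline = transforms.Compose([
    transforms.RandomResizedCrop(224),       # simple random crop to the training resolution
    transforms.RandomHorizontalFlip(),
    three_augment,                           # apply exactly one of the three augmentations
    transforms.ColorJitter(0.3, 0.3, 0.3),   # brightness / contrast / saturation jitter
    transforms.ToTensor(),
])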

Usage

from mmpretrain import inference_model

# Run single-image inference with the DeiT III small model; the weights are
# downloaded automatically on first use.
predict = inference_model('deit3-small-p16_3rdparty_in1k', 'demo/bird.JPEG')
print(predict['pred_class'])   # predicted category name
print(predict['pred_score'])   # confidence of the prediction
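
Besides inference_model, the mmpretrain API also provides list_models and get_model, which can be used to discover the available DeiT III checkpoints and build one with its pre-trained weights loaded; a short example:

from mmpretrain import list_models, get_model

# List the DeiT III variants registered in the model zoo.
print(list_models('deit3*'))

# Build the model and load its pre-trained (3rd-party converted) weights.
model = get_model('deit3-small-p16_3rdparty_in1k', pretrained=True)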

Models and results

Image Classification on ImageNet-1k

| Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :--- | :--- | ---: | ---: | ---: | ---: | :---: | :---: |
| deit3-small-p16_3rdparty_in1k* | From scratch | 22.06 | 4.61 | 81.35 | 95.31 | config | model |
| deit3-small-p16_3rdparty_in1k-384px* | From scratch | 22.21 | 15.52 | 83.43 | 96.68 | config | model |
| deit3-small-p16_in21k-pre_3rdparty_in1k* | ImageNet-21k | 22.06 | 4.61 | 83.06 | 96.77 | config | model |
| deit3-small-p16_in21k-pre_3rdparty_in1k-384px* | ImageNet-21k | 22.21 | 15.52 | 84.84 | 97.48 | config | model |
| deit3-medium-p16_3rdparty_in1k* | From scratch | 38.85 | 8.00 | 82.99 | 96.22 | config | model |
| deit3-medium-p16_in21k-pre_3rdparty_in1k* | ImageNet-21k | 38.85 | 8.00 | 84.56 | 97.19 | config | model |
| deit3-base-p16_3rdparty_in1k* | From scratch | 86.59 | 17.58 | 83.80 | 96.55 | config | model |
| deit3-base-p16_3rdparty_in1k-384px* | From scratch | 86.88 | 55.54 | 85.08 | 97.25 | config | model |
| deit3-base-p16_in21k-pre_3rdparty_in1k* | ImageNet-21k | 86.59 | 17.58 | 85.70 | 97.75 | config | model |
| deit3-base-p16_in21k-pre_3rdparty_in1k-384px* | ImageNet-21k | 86.88 | 55.54 | 86.73 | 98.11 | config | model |
| deit3-large-p16_3rdparty_in1k* | From scratch | 304.37 | 61.60 | 84.87 | 97.01 | config | model |
| deit3-large-p16_3rdparty_in1k-384px* | From scratch | 304.76 | 191.21 | 85.82 | 97.60 | config | model |
| deit3-large-p16_in21k-pre_3rdparty_in1k* | ImageNet-21k | 304.37 | 61.60 | 86.97 | 98.24 | config | model |
| deit3-large-p16_in21k-pre_3rdparty_in1k-384px* | ImageNet-21k | 304.76 | 191.21 | 87.73 | 98.51 | config | model |
| deit3-huge-p14_3rdparty_in1k* | From scratch | 632.13 | 167.40 | 85.21 | 97.36 | config | model |
| deit3-huge-p14_in21k-pre_3rdparty_in1k* | ImageNet-21k | 632.13 | 167.40 | 87.19 | 98.26 | config | model |

Models with * are converted from the official repo. The config files of these models are only for inference. We haven't reproduced the training results.

Citation

@article{Touvron2022DeiTIR,
  title={DeiT III: Revenge of the ViT},
  author={Hugo Touvron and Matthieu Cord and Herve Jegou},
  journal={arXiv preprint arXiv:2204.07118},
  year={2022},
}