MoCoV3

Abstract

This paper does not describe a novel method. Instead, it studies a straightforward, incremental, yet must-know baseline given the recent progress in computer vision: self-supervised learning for Vision Transformers (ViT). While the training recipes for standard convolutional networks have been highly mature and robust, the recipes for ViT are yet to be built, especially in the self-supervised scenarios where training becomes more challenging. In this work, we go back to basics and investigate the effects of several fundamental components for training self-supervised ViT. We observe that instability is a major issue that degrades accuracy, and it can be hidden by apparently good results. We reveal that these results are indeed partial failure, and they can be improved when training is made more stable. We benchmark ViT results in MoCo v3 and several other self-supervised frameworks, with ablations in various aspects. We discuss the currently positive evidence as well as challenges and open questions. We hope that this work will provide useful data points and experience for future research.
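
The MoCo v3 framework benchmarked here pairs a query encoder (backbone, projection head, and an extra prediction head) with a momentum-updated key encoder, and trains them with a symmetrized InfoNCE loss over two augmented crops of each image. The sketch below is a minimal, self-contained illustration of that objective, not the mmpretrain implementation: `backbone`, `proj_head`, and `pred_head` are placeholder modules, and the momentum (0.99) and temperature (0.2) are the base values reported in the paper.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


def contrastive_loss(q, k, tau=0.2):
    """One half of the symmetrized MoCo v3 InfoNCE loss for a (query, key) pair."""
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)
    logits = q @ k.t() / tau                           # (N, N) cosine similarities
    labels = torch.arange(q.size(0), device=q.device)  # positives on the diagonal
    return F.cross_entropy(logits, labels) * (2 * tau)


class MoCoV3Sketch(nn.Module):
    """Query encoder + momentum key encoder trained with a symmetrized contrastive loss."""

    def __init__(self, backbone, proj_head, pred_head, momentum=0.99):
        super().__init__()
        self.base_encoder = nn.Sequential(backbone, proj_head)
        self.momentum_encoder = copy.deepcopy(self.base_encoder)
        for p in self.momentum_encoder.parameters():
            p.requires_grad = False                    # keys carry no gradient
        self.predictor = pred_head
        self.m = momentum

    @torch.no_grad()
    def _update_momentum_encoder(self):
        # EMA update: theta_k <- m * theta_k + (1 - m) * theta_q
        for p_q, p_k in zip(self.base_encoder.parameters(),
                            self.momentum_encoder.parameters()):
            p_k.data.mul_(self.m).add_(p_q.data, alpha=1.0 - self.m)

    def forward(self, x1, x2):
        # Two augmented crops -> queries (with predictor) and momentum keys.
        q1 = self.predictor(self.base_encoder(x1))
        q2 = self.predictor(self.base_encoder(x2))
        with torch.no_grad():
            self._update_momentum_encoder()
            k1 = self.momentum_encoder(x1)
            k2 = self.momentum_encoder(x2)
        # Symmetrized loss between crossed crops.
        return contrastive_loss(q1, k2) + contrastive_loss(q2, k1)
```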

How to use it?

```python
from mmpretrain import inference_model

# Run single-image classification with the MoCo v3 ResNet-50 linear-probe checkpoint.
predict = inference_model('resnet50_mocov3-100e-pre_8xb128-linear-coslr-90e_in1k', 'demo/bird.JPEG')
print(predict['pred_class'])
print(predict['pred_score'])
```
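
Beyond the classification inferencer above, the self-supervised checkpoints listed below can also be loaded directly. The snippet is a sketch of one possible workflow built on mmpretrain's `list_models` and `get_model` helpers; calling `model.backbone` on an input tensor to obtain raw features is an assumption about the loaded model's structure, not usage documented on this page.

```python
import torch
from mmpretrain import get_model, list_models

# Browse the MoCo v3 checkpoints registered in mmpretrain.
print(list_models(pattern='mocov3'))

# Load a self-supervised pretrained model and use its backbone as a feature extractor.
model = get_model('mocov3_resnet50_8xb512-amp-coslr-100e_in1k', pretrained=True)
inputs = torch.rand(1, 3, 224, 224)
feats = model.backbone(inputs)   # tuple of feature maps from the ResNet-50 backbone
print([f.shape for f in feats])
```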

Models and results

Pretrained models

| Model                                            | Params (M) | Flops (G) | Config | Download     |
| :----------------------------------------------- | ---------: | --------: | :----- | :----------- |
| mocov3_resnet50_8xb512-amp-coslr-100e_in1k       |      68.01 |      4.11 | config | model \| log |
| mocov3_resnet50_8xb512-amp-coslr-300e_in1k       |      68.01 |      4.11 | config | model \| log |
| mocov3_resnet50_8xb512-amp-coslr-800e_in1k       |      68.01 |      4.11 | config | model \| log |
| mocov3_vit-small-p16_16xb256-amp-coslr-300e_in1k |      84.27 |      4.61 | config | model \| log |
| mocov3_vit-base-p16_16xb256-amp-coslr-300e_in1k  |     215.68 |     17.58 | config | model \| log |
| mocov3_vit-large-p16_64xb64-amp-coslr-300e_in1k  |     652.78 |     61.60 | config | model \| log |

Image Classification on ImageNet-1k

| Model                                                 | Pretrain          | Params (M) | Flops (G) | Top-1 (%) | Config | Download     |
| :---------------------------------------------------- | :---------------- | ---------: | --------: | --------: | :----- | :----------- |
| resnet50_mocov3-100e-pre_8xb128-linear-coslr-90e_in1k | MOCOV3 100-Epochs |      25.56 |      4.11 |     69.60 | config | model \| log |
| resnet50_mocov3-300e-pre_8xb128-linear-coslr-90e_in1k | MOCOV3 300-Epochs |      25.56 |      4.11 |     72.80 | config | model \| log |
| resnet50_mocov3-800e-pre_8xb128-linear-coslr-90e_in1k | MOCOV3 800-Epochs |      25.56 |      4.11 |     74.40 | config | model \| log |
| vit-small-p16_mocov3-pre_8xb128-linear-coslr-90e_in1k | MOCOV3            |      22.05 |      4.61 |     73.60 | config | model \| log |
| vit-base-p16_mocov3-pre_8xb64-coslr-150e_in1k         | MOCOV3            |      86.57 |     17.58 |     83.00 | config | model \| log |
| vit-base-p16_mocov3-pre_8xb128-linear-coslr-90e_in1k  | MOCOV3            |      86.57 |     17.58 |     76.90 | config | model \| log |
| vit-large-p16_mocov3-pre_8xb64-coslr-100e_in1k        | MOCOV3            |     304.33 |     61.60 |     83.70 | config | model \| log |
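
The entries whose names contain `linear-coslr-90e` are linear-probing results: the pretrained backbone is kept frozen and only a linear classifier on top is trained on ImageNet-1k for 90 epochs with a cosine learning-rate schedule. The sketch below illustrates that protocol in generic PyTorch; `backbone`, the 2048-d feature size, and the optimizer settings are illustrative placeholders rather than values taken from the mmpretrain configs.

```python
import torch
import torch.nn as nn


def linear_probe_step(backbone, linear_head, optimizer, images, labels):
    """One training step of linear probing: frozen backbone, trainable linear head."""
    backbone.eval()                      # keep the backbone frozen (no BN updates)
    with torch.no_grad():
        feats = backbone(images)         # e.g. (N, 2048) pooled features
    logits = linear_head(feats)
    loss = nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()                      # gradients flow only into the linear head
    optimizer.step()
    return loss.item()


# Hypothetical usage: 2048-d ResNet-50 features and 1000 ImageNet classes.
# backbone = ...  # pretrained MoCo v3 backbone returning pooled features
# head = nn.Linear(2048, 1000)
# opt = torch.optim.SGD(head.parameters(), lr=0.1, momentum=0.9)
```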

Citation

@InProceedings{Chen_2021_ICCV,
    title     = {An Empirical Study of Training Self-Supervised Vision Transformers},
    author    = {Chen, Xinlei and Xie, Saining and He, Kaiming},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    year      = {2021}
}