# DenseCL

## Abstract

To date, most existing self-supervised learning methods are designed and optimized for image classification. These pre-trained models can be sub-optimal for dense prediction tasks due to the discrepancy between image-level prediction and pixel-level prediction. To fill this gap, we aim to design an effective, dense self-supervised learning method that directly works at the level of pixels (or local features) by taking into account the correspondence between local features. We present dense contrastive learning (DenseCL), which implements self-supervised learning by optimizing a pairwise contrastive (dis)similarity loss at the pixel level between two views of input images.
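
The core idea above is an InfoNCE-style contrastive loss applied per spatial location, with cross-view positives found by feature matching. Below is a minimal PyTorch sketch of such a pixel-level loss, not the authors' reference implementation: the function name, tensor shapes, and in-batch negative sampling are illustrative assumptions (the paper draws negatives from a MoCo-style queue of features from other images).

```python
import torch
import torch.nn.functional as F

def dense_contrastive_loss(q, k, temperature=0.2):
    """Sketch of a pixel-level contrastive loss.

    q, k: dense projections of two augmented views, shape (B, C, H, W).
    """
    b, c, h, w = q.shape
    q = F.normalize(q.flatten(2), dim=1)  # (B, C, HW), unit-norm per location
    k = F.normalize(k.flatten(2), dim=1)  # (B, C, HW)

    # Match each local feature in view 1 to its most similar location in
    # view 2: the cross-view correspondence used to define positives.
    sim = torch.einsum('bci,bcj->bij', q, k)  # (B, HW, HW)
    match = sim.argmax(dim=2)                 # (B, HW)
    k_pos = torch.gather(k, 2, match.unsqueeze(1).expand(-1, c, -1))

    # Positive logit: matched pair. Negative logits: all local features in
    # the batch (for simplicity this includes the positive location; the
    # paper instead uses a memory queue of other images' features).
    l_pos = (q * k_pos).sum(dim=1).reshape(-1, 1)         # (B*HW, 1)
    neg = k.permute(0, 2, 1).reshape(-1, c)               # (B*HW, C)
    l_neg = q.permute(0, 2, 1).reshape(-1, c) @ neg.t()   # (B*HW, B*HW)

    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(logits.shape[0], dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)
```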

## Usage

```python
from mmpretrain import inference_model

# Run single-image inference with the linear-eval classifier trained on
# DenseCL-pretrained features.
predict = inference_model('resnet50_densecl-pre_8xb32-linear-steplr-100e_in1k', 'demo/bird.JPEG')
print(predict['pred_class'])
print(predict['pred_score'])
```
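
For dense downstream tasks you usually want the self-supervised checkpoint itself rather than the linear-eval classifier. The sketch below uses mmpretrain's `get_model` API; it assumes the pre-trained checkpoint in the table further down can be fetched by its model name and that the model exposes `extract_feat` for raw backbone features.

```python
import torch
from mmpretrain import get_model

# Load the DenseCL self-supervised checkpoint (name from the table below).
model = get_model('densecl_resnet50_8xb32-coslr-200e_in1k', pretrained=True)

# Extract backbone feature maps for a dummy batch; in practice, feed
# normalized images. A ResNet-50 backbone returns a tuple of stage outputs.
inputs = torch.rand(1, 3, 224, 224)
feats = model.extract_feat(inputs)
print([f.shape for f in feats])
```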

## Models and results

### Pretrained models

| Model                                  | Params (M) | Flops (G) | Config | Download     |
| :------------------------------------- | :--------: | :-------: | :----: | :----------: |
| densecl_resnet50_8xb32-coslr-200e_in1k | 64.85      | 4.11      | config | model \| log |

### Image Classification on ImageNet-1k

| Model                                              | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Config | Download     |
| :------------------------------------------------- | :------: | :--------: | :-------: | :-------: | :----: | :----------: |
| resnet50_densecl-pre_8xb32-linear-steplr-100e_in1k | DenseCL  | 25.56      | 4.11      | 63.50     | config | model \| log |

## Citation

```bibtex
@inproceedings{wang2021dense,
  title={Dense contrastive learning for self-supervised visual pre-training},
  author={Wang, Xinlong and Zhang, Rufeng and Shen, Chunhua and Kong, Tao and Li, Lei},
  booktitle={CVPR},
  year={2021}
}
```