
> **Note**
>
> You are reading the documentation for MMClassification 0.x, which will be deprecated at the end of 2022. We recommend you upgrade to MMClassification 1.0 to enjoy the fruitful new features and better performance brought by OpenMMLab 2.0. Check the installation tutorial, migration tutorial and changelog for more details.

# Visual Attention Network

## Abstract

While originally designed for natural language processing (NLP) tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple and efficient, VAN outperforms the state-of-the-art vision transformers and convolutional neural networks by a large margin in extensive experiments, including image classification, object detection, semantic segmentation, and instance segmentation.
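For reference, the official implementation realizes LKA by decomposing a large-kernel convolution (approximating a 21x21 kernel) into a 5x5 depth-wise convolution, a 7x7 depth-wise convolution with dilation 3, and a 1x1 point-wise convolution, whose output re-weights the input as an attention map. A minimal PyTorch sketch of this idea (module and variable names are ours, not MMClassification's):

```python
import torch
import torch.nn as nn


class LKA(nn.Module):
    """Large Kernel Attention: a sketch of the paper's decomposition."""

    def __init__(self, dim):
        super().__init__()
        # Local spatial context: 5x5 depth-wise convolution.
        self.dw_conv = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)
        # Long-range dependence: 7x7 depth-wise dilated convolution
        # (dilation 3), keeping the cost linear in image size.
        self.dw_d_conv = nn.Conv2d(dim, dim, 7, padding=9, groups=dim, dilation=3)
        # Channel adaptability: 1x1 point-wise convolution mixes channels.
        self.pw_conv = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        attn = self.pw_conv(self.dw_d_conv(self.dw_conv(x)))
        # Use the result as an attention map to re-weight the input.
        return x * attn


# Smoke test: the output keeps the input shape.
x = torch.randn(1, 64, 56, 56)
print(LKA(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```

Note that the attention map is applied by element-wise multiplication rather than by softmax-normalized dot products, which is how LKA sidesteps the quadratic cost of self-attention on high-resolution inputs.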

## Results and models

### ImageNet-1k

| Model   | Pretrain     | Resolution | Params(M) | Flops(G) | Top-1 (%) | Top-5 (%) | Config | Download |
|:-------:|:------------:|:----------:|:---------:|:--------:|:---------:|:---------:|:------:|:--------:|
| VAN-B0* | From scratch | 224x224    | 4.11      | 0.88     | 75.41     | 93.02     | config | model    |
| VAN-B1* | From scratch | 224x224    | 13.86     | 2.52     | 81.01     | 95.63     | config | model    |
| VAN-B2* | From scratch | 224x224    | 26.58     | 5.03     | 82.80     | 96.21     | config | model    |
| VAN-B3* | From scratch | 224x224    | 44.77     | 8.99     | 83.86     | 96.73     | config | model    |
| VAN-B4* | From scratch | 224x224    | 60.28     | 12.22    | 84.13     | 96.86     | config | model    |

*Models marked with \* are converted from the official repo. The config files of these models are only for validation; we don't guarantee the training accuracy of these configs and welcome you to contribute your reproduction results.*
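Since the configs above are only for validation, a quick sanity check of a converted checkpoint can be run through the high-level inference API of MMClassification 0.x. A sketch, where the config and checkpoint paths are placeholders to be replaced with the actual files from the table above:

```python
from mmcls.apis import inference_model, init_model

# Placeholder paths: substitute the actual VAN config from this repo and
# the checkpoint downloaded from the table above.
config_file = 'configs/van/van-b0_8xb128_in1k.py'
checkpoint_file = 'van-b0_converted.pth'

model = init_model(config_file, checkpoint_file, device='cuda:0')
result = inference_model(model, 'demo/demo.JPEG')
print(result)  # dict with 'pred_label', 'pred_score' and 'pred_class'
```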

### Pre-trained Models

The models pre-trained on ImageNet-21k are used to fine-tune on downstream tasks; a fine-tuning config sketch follows the table.

| Model   | Pretrain     | Resolution | Params(M) | Flops(G) | Download |
|:-------:|:------------:|:----------:|:---------:|:--------:|:--------:|
| VAN-B4* | ImageNet-21k | 224x224    | 60.28     | 12.22    | model    |
| VAN-B5* | ImageNet-21k | 224x224    | 89.97     | 17.21    | model    |
| VAN-B6* | ImageNet-21k | 224x224    | 283.9     | 55.28    | model    |

*Models with * are converted from the official repo.
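To fine-tune from one of these ImageNet-21k checkpoints, the usual MMClassification 0.x pattern is to point the backbone's `init_cfg` at the downloaded weights in your config. A hedged sketch, with placeholder file names:

```python
# Hypothetical fine-tuning config; both file names below are placeholders.
_base_ = ['./van-b4_8xb128_in1k.py']

model = dict(
    backbone=dict(
        init_cfg=dict(
            type='Pretrained',
            # Checkpoint downloaded from the table above.
            checkpoint='van-b4_in21k_converted.pth',
            prefix='backbone',
        )),
    # Adjust the classification head to the downstream label set.
    head=dict(num_classes=1000),
)
```

Here `prefix='backbone'` tells the loader to keep only the keys under `backbone.` and strip that prefix, which is needed when the checkpoint stores a whole classifier rather than a bare backbone; drop it if the weights are already backbone-only.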

## Citation

```bibtex
@article{guo2022visual,
  title={Visual Attention Network},
  author={Guo, Meng-Hao and Lu, Cheng-Ze and Liu, Zheng-Ning and Cheng, Ming-Ming and Hu, Shi-Min},
  journal={arXiv preprint arXiv:2202.09741},
  year={2022}
}
```