Visual Attention Network
Abstract
While originally designed for natural language processing (NLP) tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple and efficient, VAN outperforms the state-of-the-art vision transformers and convolutional neural networks by a large margin in extensive experiments, including image classification, object detection, semantic segmentation, instance segmentation, etc.
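To make the LKA idea concrete, below is a minimal PyTorch sketch of a large kernel attention block. The decomposition into a depth-wise convolution, a depth-wise dilated convolution and a point-wise convolution follows the paper's description; the specific kernel sizes (5x5, 7x7 with dilation 3) and all class/argument names here are illustrative assumptions, not the MMClassification implementation.

```python
import torch
import torch.nn as nn


class LKA(nn.Module):
    """Sketch of Large Kernel Attention: approximate a large-kernel conv with
    a depth-wise conv, a depth-wise dilated conv and a point-wise conv, then
    use the result as an attention map over the input features."""

    def __init__(self, channels: int):
        super().__init__()
        # Depth-wise 5x5 convolution captures local spatial context.
        self.dw_conv = nn.Conv2d(channels, channels, 5, padding=2, groups=channels)
        # Depth-wise dilated 7x7 convolution (dilation 3) cheaply enlarges the
        # receptive field to emulate a large kernel (long-range correlation).
        self.dw_dilated = nn.Conv2d(
            channels, channels, 7, padding=9, dilation=3, groups=channels)
        # Point-wise 1x1 convolution mixes channels (channel adaptability).
        self.pw_conv = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.pw_conv(self.dw_dilated(self.dw_conv(x)))
        # Element-wise product: reweight the input by the attention map.
        return x * attn


if __name__ == '__main__':
    x = torch.randn(1, 32, 56, 56)
    print(LKA(32)(x).shape)  # torch.Size([1, 32, 56, 56])
```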

Results and models
ImageNet-1k
| Model | Pretrain | Resolution | Params(M) | Flops(G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| VAN-B0\* | From scratch | 224x224 | 4.11 | 0.88 | 75.41 | 93.02 | | |
| VAN-B1\* | From scratch | 224x224 | 13.86 | 2.52 | 81.01 | 95.63 | | |
| VAN-B2\* | From scratch | 224x224 | 26.58 | 5.03 | 82.80 | 96.21 | | |
| VAN-B3\* | From scratch | 224x224 | 44.77 | 8.99 | 83.86 | 96.73 | | |
| VAN-B4\* | From scratch | 224x224 | 60.28 | 12.22 | 84.13 | 96.86 | | |
Models with \* are converted from the official repo. The config files of these models are only for validation; we don't guarantee their training accuracy and welcome you to contribute your reproduced results.
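As a quick sanity check of a converted checkpoint, a minimal inference sketch with the MMClassification 0.x Python API could look like the following. The config and checkpoint paths are placeholders; substitute the files linked in the table above.

```python
# Minimal inference sketch (MMClassification 0.x API); paths are placeholders.
from mmcls.apis import inference_model, init_model

config_file = 'configs/van/van-b2_8xb128_in1k.py'  # placeholder config path
checkpoint_file = 'van-b2_converted.pth'           # placeholder checkpoint path

model = init_model(config_file, checkpoint_file, device='cuda:0')  # or 'cpu'
result = inference_model(model, 'demo/demo.JPEG')  # any test image path
print(result['pred_class'], result['pred_score'])
```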
Pre-trained Models
The models pre-trained on ImageNet-21k are used to fine-tune on downstream tasks (a fine-tuning config sketch follows the table below).
| Model | Pretrain | Resolution | Params(M) | Flops(G) | Download |
| :---: | :---: | :---: | :---: | :---: | :---: |
| VAN-B4\* | ImageNet-21k | 224x224 | 60.28 | 12.22 | |
| VAN-B5\* | ImageNet-21k | 224x224 | 89.97 | 17.21 | |
| VAN-B6\* | ImageNet-21k | 224x224 | 283.9 | 55.28 | |
Models with \* are converted from the official repo.
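A minimal sketch of how one of these ImageNet-21k checkpoints could be plugged into an MMClassification 0.x config for fine-tuning is shown below. The base config path, checkpoint path, and number of classes are placeholders for your downstream setup.

```python
# Fine-tuning config sketch (MMClassification 0.x style); paths are placeholders.
_base_ = ['../van/van-b4_8xb128_in1k.py']  # placeholder base config

model = dict(
    backbone=dict(
        init_cfg=dict(
            type='Pretrained',
            checkpoint='van-b4_in21k_pretrained.pth',  # placeholder path
            prefix='backbone')),
    head=dict(num_classes=100))  # adapt the head to the downstream dataset
```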
Citation
@article{guo2022visual,
  title={Visual Attention Network},
  author={Guo, Meng-Hao and Lu, Cheng-Ze and Liu, Zheng-Ning and Cheng, Ming-Ming and Hu, Shi-Min},
  journal={arXiv preprint arXiv:2202.09741},
  year={2022}
}