XCiT
Abstract
Following their success in natural language processing, transformers have recently shown much promise for computer vision. The self-attention operation underlying transformers yields global interactions between all tokens, i.e., words or image patches, and enables flexible modelling of image data beyond the local interactions of convolutions. This flexibility, however, comes with a quadratic complexity in time and memory, hindering application to long sequences and high-resolution images. We propose a “transposed” version of self-attention that operates across feature channels rather than tokens, where the interactions are based on the cross-covariance matrix between keys and queries. The resulting cross-covariance attention (XCA) has linear complexity in the number of tokens, and allows efficient processing of high-resolution images. Our cross-covariance image transformer (XCiT) is built upon XCA. It combines the accuracy of conventional transformers with the scalability of convolutional architectures. We validate the effectiveness and generality of XCiT by reporting excellent results on multiple vision benchmarks, including image classification and self-supervised feature learning on ImageNet-1k, object detection and instance segmentation on COCO, and semantic segmentation on ADE20k.
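
To make the “transposed” attention concrete, below is a minimal PyTorch sketch of the XCA operation described in the abstract: per head, queries and keys are L2-normalized along the token axis, and the softmax attention map is computed between the head's feature channels (a head_dim × head_dim matrix scaled by a learnable per-head temperature, as in the paper) rather than between tokens, so the cost grows linearly with the number of tokens N. Module and variable names here are illustrative; this is a sketch of the idea, not the reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossCovarianceAttention(nn.Module):
    """Sketch of cross-covariance attention (XCA).

    Attention is computed between feature channels (head_dim x head_dim
    per head) instead of between tokens (N x N), so time and memory are
    linear in the number of tokens N.
    """

    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.num_heads = num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Learnable per-head temperature scaling the cosine similarities.
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))

    def forward(self, x):
        B, N, C = x.shape
        head_dim = C // self.num_heads
        # (3, B, heads, head_dim, N): channels become the attention axis.
        qkv = (self.qkv(x)
               .reshape(B, N, 3, self.num_heads, head_dim)
               .permute(2, 0, 3, 4, 1))
        q, k, v = qkv.unbind(0)
        # L2-normalize queries and keys along the token axis, so the
        # channel-to-channel attention map holds cosine similarities.
        q = F.normalize(q, dim=-1)
        k = F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.temperature
        attn = attn.softmax(dim=-1)          # (B, heads, head_dim, head_dim)
        out = attn @ v                       # (B, heads, head_dim, N)
        out = out.permute(0, 3, 1, 2).reshape(B, N, C)
        return self.proj(out)

x = torch.randn(2, 196, 192)                 # 14x14 patch tokens, dim 192
y = CrossCovarianceAttention(192, num_heads=4)(x)
print(y.shape)                                # torch.Size([2, 196, 192])
```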

Results and Models
ImageNet-1k
| Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| xcit-nano-12-p16_3rdparty_in1k* | From scratch | 3.05 | 0.56 | 70.35 | 89.98 | | |
| xcit-nano-12-p16_3rdparty-dist_in1k* | Distillation | 3.05 | 0.56 | 72.36 | 91.02 | | |
| xcit-nano-12-p16_3rdparty-dist_in1k-384px* | Distillation | 3.05 | 1.64 | 74.93 | 92.42 | | |
| xcit-nano-12-p8_3rdparty_in1k* | From scratch | 3.05 | 2.16 | 73.80 | 92.08 | | |
| xcit-nano-12-p8_3rdparty-dist_in1k* | Distillation | 3.05 | 2.16 | 76.17 | 93.08 | | |
| xcit-nano-12-p8_3rdparty-dist_in1k-384px* | Distillation | 3.05 | 6.34 | 77.69 | 94.09 | | |
| xcit-tiny-12-p16_3rdparty_in1k* | From scratch | 6.72 | 1.24 | 77.21 | 93.62 | | |
| xcit-tiny-12-p16_3rdparty-dist_in1k* | Distillation | 6.72 | 1.24 | 78.70 | 94.12 | | |
| xcit-tiny-24-p16_3rdparty_in1k* | From scratch | 12.12 | 2.34 | 79.47 | 94.85 | | |
| xcit-tiny-24-p16_3rdparty-dist_in1k* | Distillation | 12.12 | 2.34 | 80.51 | 95.17 | | |
| xcit-tiny-12-p16_3rdparty-dist_in1k-384px* | Distillation | 6.72 | 3.64 | 80.58 | 95.38 | | |
| xcit-tiny-12-p8_3rdparty_in1k* | From scratch | 6.71 | 4.81 | 79.75 | 94.88 | | |
| xcit-tiny-12-p8_3rdparty-dist_in1k* | Distillation | 6.71 | 4.81 | 81.26 | 95.46 | | |
| xcit-tiny-24-p8_3rdparty_in1k* | From scratch | 12.11 | 9.21 | 81.70 | 95.90 | | |
| xcit-tiny-24-p8_3rdparty-dist_in1k* | Distillation | 12.11 | 9.21 | 82.62 | 96.16 | | |
| xcit-tiny-12-p8_3rdparty-dist_in1k-384px* | Distillation | 6.71 | 14.13 | 82.46 | 96.22 | | |
| xcit-tiny-24-p16_3rdparty-dist_in1k-384px* | Distillation | 12.12 | 6.87 | 82.43 | 96.20 | | |
| xcit-tiny-24-p8_3rdparty-dist_in1k-384px* | Distillation | 12.11 | 27.05 | 83.77 | 96.72 | | |
| xcit-small-12-p16_3rdparty_in1k* | From scratch | 26.25 | 4.81 | 81.87 | 95.77 | | |
| xcit-small-12-p16_3rdparty-dist_in1k* | Distillation | 26.25 | 4.81 | 83.12 | 96.41 | | |
| xcit-small-24-p16_3rdparty_in1k* | From scratch | 47.67 | 9.10 | 82.38 | 95.93 | | |
| xcit-small-24-p16_3rdparty-dist_in1k* | Distillation | 47.67 | 9.10 | 83.70 | 96.61 | | |
| xcit-small-12-p16_3rdparty-dist_in1k-384px* | Distillation | 26.25 | 14.14 | 84.74 | 97.19 | | |
| xcit-small-12-p8_3rdparty_in1k* | From scratch | 26.21 | 18.69 | 83.21 | 96.41 | | |
| xcit-small-12-p8_3rdparty-dist_in1k* | Distillation | 26.21 | 18.69 | 83.97 | 96.81 | | |
| xcit-small-24-p16_3rdparty-dist_in1k-384px* | Distillation | 47.67 | 26.72 | 85.10 | 97.32 | | |
| xcit-small-24-p8_3rdparty_in1k* | From scratch | 47.63 | 35.81 | 83.62 | 96.51 | | |
| xcit-small-24-p8_3rdparty-dist_in1k* | Distillation | 47.63 | 35.81 | 84.68 | 97.07 | | |
| xcit-small-12-p8_3rdparty-dist_in1k-384px* | Distillation | 26.21 | 54.92 | 85.12 | 97.31 | | |
| xcit-small-24-p8_3rdparty-dist_in1k-384px* | Distillation | 47.63 | 105.24 | 85.57 | 97.60 | | |
| xcit-medium-24-p16_3rdparty_in1k* | From scratch | 84.40 | 16.13 | 82.56 | 95.82 | | |
| xcit-medium-24-p16_3rdparty-dist_in1k* | Distillation | 84.40 | 16.13 | 84.15 | 96.82 | | |
| xcit-medium-24-p16_3rdparty-dist_in1k-384px* | Distillation | 84.40 | 47.39 | 85.47 | 97.49 | | |
| xcit-medium-24-p8_3rdparty_in1k* | From scratch | 84.32 | 63.52 | 83.61 | 96.23 | | |
| xcit-medium-24-p8_3rdparty-dist_in1k* | Distillation | 84.32 | 63.52 | 85.00 | 97.16 | | |
| xcit-medium-24-p8_3rdparty-dist_in1k-384px* | Distillation | 84.32 | 186.67 | 85.87 | 97.61 | | |
| xcit-large-24-p16_3rdparty_in1k* | From scratch | 189.10 | 35.86 | 82.97 | 95.86 | | |
| xcit-large-24-p16_3rdparty-dist_in1k* | Distillation | 189.10 | 35.86 | 84.61 | 97.07 | | |
| xcit-large-24-p16_3rdparty-dist_in1k-384px* | Distillation | 189.10 | 105.35 | 85.78 | 97.60 | | |
| xcit-large-24-p8_3rdparty_in1k* | From scratch | 188.93 | 141.23 | 84.23 | 96.58 | | |
| xcit-large-24-p8_3rdparty-dist_in1k* | Distillation | 188.93 | 141.23 | 85.14 | 97.32 | | |
| xcit-large-24-p8_3rdparty-dist_in1k-384px* | Distillation | 188.93 | 415.00 | 86.13 | 97.75 | | |
Models with * are converted from the official repo. The config files of these models are only for inference; we do not guarantee their training accuracy, and you are welcome to contribute your reproduction results.
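
Assuming this page accompanies the MMPretrain model zoo (the model names above match its registry naming scheme), inference with one of the converted checkpoints might look like the sketch below. The model name is taken from the table; the image path is a placeholder you should replace with your own file.

```python
from mmpretrain import inference_model

# 'xcit-nano-12-p16_3rdparty_in1k' is a model name from the table above;
# the matching checkpoint is downloaded automatically on first use.
result = inference_model('xcit-nano-12-p16_3rdparty_in1k', 'path/to/image.jpg')
print(result['pred_class'], result['pred_score'])
```
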
Citation
@article{el2021xcit,
  title={XCiT: Cross-Covariance Image Transformers},
  author={El-Nouby, Alaaeldin and Touvron, Hugo and Caron, Mathilde and Bojanowski, Piotr and Douze, Matthijs and Joulin, Armand and Laptev, Ivan and Neverova, Natalia and Synnaeve, Gabriel and Verbeek, Jakob and others},
  journal={arXiv preprint arXiv:2106.09681},
  year={2021}
}