ShiftWindowMSA

class mmpretrain.models.utils.ShiftWindowMSA(embed_dims, num_heads, window_size, shift_size=0, dropout_layer={'drop_prob': 0.0, 'type': 'DropPath'}, pad_small_map=False, window_msa=WindowMSA, init_cfg=None, **kwargs)

Shift Window Multihead Self-Attention Module.

Parameters:
  • embed_dims (int) – Number of input channels.

  • num_heads (int) – Number of attention heads.

  • window_size (int) – The height and width of the window.

  • shift_size (int, optional) – The shift step of each window towards the bottom-right. If zero, the module acts as a regular window MSA. Defaults to 0.

  • dropout_layer (dict, optional) – The config of the dropout layer applied to the output. Defaults to dict(type='DropPath', drop_prob=0.).

  • pad_small_map (bool) – If True, pad feature maps smaller than the window size up to the window size, which is commonly used in detection and segmentation. If False, skip the window shift and shrink the window size to the size of the feature map, which is commonly used in classification. Defaults to False.

  • window_msa (Callable) – The class used to build the window multi-head self-attention module. Defaults to WindowMSA.

  • init_cfg (dict, optional) – The extra config for initialization. Defaults to None.

  • **kwargs – Other keyword arguments to build the window multi-head attention module.
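A minimal usage sketch follows, assuming the module's forward call accepts the flattened token sequence together with the feature-map shape (hw_shape); the concrete tensor sizes are illustrative only.

    import torch

    from mmpretrain.models.utils import ShiftWindowMSA

    # Shifted-window attention over a 14x14 feature map with 7x7 windows,
    # shifted by 3 tokens (as in the second block of a Swin stage).
    attn = ShiftWindowMSA(
        embed_dims=96,
        num_heads=3,
        window_size=7,
        shift_size=3,
    )

    # Tokens are flattened to (batch, H*W, embed_dims); hw_shape restores the
    # spatial layout so windows can be partitioned, shifted and re-merged.
    x = torch.randn(2, 14 * 14, 96)
    out = attn(x, hw_shape=(14, 14))
    print(out.shape)  # torch.Size([2, 196, 96]), same shape as the input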
