WindowMSA

class mmpretrain.models.utils.WindowMSA(embed_dims, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0.0, proj_drop=0.0, init_cfg=None)

Window based multi-head self-attention (W-MSA) module with relative position bias.

Parameters:
  • embed_dims (int) – Number of input channels.

  • window_size (tuple[int]) – The height and width of the window.

  • num_heads (int) – Number of attention heads.

  • qkv_bias (bool, optional) – If True, add a learnable bias to q, k, v. Defaults to True.

  • qk_scale (float, optional) – Override default qk scale of head_dim ** -0.5 if set. Defaults to None.

  • attn_drop (float, optional) – Dropout ratio of attention weight. Defaults to 0.

  • proj_drop (float, optional) – Dropout ratio of output. Defaults to 0.

  • init_cfg (dict, optional) – The extra config for initialization. Defaults to None.
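
Example (a minimal construction sketch, not part of the upstream docs; embed_dims=96, window_size=(7, 7), and num_heads=3 are illustrative values, with embed_dims assumed divisible by num_heads):

>>> from mmpretrain.models.utils import WindowMSA
>>> attn = WindowMSA(embed_dims=96, window_size=(7, 7), num_heads=3)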

forward(x, mask=None)
Parameters:
  • x (tensor) – Input features with shape (num_windows*B, N, C), where N = Wh*Ww and C = embed_dims.

  • mask (tensor, optional) – Mask with shape (num_windows, Wh*Ww, Wh*Ww); values should lie in (-inf, 0]. Defaults to None.
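
Example (a minimal forward sketch continuing the construction above; the batch size and window count are illustrative, and N = Wh*Ww = 49 for a 7x7 window):

>>> import torch
>>> num_windows, B = 4, 2
>>> x = torch.rand(num_windows * B, 49, 96)   # (num_windows*B, N, C)
>>> mask = torch.zeros(num_windows, 49, 49)   # values in (-inf, 0]; zeros = no masking
>>> out = attn(x, mask=mask)                  # output keeps the input shape
>>> out.shape
torch.Size([8, 49, 96])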
