WindowMSA
- class mmpretrain.models.utils.WindowMSA(embed_dims, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0.0, proj_drop=0.0, init_cfg=None)
Window-based multi-head self-attention (W-MSA) module with relative position bias.
- Parameters:
embed_dims (int) – Number of input channels.
window_size (tuple[int]) – The height and width of the window.
num_heads (int) – Number of attention heads.
qkv_bias (bool, optional) – If True, add a learnable bias to q, k, v. Defaults to True.
qk_scale (float, optional) – Override the default qk scale of head_dim ** -0.5 if set. Defaults to None.
attn_drop (float, optional) – Dropout ratio of attention weights. Defaults to 0.
proj_drop (float, optional) – Dropout ratio of output. Defaults to 0.
init_cfg (dict, optional) – The extra config for initialization. Defaults to None.
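To make the parameters concrete, here is a minimal NumPy sketch of the mechanism this module implements: scaled dot-product attention inside a single window, with a learned relative position bias indexed per token pair. This is an illustration, not the mmpretrain implementation; the names `window_msa` and `relative_position_index` and the use of plain weight matrices (instead of a fused qkv projection with bias and dropout) are simplifications.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relative_position_index(window_size):
    """Index into the (2H-1)*(2W-1) bias table for each pair of window tokens."""
    H, W = window_size
    coords = np.stack(np.meshgrid(np.arange(H), np.arange(W), indexing="ij"))
    coords = coords.reshape(2, -1)                   # (2, N) with N = H*W
    rel = coords[:, :, None] - coords[:, None, :]    # (2, N, N) pairwise offsets
    rel = rel.transpose(1, 2, 0)                     # (N, N, 2)
    rel[:, :, 0] += H - 1                            # shift offsets to start at 0
    rel[:, :, 1] += W - 1
    rel[:, :, 0] *= 2 * W - 1                        # flatten (dy, dx) to one index
    return rel.sum(-1)                               # (N, N)

def window_msa(x, wq, wk, wv, bias_table, window_size, num_heads):
    """x: (B, N, C) tokens of one window batch, N = window_size[0] * window_size[1].

    bias_table: ((2H-1)*(2W-1), num_heads) learnable relative position bias.
    """
    B, N, C = x.shape
    head_dim = C // num_heads
    scale = head_dim ** -0.5                         # the default qk scale
    def split_heads(t):
        return t.reshape(B, N, num_heads, head_dim).transpose(0, 2, 1, 3)
    q, k, v = split_heads(x @ wq), split_heads(x @ wk), split_heads(x @ wv)
    attn = (q * scale) @ k.transpose(0, 1, 3, 2)     # (B, heads, N, N)
    idx = relative_position_index(window_size)       # (N, N)
    attn = attn + bias_table[idx].transpose(2, 0, 1) # add per-head bias
    attn = softmax(attn)
    return (attn @ v).transpose(0, 2, 1, 3).reshape(B, N, C)
```

The qk_scale parameter above corresponds to overriding `scale`; attn_drop and proj_drop would apply dropout to `attn` and to the final projection output, both omitted here for brevity.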