ConvMixer

class mmpretrain.models.backbones.ConvMixer(arch='768/32', in_channels=3, patch_size=7, norm_cfg={'type': 'BN'}, act_cfg={'type': 'GELU'}, out_indices=-1, frozen_stages=0, init_cfg=None)[source]

ConvMixer.

A PyTorch implementation of: Patches Are All You Need?

Modified from the official repo and timm.

Parameters:
  • arch (str | dict) –

    The model’s architecture. If a string, it should be one of the architectures in ConvMixer.arch_settings. If a dict, it should include the following keys (see the usage example after this parameter list):

    • embed_dims (int): The dimensions of patch embedding.

    • depth (int): Number of repetitions of ConvMixer Layer.

    • patch_size (int): The patch size.

    • kernel_size (int): The kernel size of depthwise conv layers.

    Defaults to ‘768/32’.

  • in_channels (int) – Number of input image channels. Defaults to 3.

  • patch_size (int) – The size of one patch in the patch embed layer. Defaults to 7.

  • norm_cfg (dict) – The config dict for norm layers. Defaults to dict(type='BN').

  • act_cfg (dict) – The config dict for activation after each convolution. Defaults to dict(type='GELU').

  • out_indices (Sequence | int) – Output from which stages. Defaults to -1, which means the last stage.

  • frozen_stages (int) – Stages to be frozen (all parameters fixed). Defaults to 0, which means no parameters are frozen.

  • init_cfg (dict, optional) – Initialization config dict.
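
Example:

A minimal usage sketch, assuming mmpretrain and torch are installed. The string form of arch selects a predefined setting; the dict-form values below are illustrative, not one of the predefined settings. The printed shape assumes a 224x224 input with the default patch_size of 7.

>>> import torch
>>> from mmpretrain.models.backbones import ConvMixer
>>> # Predefined architecture string: 768 embedding dims, depth 32.
>>> model = ConvMixer(arch='768/32')
>>> x = torch.rand(1, 3, 224, 224)
>>> feats = model(x)  # tuple with one feature map per index in out_indices
>>> len(feats), feats[-1].shape
(1, torch.Size([1, 768, 32, 32]))
>>> # Custom architecture given as a dict (illustrative values).
>>> custom = ConvMixer(arch=dict(
...     embed_dims=384, depth=8, patch_size=7, kernel_size=9))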
