EfficientNetV2

class mmpretrain.models.backbones.EfficientNetV2(arch='s', in_channels=3, drop_path_rate=0.0, out_indices=(-1,), frozen_stages=0, conv_cfg={'type': 'Conv2dAdaptivePadding'}, norm_cfg={'eps': 0.001, 'momentum': 0.1, 'type': 'BN'}, act_cfg={'type': 'Swish'}, norm_eval=False, with_cp=False, init_cfg=[{'type': 'Kaiming', 'layer': 'Conv2d'}, {'type': 'Constant', 'layer': ['_BatchNorm', 'GroupNorm'], 'val': 1}])[source]

EfficientNetV2 backbone.

A PyTorch implementation of the backbone introduced in EfficientNetV2: Smaller Models and Faster Training.

Parameters:
  • arch (str) – Architecture of EfficientNetV2. Defaults to 's'.

  • in_channels (int) – Number of input image channels. Defaults to 3.

  • drop_path_rate (float) – The ratio of the stochastic depth. Defaults to 0.0.

  • out_indices (Sequence[int]) – Output from which stages. Defaults to (-1, ).

  • frozen_stages (int) – Stages to be frozen (all parameters fixed). Defaults to 0, which means not freezing any parameters.

  • conv_cfg (dict) – Config dict for convolution layer. Defaults to dict(type='Conv2dAdaptivePadding'), which pads the input adaptively as in the original implementation.

  • norm_cfg (dict) – Config dict for normalization layer. Defaults to dict(type='BN', eps=1e-3, momentum=0.1).

  • act_cfg (dict) – Config dict for activation layer. Defaults to dict(type='Swish').

  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Defaults to False.

  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Defaults to False.
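A minimal sketch of how this backbone might appear in an OpenMMLab-style config (only the backbone section is shown; the argument values here are illustrative, not recommended settings):

```python
# Illustrative config fragment for an mmpretrain model using the
# EfficientNetV2 backbone. Surrounding model fields are omitted.
backbone = dict(
    type='EfficientNetV2',
    arch='s',              # one of the predefined EfficientNetV2 variants
    in_channels=3,         # RGB input
    drop_path_rate=0.1,    # enable stochastic depth (default is 0.0)
    out_indices=(-1,),     # return only the last stage's feature map
    frozen_stages=0,       # 0 = train all stages
    norm_eval=False,       # keep BN layers in train mode
    with_cp=False,         # no gradient checkpointing
)
```

When intermediate feature maps are needed (e.g. for a detection or segmentation neck), out_indices can list several stages, such as out_indices=(2, 4, 6); each listed stage then contributes one output tensor.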