HorNet

class mmpretrain.models.backbones.HorNet(arch='tiny', in_channels=3, drop_path_rate=0.0, scale=0.3333333333333333, use_layer_scale=True, out_indices=(3,), frozen_stages=-1, with_cp=False, gap_before_final_norm=True, init_cfg=None)[source]

HorNet backbone.

A PyTorch implementation of the paper HorNet: Efficient High-Order Spatial Interactions with Recursive Gated Convolutions. Adapted from the official implementation: https://github.com/raoyongming/HorNet

Parameters:
  • arch (str | dict) –

    HorNet architecture.

    If a string, it can be ‘tiny’, ‘small’, ‘base’ or ‘large’. If a dict, it should contain the following keys (see the usage example after this parameter list):

    • base_dim (int): The base dimension of the embedding.

    • depths (List[int]): The number of blocks in each stage.

    • orders (List[int]): The order of gnConv in each stage.

    • dw_cfg (List[dict]): The config of the depth-wise convolution in each stage.

    Defaults to ‘tiny’.

  • in_channels (int) – Number of input image channels. Defaults to 3.

  • drop_path_rate (float) – Stochastic depth rate. Defaults to 0.

  • scale (float) – Scaling parameter of gflayer outputs. Defaults to 1/3.

  • use_layer_scale (bool) – Whether to use layer scale in the HorNet block. Defaults to True.

  • out_indices (Sequence[int]) – Output from which stages. Defaults to (3, ).

  • frozen_stages (int) – Stages to be frozen (stop grad and set eval mode). -1 means not freezing any parameters. Defaults to -1.

  • with_cp (bool) – Whether to use gradient checkpointing. Using checkpointing saves some memory at the cost of slower training. Defaults to False.

  • gap_before_final_norm (bool) – Whether to apply global average pooling to the feature map before the final norm layer. In the official repo, it is only used in the classification task. Defaults to True.

  • init_cfg (dict, optional) – The config for initialization. Defaults to None.
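
Example:

A minimal usage sketch, assuming mmpretrain and torch are installed. The printed shape assumes the defaults gap_before_final_norm=True and out_indices=(3, ), under which the last stage is globally average-pooled into an (N, C) feature vector; the dict arch at the end supplies the four keys described above, with values that simply mirror the predefined ‘tiny’ variant for illustration rather than a recommended configuration.

>>> import torch
>>> from mmpretrain.models.backbones import HorNet
>>> # String arch: pick one of the predefined variants.
>>> model = HorNet(arch='tiny')
>>> inputs = torch.rand(1, 3, 224, 224)
>>> outs = model(inputs)
>>> for out in outs:
...     print(out.shape)
torch.Size([1, 512])
>>> # Dict arch: the same four keys described above. These values
>>> # mirror the 'tiny' variant and are illustrative only.
>>> custom_arch = dict(
...     base_dim=64,
...     depths=[2, 3, 18, 2],
...     orders=[2, 3, 4, 5],
...     dw_cfg=[dict(type='DW', kernel_size=7)] * 4)
>>> model = HorNet(arch=custom_arch)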
