HorNet¶
- class mmpretrain.models.backbones.HorNet(arch='tiny', in_channels=3, drop_path_rate=0.0, scale=0.3333333333333333, use_layer_scale=True, out_indices=(3,), frozen_stages=-1, with_cp=False, gap_before_final_norm=True, init_cfg=None)[source]¶
HorNet backbone.
A PyTorch implementation of the paper HorNet: Efficient High-Order Spatial Interactions with Recursive Gated Convolutions. Inspired by https://github.com/raoyongming/HorNet.
- Parameters:
arch (str | dict) – HorNet architecture.
If a string, choose from ‘tiny’, ‘small’, ‘base’ and ‘large’. If a dict, it should have the following keys:
base_dim (int): The base dimension of the embedding.
depths (List[int]): The number of blocks in each stage.
orders (List[int]): The order of gnConv in each stage.
dw_cfg (List[dict]): The config for the depthwise conv in each stage.
Defaults to ‘tiny’.
in_channels (int) – Number of input image channels. Defaults to 3.
drop_path_rate (float) – Stochastic depth rate. Defaults to 0.
scale (float) – Scaling parameter of the gflayer outputs. Defaults to 1/3.
use_layer_scale (bool) – Whether to use layer scale in HorNet blocks. Defaults to True.
out_indices (Sequence[int]) – Output from which stages. Defaults to (3, ).
frozen_stages (int) – Stages to be frozen (stop grad and set eval mode). -1 means not freezing any parameters. Defaults to -1.
with_cp (bool) – Whether to use gradient checkpointing. Using checkpointing saves some memory at the cost of slower training speed. Defaults to False.
gap_before_final_norm (bool) – Whether to globally average-pool the feature map before the final norm layer. In the official repo, this is only used for the classification task. Defaults to True.
init_cfg (dict, optional) – The config for initialization. Defaults to None.
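As a minimal sketch of a custom `arch` dict, the snippet below fills the four keys listed above with HorNet-tiny-like values and derives per-stage embedding dimensions from `base_dim`. The concrete values (`base_dim=64`, `depths=[2, 3, 18, 2]`, `orders=[2, 3, 4, 5]`, the `dw_cfg` entries, and the dimension-doubling rule `dim_i = base_dim * 2**i`) are illustrative assumptions, not taken verbatim from the mmpretrain source.

```python
# Hypothetical custom `arch` dict for HorNet; values are tiny-like assumptions.
custom_arch = {
    'base_dim': 64,                  # base embedding dimension (assumed)
    'depths': [2, 3, 18, 2],         # number of HorNet blocks per stage (assumed)
    'orders': [2, 3, 4, 5],          # gnConv interaction order per stage (assumed)
    'dw_cfg': [dict(type='DW', kernel_size=7)] * 4,  # hypothetical dw-conv configs
}

def stage_dims(base_dim, num_stages=4):
    """Per-stage embedding dims, assuming the dimension doubles each stage."""
    return [base_dim * 2 ** i for i in range(num_stages)]

print(stage_dims(custom_arch['base_dim']))  # → [64, 128, 256, 512]
```

With a dict like this, `HorNet(arch=custom_arch)` would select the custom configuration instead of a named preset such as 'tiny'.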