CSPResNet
- class mmpretrain.models.backbones.CSPResNet(depth, in_channels=3, out_indices=(3,), frozen_stages=-1, deep_stem=False, conv_cfg=None, norm_cfg={'eps': 1e-05, 'type': 'BN'}, act_cfg={'inplace': True, 'type': 'LeakyReLU'}, norm_eval=False, init_cfg={'layer': 'Conv2d', 'type': 'Kaiming'})[source]
CSP-ResNet backbone.
- Parameters:
depth (int) – Depth of CSP-ResNet, e.g. 50. This argument has no default and must be given.
in_channels (int) – Number of input image channels. Default: 3.
out_indices (Sequence[int]) – Output from which stages. Default: (3, ).
frozen_stages (int) – Stages to be frozen (stop grad and set eval mode). -1 means not freezing any parameters. Default: -1.
deep_stem (bool) – Whether to use a deep stem block. Default: False.
conv_cfg (dict) – Config dict for the convolution layer. Default: None.
norm_cfg (dict) – Config dict to construct and configure the norm layer. Default: dict(type='BN', eps=1e-5).
act_cfg (dict) – Config dict for the activation layer. Default: dict(type='LeakyReLU', inplace=True).
norm_eval (bool) – Whether to set norm layers to eval mode, i.e. freeze running stats (mean and var). This affects only Batch Norm and its variants. Default: False. A short sketch combining these training-time options follows this list.
init_cfg (dict or list[dict], optional) – Initialization config dict. Default: dict(type='Kaiming', layer='Conv2d').
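The following is a minimal sketch, not part of the upstream docstring, showing how the training-time options above might be combined when fine-tuning: early stages are frozen and BatchNorm running statistics are kept fixed. Which stages frozen_stages=1 covers is an assumption based on the parameter description; the argument values otherwise mirror the signature defaults.
>>> # Sketch only: freeze early stages and BN statistics while fine-tuning.
>>> import torch
>>> from mmpretrain.models import CSPResNet
>>> backbone = CSPResNet(
...     depth=50,
...     out_indices=(3, ),
...     frozen_stages=1,                       # assumed: stop grad / eval mode up to stage 1
...     norm_cfg=dict(type='BN', eps=1e-5),    # default from the signature
...     act_cfg=dict(type='LeakyReLU', inplace=True),
...     norm_eval=True)                        # freeze BN running mean/var in train mode
>>> _ = backbone.train()
>>> feats = backbone(torch.rand(1, 3, 224, 224))   # tuple with one tensor per requested stage
>>> print(tuple(feats[-1].shape))
(1, 1024, 7, 7)
The expected shape follows from the stride-32, 1024-channel last stage seen in the example below.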
Example
>>> from mmpretrain.models import CSPResNet
>>> import torch
>>> model = CSPResNet(depth=50, out_indices=(0, 1, 2, 3))
>>> model.eval()
>>> inputs = torch.rand(1, 3, 416, 416)
>>> level_outputs = model(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
...
(1, 128, 104, 104)
(1, 256, 52, 52)
(1, 512, 26, 26)
(1, 1024, 13, 13)
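Beyond direct instantiation, MMPretrain models usually reference the backbone by its registered type name inside a config. The snippet below is an illustrative sketch only: ImageClassifier, GlobalAveragePooling, LinearClsHead and CrossEntropyLoss are standard MMPretrain components assumed here and are not documented on this page; the head values are placeholders, and only the backbone keys come from the signature above.
# Illustrative config sketch (assumed component names, placeholder head values).
model = dict(
    type='ImageClassifier',
    backbone=dict(
        type='CSPResNet',
        depth=50,
        out_indices=(3, )),
    neck=dict(type='GlobalAveragePooling'),
    head=dict(
        type='LinearClsHead',
        num_classes=1000,   # placeholder class count
        in_channels=1024,   # channels of the last stage, cf. the example output above
        loss=dict(type='CrossEntropyLoss')))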