CSPDarkNet

class mmpretrain.models.backbones.CSPDarkNet(depth, in_channels=3, out_indices=(4,), frozen_stages=-1, conv_cfg=None, norm_cfg={'eps': 1e-05, 'type': 'BN'}, act_cfg={'inplace': True, 'type': 'LeakyReLU'}, norm_eval=False, init_cfg={'a': 2.23606797749979, 'distribution': 'uniform', 'layer': 'Conv2d', 'mode': 'fan_in', 'nonlinearity': 'leaky_relu', 'type': 'Kaiming'})[source]

CSP-Darknet backbone used in YOLOv4.

Parameters:
  • depth (int) – Depth of CSP-Darknet. Default: 53.

  • in_channels (int) – Number of input image channels. Default: 3.

  • out_indices (Sequence[int]) – Output from which stages. Default: (4, ).

  • frozen_stages (int) – Stages to be frozen (stop grad and set eval mode). -1 means not freezing any parameters. Default: -1. See the configuration sketch after this parameter list.

  • conv_cfg (dict) – Config dict for convolution layer. Default: None.

  • norm_cfg (dict) – Dictionary to construct and config norm layer. Default: dict(type='BN', eps=1e-5).

  • act_cfg (dict) – Config dict for activation layer. Default: dict(type='LeakyReLU', inplace=True).

  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Default: False.

  • init_cfg (dict or list[dict], optional) – Initialization config dict. Defaults to a Kaiming initialization for Conv2d layers (see the class signature above).
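
As a rough illustration of the constructor options above, the following sketch freezes the shallowest stage, keeps norm statistics fixed, and spells out the norm/activation configs explicitly. It is a hypothetical configuration, not an official recipe, and the checkpoint path is only a placeholder.

>>> from mmpretrain.models import CSPDarkNet
>>> model = CSPDarkNet(
...     depth=53,
...     out_indices=(4, ),            # only return the last stage
...     frozen_stages=1,              # freeze the shallowest stage(s); -1 freezes nothing
...     norm_eval=True,               # keep BN running stats fixed during training
...     norm_cfg=dict(type='BN', eps=1e-5),
...     act_cfg=dict(type='LeakyReLU', inplace=True),
...     init_cfg=dict(type='Pretrained', checkpoint='path/to/checkpoint.pth'))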

Example

>>> from mmpretrain.models import CSPDarkNet
>>> import torch
>>> model = CSPDarkNet(depth=53, out_indices=(0, 1, 2, 3, 4))
>>> model.eval()
>>> inputs = torch.rand(1, 3, 416, 416)
>>> level_outputs = model(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
...
(1, 64, 208, 208)
(1, 128, 104, 104)
(1, 256, 52, 52)
(1, 512, 26, 26)
(1, 1024, 13, 13)
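
In an mmpretrain config file the backbone is typically referenced through the registry rather than instantiated directly. The snippet below is a minimal sketch, assuming the standard ImageClassifier, GlobalAveragePooling, and LinearClsHead components and the 1024-channel stage-4 output shown above; treat it as an illustration rather than an official config.

model = dict(
    type='ImageClassifier',
    backbone=dict(type='CSPDarkNet', depth=53, out_indices=(4, )),
    neck=dict(type='GlobalAveragePooling'),
    head=dict(
        type='LinearClsHead',
        num_classes=1000,        # assumption: ImageNet-1k style classification
        in_channels=1024,        # matches the stage-4 channel count above
        loss=dict(type='CrossEntropyLoss', loss_weight=1.0)))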