EfficientFormerClsHead

class mmpretrain.models.heads.EfficientFormerClsHead(num_classes, in_channels, distillation=True, init_cfg={'layer': 'Linear', 'std': 0.01, 'type': 'Normal'}, *args, **kwargs)[source]

EfficientFormer classifier head.

Parameters:
  • num_classes (int) – Number of categories excluding the background category.

  • in_channels (int) – Number of channels in the input feature map.

  • distillation (bool) – Whether to use an additional distillation head. Defaults to True.

  • init_cfg (dict) – The extra initialization configs. Defaults to dict(type='Normal', layer='Linear', std=0.01).
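
The snippet below is a minimal usage sketch, not part of the reference itself: it builds the head directly and calls it on a fake feature tuple. The channel, batch, and class sizes are illustrative, and the backbone is assumed to already provide a flattened (num_samples, in_channels) feature.

    import torch
    from mmpretrain.models.heads import EfficientFormerClsHead

    # Build the head directly; all sizes below are illustrative only.
    head = EfficientFormerClsHead(num_classes=1000, in_channels=448, distillation=True)

    # feats is a tuple of stage features; a single flattened
    # (num_samples, in_channels) tensor stands in for the backbone output.
    feats = (torch.rand(4, 448),)

    cls_score = head(feats)  # calls forward() and returns classification logits
    print(cls_score.shape)   # expected: torch.Size([4, 1000])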

forward(feats)[source]

The forward process.

loss(feats, data_samples, **kwargs)[source]

Calculate losses from the classification score.

Parameters:
  • feats (tuple[Tensor]) – The features extracted from the backbone. Multiple stage inputs are acceptable, but only the last stage will be used for classification. The shape of every item should be (num_samples, in_channels).

  • data_samples (List[DataSample]) – The annotation data of every sample.

  • **kwargs – Other keyword arguments to forward the loss module.

Returns:

A dictionary of loss components.

Return type:

dict[str, Tensor]
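
The following sketch shows a direct call to loss(), assuming the default cross-entropy loss of the base classification head and using DataSample.set_gt_label to attach ground-truth labels; all sizes are illustrative.

    import torch
    from mmpretrain.models.heads import EfficientFormerClsHead
    from mmpretrain.structures import DataSample

    head = EfficientFormerClsHead(num_classes=10, in_channels=64)

    # One flattened stage feature for two samples, plus their labels.
    feats = (torch.rand(2, 64),)
    data_samples = [DataSample().set_gt_label(label) for label in (1, 7)]

    losses = head.loss(feats, data_samples)
    print(losses)  # a dict[str, Tensor], e.g. with a 'loss' entry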

pre_logits(feats)[source]

The process before the final classification head.

The input feats is a tuple of tensors, and each tensor is the feature of a backbone stage. In EfficientFormerClsHead, only the feature of the last stage is used.
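
As a quick illustration of the behaviour described above, the sketch below passes two fake stage features to pre_logits; only the shape of the result is checked, since the method is documented to return the feature of the last stage.

    import torch
    from mmpretrain.models.heads import EfficientFormerClsHead

    head = EfficientFormerClsHead(num_classes=10, in_channels=64)

    # Two fake stage features; pre_logits keeps only the last one.
    stage1 = torch.rand(2, 32)
    stage2 = torch.rand(2, 64)

    out = head.pre_logits((stage1, stage2))
    assert out.shape == stage2.shape  # the last-stage feature is selected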
