class mmpretrain.models.selfsup.BaseSelfSupervisor(backbone, neck=None, head=None, target_generator=None, pretrained=None, data_preprocessor=None, init_cfg=None)[source]

BaseModel for Self-Supervised Learning.

All self-supervised algorithms should inherit this module.

  • backbone (dict) – The backbone module. See mmpretrain.models.backbones.

  • neck (dict, optional) – The neck module to process features from backbone. See mmpretrain.models.necks. Defaults to None.

  • head (dict, optional) – The head module to do prediction and calculate loss from processed features. See mmpretrain.models.heads. Notice that if the head is not set, almost all methods cannot be used except extract_feat(). Defaults to None.

  • target_generator (dict, optional) – The target_generator module to generate targets for self-supervised learning optimization, such as HOG, or features extracted from other modules (DALL-E, CLIP), etc. Defaults to None.

  • pretrained (str, optional) – The pretrained checkpoint path, support local path and remote path. Defaults to None.

  • data_preprocessor (Union[dict, nn.Module], optional) – The config for preprocessing input data. If None or no specified type, it will use “SelfSupDataPreprocessor” as type. See SelfSupDataPreprocessor for more details. Defaults to None.

  • init_cfg (dict, optional) – the config to control the initialization. Defaults to None.
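In config files these constructor arguments are typically expressed as nested dicts. A hedged sketch of the shape (the type names and hyper-parameter values below are illustrative placeholders, not a tested configuration; consult the configs shipped with mmpretrain for working values):

```python
# Sketch of a self-supervised model config. The concrete type names and
# hyper-parameters are illustrative; only the field layout mirrors the
# BaseSelfSupervisor constructor.
model = dict(
    type='SimCLR',  # a BaseSelfSupervisor subclass registered in mmpretrain
    backbone=dict(type='ResNet', depth=50),              # required
    neck=dict(type='NonLinearNeck', in_channels=2048,
              hid_channels=2048, out_channels=128),      # optional
    head=dict(type='ContrastiveHead', temperature=0.1),  # optional
    target_generator=None,   # e.g. a HOG or CLIP feature generator
    data_preprocessor=None,  # None -> falls back to 'SelfSupDataPreprocessor'
)
```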


extract_feat(inputs)[source]

Extract features from the input tensor with shape (N, C, …).

The default behavior is extracting features from backbone.


inputs (Tensor) – A batch of inputs. The shape of it should be (num_samples, num_channels, *img_shape).


Returns:

The output feature tensor(s).

Return type:

tuple | Tensor

forward(inputs, data_samples=None, mode='tensor')[source]

The unified entry for a forward process in both training and test.

The method currently accepts two modes: “tensor” and “loss”:

  • “tensor”: Forward the backbone network and return the feature tensor(s) without any post-processing, same as a common PyTorch Module.

  • “loss”: Forward and return a dict of losses according to the given inputs and data samples.

  • inputs (torch.Tensor or List[torch.Tensor]) – The input tensor with shape (N, C, …) in general.

  • data_samples (List[DataSample], optional) – The other data of every sample. It’s required by some algorithms if mode="loss". Defaults to None.

  • mode (str) – The kind of value to return, either “tensor” or “loss”. Defaults to ‘tensor’.


Returns:

The return type depends on mode.

  • If mode="tensor", return a tensor or a tuple of tensors.

  • If mode="loss", return a dict of tensors.
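The dispatch on mode can be pictured with a minimal stand-in (plain Python, no mmpretrain dependency; ToyModel and its helper names are invented for illustration and are not the real implementation):

```python
# Minimal stand-in for the mode dispatch in forward().
class ToyModel:
    def _backbone(self, inputs):
        # pretend feature extraction: wrap the input in a tuple,
        # mirroring the tuple-of-tensors return of real backbones
        return (inputs,)

    def _loss(self, inputs, data_samples):
        # pretend loss computation: the real API returns a dict of
        # named loss tensors
        return {'loss': float(inputs) * 0.5}

    def forward(self, inputs, data_samples=None, mode='tensor'):
        if mode == 'tensor':
            return self._backbone(inputs)   # feature(s), no post-processing
        elif mode == 'loss':
            return self._loss(inputs, data_samples)
        raise RuntimeError(f'Invalid mode "{mode}".')

m = ToyModel()
feats = m.forward(2.0, mode='tensor')   # -> (2.0,)
losses = m.forward(2.0, mode='loss')    # -> {'loss': 1.0}
```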


get_layer_depth(param_name)[source]

Get the layer-wise depth of a parameter.


param_name (str) – The name of the parameter.


Returns:

The layer-wise depth and the max depth.

Return type:

Tuple[int, int]
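This mapping is typically consumed by layer-wise learning-rate decay schedules. A toy sketch of the kind of (depth, max_depth) result it produces, assuming a ViT-style parameter layout (the function and layout here are invented for illustration; the real logic lives in the respective backbone):

```python
def toy_layer_depth(param_name, num_layers=12):
    """Illustrative mapping from a parameter name to (depth, max_depth).

    Embedding-level params get depth 0, transformer block ``i`` gets
    depth ``i + 1``, and everything after the blocks gets the maximum
    depth minus one.
    """
    max_depth = num_layers + 2
    if param_name.startswith(('backbone.patch_embed', 'backbone.pos_embed')):
        return 0, max_depth
    if param_name.startswith('backbone.layers.'):
        block_idx = int(param_name.split('.')[2])
        return block_idx + 1, max_depth
    return max_depth - 1, max_depth

toy_layer_depth('backbone.layers.3.attn.qkv.weight')  # -> (4, 14)
```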

abstract loss(inputs, data_samples)[source]

Calculate losses from a batch of inputs and data samples.

This is an abstract method, and subclasses must override it.

  • inputs (torch.Tensor) – The input tensor with shape (N, C, …) in general.

  • data_samples (List[DataSample]) – The annotation data of every sample.


Returns:

A dictionary of loss components.

Return type:

dict[str, Tensor]
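A schematic of what a subclass override looks like, with plain floats standing in for torch.Tensor values (SimpleSelfSup and its pipeline are invented for illustration, not an actual mmpretrain algorithm):

```python
class SimpleSelfSup:
    """Schematic BaseSelfSupervisor subclass; floats stand in for tensors."""

    def _forward_pipeline(self, inputs):
        # In a real model: backbone -> neck -> head feature processing.
        return inputs

    def loss(self, inputs, data_samples):
        feats = self._forward_pipeline(inputs)
        # Return named loss components; in MMEngine, values whose key
        # contains 'loss' are summed into the total training loss.
        return {'loss': abs(feats - 1.0)}
```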

property with_head

Check if the model has a head module.

property with_neck

Check if the model has a neck module.

property with_target_generator

Check if the model has a target_generator module.
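These checks amount to asking whether the corresponding optional module was configured; the pattern is essentially the following (a sketch with invented names, not the actual implementation):

```python
class ToySupervisor:
    """Sketch of the with_* property pattern for optional sub-modules."""

    def __init__(self, neck=None, head=None, target_generator=None):
        self.neck = neck
        self.head = head
        self.target_generator = target_generator

    @property
    def with_neck(self):
        return self.neck is not None

    @property
    def with_head(self):
        return self.head is not None

    @property
    def with_target_generator(self):
        return self.target_generator is not None
```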
