CAE
- class mmpretrain.models.selfsup.CAE(backbone, neck, head, target_generator=None, base_momentum=0.0, data_preprocessor=None, init_cfg=None)
CAE.
Implementation of Context Autoencoder for Self-Supervised Representation Learning.
- Parameters:
backbone (dict) – Config dict for module of backbone.
neck (dict) – Config dict for module of neck.
head (dict) – Config dict for module of head functions.
target_generator (dict, optional) – The target_generator module to generate targets for self-supervised learning optimization, such as HOG, or features extracted from other modules (DALL-E, CLIP), etc. Defaults to None.
base_momentum (float) – The base momentum coefficient for the target network. Defaults to 0.0.
data_preprocessor (dict, optional) – The config for preprocessing input data. If None or the type is not specified, “SelfSupDataPreprocessor” is used as the type. See
SelfSupDataPreprocessor
for more details. Defaults to None.
init_cfg (Union[List[dict], dict], optional) – Config dict for weight initialization. Defaults to None.
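A CAE model is normally assembled from a config dict through the MODELS registry. The sketch below is only a rough illustration: the component types (CAEPretrainViT, CAENeck, CAEHead, CAELoss, DALL-E) and their arguments are assumptions based on the CAE pre-training configs shipped in the mmpretrain repository and may differ between versions; the reference setup lives under configs/cae/.

```python
from mmpretrain.registry import MODELS

# Rough sketch of a CAE model config; the component types and arguments
# are assumptions modelled on the configs under configs/cae/ and may need
# adjustment for your mmpretrain version.
model_cfg = dict(
    type='CAE',
    backbone=dict(type='CAEPretrainViT', arch='b', patch_size=16),
    neck=dict(type='CAENeck'),  # defaults assumed to match the ViT-base backbone
    head=dict(
        type='CAEHead',
        loss=dict(type='CAELoss', lambda_weight=2, loss_weight=1)),
    # DALL-E encoder as the target generator; in practice its init_cfg
    # usually points to a pretrained checkpoint.
    target_generator=dict(type='DALL-E'),
    base_momentum=0.0,
    data_preprocessor=dict(
        type='SelfSupDataPreprocessor',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        to_rgb=True))

model = MODELS.build(model_cfg)
```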
- loss(inputs, data_samples, **kwargs)
The forward function during training.
- Parameters:
inputs (List[torch.Tensor]) – The input images.
data_samples (List[DataSample]) – All elements required during the forward function.
- Returns:
A dictionary of loss components.
- Return type:
Dict[str, torch.Tensor]
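Below is a minimal sketch of the arguments this method consumes, assuming the usual self-supervised pipeline in which each DataSample carries a boolean patch mask (a 14×14 patch grid for 224-pixel inputs with 16-pixel patches is assumed here). The number and resolution of image views in inputs depend on the data pipeline, so this only illustrates the data structures, not a real pre-training step.

```python
import torch
from mmpretrain.structures import DataSample

# Hedged sketch of the inputs that loss() consumes. Shapes and the number
# of image views are assumptions; the real values come from the data
# pipeline and the data_preprocessor.
batch_size, num_patches = 2, 14 * 14
inputs = [torch.randn(batch_size, 3, 224, 224)]  # list of batched image views

data_samples = []
for _ in range(batch_size):
    sample = DataSample()
    # Boolean mask marking which patches are hidden from the encoder and
    # must be reconstructed; its shape follows the mask generator used.
    sample.mask = torch.zeros(num_patches, dtype=torch.bool)
    data_samples.append(sample)

# With a built CAE model (see the configuration sketch above):
# losses = model.loss(inputs, data_samples)  # Dict[str, torch.Tensor]
```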