MoCo
- class mmpretrain.models.selfsup.MoCo(backbone, neck, head, queue_len=65536, feat_dim=128, momentum=0.001, pretrained=None, data_preprocessor=None, init_cfg=None)[source]
MoCo.
Implementation of Momentum Contrast for Unsupervised Visual Representation Learning. Part of the code is borrowed from: https://github.com/facebookresearch/moco/blob/master/moco/builder.py. An illustrative configuration sketch follows the parameter list below.
- Parameters:
backbone (dict) – Config dict for the backbone module.
neck (dict) – Config dict for the neck module that maps deep features to compact feature vectors.
head (dict) – Config dict for the head module that computes the loss.
queue_len (int) – Number of negative keys maintained in the queue. Defaults to 65536.
feat_dim (int) – Dimension of compact feature vectors. Defaults to 128.
momentum (float) – Momentum coefficient for the momentum-updated encoder. Defaults to 0.001.
pretrained (str, optional) – The pretrained checkpoint path, support local path and remote path. Defaults to None.
data_preprocessor (dict, optional) – The config for preprocessing input data. If None or no specified type, it will use “SelfSupDataPreprocessor” as type. See SelfSupDataPreprocessor for more details. Defaults to None.
init_cfg (Union[List[dict], dict], optional) – Config dict for weight initialization. Defaults to None.
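The sketch below shows one way these parameters are typically filled in. It assumes the MoCo v2 style components that ship with mmpretrain (ResNet, MoCoV2Neck, ContrastiveHead); the component names and field values here are illustrative assumptions and may differ between versions, so treat this as a sketch rather than a canonical config.

```python
# Hedged sketch: the backbone/neck/head types and their fields are assumptions
# based on a typical MoCo v2 setup, not taken from this page.
model = dict(
    type='MoCo',
    queue_len=65536,          # number of negative keys kept in the queue
    feat_dim=128,             # dimension of the compact feature vectors
    momentum=0.001,           # momentum coefficient for the key encoder
    backbone=dict(
        type='ResNet',
        depth=50,
        norm_cfg=dict(type='BN')),
    neck=dict(
        type='MoCoV2Neck',    # MLP projection to the compact feature space
        in_channels=2048,
        hid_channels=2048,
        out_channels=128,
        with_avg_pool=True),
    head=dict(
        type='ContrastiveHead',
        loss=dict(type='CrossEntropyLoss'),
        temperature=0.2))
```

Because the queue is updated one batch of keys at a time, queue_len is normally chosen to be divisible by the total training batch size.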
- loss(inputs, data_samples, **kwargs)[source]
The forward function in training (see the usage sketch below).
- Parameters:
inputs (List[torch.Tensor]) – The input images.
data_samples (List[DataSample]) – All elements required during the forward function.
- Returns:
A dictionary of loss components.
- Return type:
Dict[str, torch.Tensor]
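Below is a hedged usage sketch of loss(). It builds a small MoCo model from a config like the one above (the component names and sizes are assumptions) and calls loss() with two augmented views of the same batch, as a training loop would. A CUDA device is assumed here, since the batch-shuffle step in the implementation typically runs on GPU.

```python
# Hedged sketch: config values and tensor shapes are assumptions for
# illustration; only the loss(inputs, data_samples) call mirrors this page.
import torch
from mmpretrain.registry import MODELS
from mmpretrain.structures import DataSample

cfg = dict(
    type='MoCo',
    queue_len=4096,
    feat_dim=128,
    momentum=0.001,
    backbone=dict(type='ResNet', depth=18),
    neck=dict(type='MoCoV2Neck', in_channels=512, hid_channels=512,
              out_channels=128, with_avg_pool=True),
    head=dict(type='ContrastiveHead',
              loss=dict(type='CrossEntropyLoss'), temperature=0.2))

model = MODELS.build(cfg).cuda()

batch = 8  # the queue length should be divisible by the batch size
inputs = [
    torch.randn(batch, 3, 224, 224).cuda(),  # query view
    torch.randn(batch, 3, 224, 224).cuda(),  # key view, fed to the momentum encoder
]
data_samples = [DataSample() for _ in range(batch)]

losses = model.loss(inputs, data_samples)
print(losses)  # dictionary of loss components, e.g. {'loss': tensor(...)}
```

In an actual run the two views would come from the data pipeline's augmentations rather than random tensors, and the returned dictionary is summed and logged by the training loop.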