SparK

class mmpretrain.models.selfsup.SparK(backbone, neck, head, pretrained=None, data_preprocessor=None, input_size=224, downsample_raito=32, mask_ratio=0.6, enc_dec_norm_cfg={'type': 'SparseSyncBatchNorm2d'}, enc_dec_norm_dim=2048, init_cfg=None)[source]

Implementation of SparK, proposed in Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling.

Modified from https://github.com/keyu-tian/SparK/blob/main/pretrain/spark.py
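
In mmpretrain, the model is usually described by a config dict and built through the model registry rather than instantiated directly. The sketch below shows such a config; the backbone, neck, and head entries are illustrative assumptions for this page, not the reference SparK configuration shipped with mmpretrain.

    # A minimal config-style sketch for SparK pre-training. The backbone/neck/head
    # component names and fields are assumptions for illustration; consult the
    # shipped SparK configs for the exact settings.
    model = dict(
        type='SparK',
        input_size=224,
        downsample_raito=32,   # parameter name as spelled in the signature above
        mask_ratio=0.6,
        enc_dec_norm_cfg=dict(type='SparseSyncBatchNorm2d'),
        enc_dec_norm_dim=2048,
        backbone=dict(type='SparseResNet', depth=50),          # assumed backbone entry
        neck=dict(type='SparKLightDecoder', feature_dim=512),  # assumed decoder entry
        head=dict(type='SparKPretrainHead',                    # assumed head entry
                  loss=dict(type='PixelReconstructionLoss', criterion='L2')),
    )
    # The model would then typically be built via the registry, e.g.:
    # from mmpretrain.registry import MODELS
    # spark = MODELS.build(model)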

loss(inputs, data_samples, **kwargs)[source]

The forward function for training; it computes and returns the loss.

Parameters:
  • inputs (List[torch.Tensor]) – The input images.

  • data_samples (List[DataSample]) – All elements required during the forward computation.

Returns:

A dictionary of loss components.

Return type:

Dict[str, torch.Tensor]
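
A hedged usage sketch follows. It assumes model is a built SparK instance (for example, built from a config as sketched above) and that inputs carries a single view of the batch as its only list element; the DataSample objects carry no labels since SparK is self-supervised.

    # A minimal sketch of one training-style forward pass, assuming `model` is a
    # built SparK instance; the single-view layout of `inputs` is an assumption.
    import torch
    from mmpretrain.structures import DataSample

    batch = torch.randn(2, 3, 224, 224)             # two 224x224 RGB images
    inputs = [batch]                                 # List[torch.Tensor], one view per element
    data_samples = [DataSample() for _ in range(2)]  # no labels needed for pre-training

    losses = model.loss(inputs, data_samples)        # Dict[str, torch.Tensor]
    print({k: float(v) for k, v in losses.items()})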

mask(shape, device, generator=None)[source]

Mask generation.

Parameters:
  • shape (torch.Size) – The shape of the input images.

  • device (Union[torch.device, str]) – The device of the tensor.

  • generator (torch.Generator, optional) – Generator for random functions. Defaults to None.

Returns:

The generated mask.

Return type:

torch.Tensor
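
A short usage sketch, again assuming model is a built SparK instance with the default mask_ratio of 0.6; the exact shape and dtype of the returned mask are not asserted here.

    # A minimal sketch of mask generation for a batch of 224x224 images,
    # assuming `model` is a built SparK instance.
    import torch

    shape = torch.Size([2, 3, 224, 224])    # (B, C, H, W) of the input batch
    gen = torch.Generator().manual_seed(0)  # fixed seed for a reproducible mask
    active_mask = model.mask(shape, device='cpu', generator=gen)
    print(active_mask.shape)                # mask over the downsampled feature grid (assumed)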
