
VQKD

class mmpretrain.models.selfsup.VQKD(encoder_config, decoder_config=None, num_embed=8192, embed_dims=32, decay=0.99, beta=1.0, quantize_kmeans_init=True, init_cfg=None)[source]

Vector-Quantized Knowledge Distillation.

The module only contains the encoder and the VectorQuantizer part. Modified from https://github.com/microsoft/unilm/blob/master/beit2/modeling_vqkd.py

Parameters:
  • encoder_config (dict) – The config of encoder.

  • decoder_config (dict, optional) – The config of decoder. Currently, VQKD only supports building the encoder. Defaults to None.

  • num_embed (int) – Number of embedding vectors in the codebook. Defaults to 8192.

  • embed_dims (int) – The dimension of embedding vectors in the codebook. Defaults to 32.

  • decay (float) – The decay parameter of EMA. Defaults to 0.99.

  • beta (float) – The multiplier for VectorQuantizer loss. Defaults to 1.0.

  • quantize_kmeans_init (bool) – Whether to use k-means to initialize the VectorQuantizer. Defaults to True.

  • init_cfg (dict or List[dict], optional) – Initialization config dict. Defaults to None.
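
A minimal construction sketch. The encoder_config keys below (arch, img_size, patch_size, out_type) are assumptions modelled on the BEiT v2 configs shipped with mmpretrain and may differ across versions; the codebook arguments are the defaults documented above.

    import torch
    from mmpretrain.models.selfsup import VQKD

    # Encoder settings modelled on mmpretrain's BEiT v2 configs; the exact
    # keys accepted depend on the backbone class used to build the encoder.
    encoder_config = dict(
        arch='base',          # ViT-Base encoder
        img_size=224,
        patch_size=16,
        out_type='featmap',   # keep the patch-token feature map
    )

    # An 8192-entry codebook of 32-d embedding vectors (the defaults above).
    tokenizer = VQKD(encoder_config=encoder_config, num_embed=8192, embed_dims=32)
    tokenizer.eval()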

encode(x)[source]

Encode the input images and return the corresponding results.
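The return values are not documented here; in the upstream unilm implementation this module is modified from, encode returns the quantized features, the codebook indices, and the vector-quantization loss. A hedged sketch under that assumption, continuing from the construction example above:

    # Assuming the upstream unilm return convention (an assumption, not
    # documented above): quantized features, codebook indices, VQ loss.
    imgs = torch.rand(2, 3, 224, 224)  # dummy batch of two RGB images
    with torch.no_grad():
        quantize, embed_ind, loss = tokenizer.encode(imgs)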

forward(x)[source]

The forward function.

Currently, it only supports getting tokens.

get_tokens(x)[source]

Get tokens for BEiT pre-training.
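
A usage sketch for tokenizing a batch, continuing from the examples above. The 'token' key and the output shape are assumptions carried over from the upstream unilm implementation; with patch_size=16 on 224x224 inputs, each image yields 14 x 14 = 196 token indices.

    with torch.no_grad():
        out = tokenizer.get_tokens(imgs)   # dict; 'token' key assumed from
                                           # the upstream unilm implementation
        tokens = out['token']              # assumed shape (2, 196), integer
                                           # codebook indices in [0, 8192)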
