VQKD
- class mmpretrain.models.selfsup.VQKD(encoder_config, decoder_config=None, num_embed=8192, embed_dims=32, decay=0.99, beta=1.0, quantize_kmeans_init=True, init_cfg=None)[source]
Vector-Quantized Knowledge Distillation.
The module only contains the encoder and the VectorQuantizer part. Modified from https://github.com/microsoft/unilm/blob/master/beit2/modeling_vqkd.py
- Parameters:
encoder_config (dict) – The config of encoder.
decoder_config (dict, optional) – The config of decoder. Currently, VQKD only supports building the encoder. Defaults to None.
num_embed (int) – Number of embedding vectors in the codebook. Defaults to 8192.
embed_dims (int) – The dimension of embedding vectors in the codebook. Defaults to 32.
decay (float) – The decay parameter of EMA. Defaults to 0.99.
beta (float) – The multiplier for the VectorQuantizer loss. Defaults to 1.0.
quantize_kmeans_init (bool) – Whether to use k-means to initialize the VectorQuantizer. Defaults to True.
init_cfg (dict or List[dict], optional) – Initialization config dict. Defaults to None.
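The snippet below is a minimal usage sketch, not an official example: the encoder_config keys are assumptions modelled on the BEiT v2 configs shipped with mmpretrain, and the forward pass is assumed to return the discrete token indices used as distillation targets. Consult the BEiT v2 configs in mmpretrain for the exact encoder settings.

```python
# Minimal sketch: build a VQKD tokenizer through the MODELS registry and
# query discrete token ids for a batch of images. The encoder_config keys
# below are illustrative assumptions; see the BEiT v2 configs shipped with
# mmpretrain for the full, exact settings.
import torch
from mmpretrain.registry import MODELS

vqkd_cfg = dict(
    type='VQKD',
    encoder_config=dict(      # assumed ViT-style encoder settings
        arch='base',
        img_size=224,
        patch_size=16,
        out_type='featmap',   # keep the patch-token feature map
    ),
    num_embed=8192,           # codebook size
    embed_dims=32,            # dimension of each codebook vector
    decay=0.99,               # EMA decay for codebook updates
    beta=1.0,                 # weight of the quantizer (commitment) loss
    quantize_kmeans_init=True,
)

tokenizer = MODELS.build(vqkd_cfg)
tokenizer.eval()

images = torch.rand(2, 3, 224, 224)
with torch.no_grad():
    # Assumed to return the discrete token indices of the input patches,
    # which BEiT v2 style pre-training uses as distillation targets.
    token_ids = tokenizer(images)
print(token_ids.shape)
```

In a BEiT v2 style pre-training config, a dict like vqkd_cfg is typically passed as the target_generator of the self-supervised algorithm rather than built by hand as shown here.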