Mixup
- class mmpretrain.models.utils.batch_augments.Mixup(alpha)[source]
Mixup batch augmentation.
Mixup is a batch augmentation method that reduces the memorization of corrupt labels and increases robustness to adversarial examples. It is proposed in mixup: Beyond Empirical Risk Minimization
- Parameters:
alpha (float) – Parameter of the Beta distribution used to generate the mixing ratio. It should be a positive number. More details are in the note.
Note
The \(\alpha\) (alpha) determines a random distribution \(Beta(\alpha, \alpha)\). For each batch of data, we sample a mixing ratio (marked as \(\lambda\), lam) from this random distribution.
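As a minimal sketch of the recipe in this note (not mmpretrain's actual implementation; the helper name mixup_batch is hypothetical), the sampling and mixing step could look like this:

```python
import numpy as np
import torch

def mixup_batch(batch_inputs: torch.Tensor,
                batch_scores: torch.Tensor,
                alpha: float):
    """Hypothetical helper sketching the mixup recipe in the note above."""
    # Sample the mixing ratio lambda from Beta(alpha, alpha).
    lam = float(np.random.beta(alpha, alpha))
    # Pair every sample with a random partner from the same batch.
    index = torch.randperm(batch_inputs.size(0))
    # Take the convex combination of the inputs and of the one-hot scores.
    mixed_inputs = lam * batch_inputs + (1 - lam) * batch_inputs[index]
    mixed_scores = lam * batch_scores + (1 - lam) * batch_scores[index]
    return mixed_inputs, mixed_scores
```

Note that with a small \(\alpha\), \(Beta(\alpha, \alpha)\) concentrates mass near 0 and 1, so most mixed samples stay close to one of the two originals; with \(\alpha = 1\) the mixing ratio is uniform on [0, 1].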
- mix(batch_inputs, batch_scores)[source]
Mix the batch inputs and the batch of one-hot-format ground-truth scores.
- Parameters:
  - batch_inputs (Tensor) – A batch of image tensors in the shape of (N, C, H, W).
  - batch_scores (Tensor) – A batch of one-hot format labels in the shape of (N, num_classes).
- Returns:
The mixed inputs and labels.
- Return type:
Tuple[Tensor, Tensor]
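A brief usage sketch, assuming only the class signature and mix() documented above (the tensor shapes and num_classes=10 are illustrative):

```python
import torch
import torch.nn.functional as F
from mmpretrain.models.utils.batch_augments import Mixup

mixup = Mixup(alpha=1.0)  # Beta(1, 1): the mixing ratio is uniform on [0, 1]

images = torch.randn(8, 3, 224, 224)                # (N, C, H, W)
labels = torch.randint(0, 10, (8,))                 # integer class labels
scores = F.one_hot(labels, num_classes=10).float()  # (N, num_classes) one-hot

mixed_images, mixed_scores = mixup.mix(images, scores)
```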