ImageClassificationInferencer

class mmpretrain.apis.ImageClassificationInferencer(model, pretrained=True, device=None, classes=None, **kwargs)[source]

The inferencer for image classification.

Parameters:
  • model (BaseModel | str | Config) – A model name or a path to the config file, or a BaseModel object. The model name can be found by ImageClassificationInferencer.list_models() and you can also query it in the Model Zoo Statistics page.

  • pretrained (str | bool, optional) – Path to the checkpoint, or whether to load the matched pre-defined weights. If no explicit checkpoint path is given, it will try to find a pre-defined weight for the model you specified (this only works if the model is a model name). Defaults to True.

  • device (str, optional) – Device to run inference. If None, the available device will be automatically used. Defaults to None.

  • **kwargs – Other keyword arguments used to initialize the model (this only works if the model is a model name).

Example

  1. Use a pre-trained model in MMPreTrain to inference an image.

    >>> from mmpretrain import ImageClassificationInferencer
    >>> inferencer = ImageClassificationInferencer('resnet50_8xb32_in1k')
    >>> inferencer('demo/demo.JPEG')
    [{'pred_scores': array([...]),
      'pred_label': 65,
      'pred_score': 0.6649367809295654,
      'pred_class': 'sea snake'}]
    
  2. Use a config file and checkpoint to inference multiple images on GPU, and save the visualization results in a folder.

    >>> from mmpretrain import ImageClassificationInferencer
    >>> inferencer = ImageClassificationInferencer(
            model='configs/resnet/resnet50_8xb32_in1k.py',
            pretrained='https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_8xb32_in1k_20210831-ea4938fc.pth',
            device='cuda')
    >>> inferencer(['demo/dog.jpg', 'demo/bird.JPEG'], show_dir="./visualize/")
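
  3. Pass your own class names through the classes argument when the checkpoint was trained on a custom dataset, so the pred_class field is filled from those names. This is a hedged sketch: the config file, checkpoint path and class names below are hypothetical placeholders, and the classes list should match the model's number of output classes.

    >>> from mmpretrain import ImageClassificationInferencer
    >>> inferencer = ImageClassificationInferencer(
            model='configs/resnet/my_two_class_finetune.py',  # hypothetical config
            pretrained='work_dirs/my_finetuned_model.pth',    # hypothetical checkpoint
            classes=['cat', 'dog'])                           # names used for 'pred_class'
    >>> results = inferencer('demo/dog.jpg')  # each result's 'pred_class' comes from the list above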
    
__call__(inputs, return_datasamples=False, batch_size=1, **kwargs)[source]

Call the inferencer.

Parameters:
  • inputs (str | array | list) – The image path or numpy array, or a list of them.

  • return_datasamples (bool) – Whether to return results as DataSample. Defaults to False.

  • batch_size (int) – Batch size. Defaults to 1.

  • resize (int, optional) – Resize the short edge of the image to the specified length before visualization. Defaults to None.

  • rescale_factor (float, optional) – Rescale the image by the rescale factor for visualization. This is helpful when the image is too large or too small for visualization. Defaults to None.

  • draw_score (bool) – Whether to draw the scores of the predicted categories. Defaults to True.

  • show (bool) – Whether to display the visualization result in a window. Defaults to False.

  • wait_time (float) – The display time (s). Defaults to 0, which means “forever”.

  • show_dir (str, optional) – If not None, save the visualization results in the specified directory. Defaults to None.

Returns:

The inference results.

Return type:

list
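
Example

  A minimal, illustrative call that exercises the visualization keyword arguments above; the rescale factor and the ./visualize/ output directory are arbitrary placeholders, not required values.

    >>> from mmpretrain import ImageClassificationInferencer
    >>> inferencer = ImageClassificationInferencer('resnet50_8xb32_in1k')
    >>> # Batch two images, shrink them for visualization and save the
    >>> # rendered predictions instead of opening a display window.
    >>> results = inferencer(
            ['demo/demo.JPEG', 'demo/bird.JPEG'],
            batch_size=2,
            rescale_factor=0.5,
            show=False,
            show_dir='./visualize/')
    >>> results[0]['pred_class']
    'sea snake'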

static list_models(pattern=None)[source]

List all available model names.

Parameters:

pattern (str | None) – A wildcard pattern to match model names.

Returns:

A list of model names.

Return type:

List[str]
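
Example

  An illustrative query; the exact names returned depend on the installed MMPreTrain version, so the output below is a sketch rather than an exhaustive listing.

    >>> from mmpretrain import ImageClassificationInferencer
    >>> # Match ImageNet-1k ResNet configs with a wildcard pattern.
    >>> ImageClassificationInferencer.list_models('resnet*in1k')
    ['resnet18_8xb32_in1k', 'resnet34_8xb32_in1k', 'resnet50_8xb32_in1k', ...]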