Inference with existing models

This tutorial shows how to use the following APIs:

- `list_models`: List available models in MMPreTrain.
- `get_model`: Get a model by name.
- `inference_model`: Run inference on an image with a one-off API.
- Inferencers (e.g. `ImageClassificationInferencer`): Run inference repeatedly or on batches.
- `FeatureExtractor`: Extract features from image files.

List available models

List all the models in MMPreTrain:

>>> from mmpretrain import list_models
>>> list_models()

`list_models` supports Unix filename pattern matching; you can use `*` to match any characters.

>>> from mmpretrain import list_models
>>> list_models("*convnext-b*21k")
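The pattern semantics are those of Unix filename matching, as implemented by Python's standard `fnmatch` module. A minimal standalone illustration (the model names below are just examples from this tutorial, not a live query):

```python
from fnmatch import fnmatch

# A couple of model names of the kind list_models() returns.
names = [
    "convnext-base_in21k-pre_3rdparty_in1k",
    "resnet50_8xb32_in1k",
]

# '*' matches any run of characters, as in shell globbing.
matches = [n for n in names if fnmatch(n, "*convnext-b*")]
print(matches)  # ['convnext-base_in21k-pre_3rdparty_in1k']
```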

You can use the `list_models` method of inferencers to get the available models for the corresponding tasks.

>>> from mmpretrain import ImageCaptionInferencer
>>> ImageCaptionInferencer.list_models()

Get a model

You can use `get_model` to get a model.

>>> from mmpretrain import get_model

# Get model without loading pre-trained weight.
>>> model = get_model("convnext-base_in21k-pre_3rdparty_in1k")

# Get model and load the default checkpoint.
>>> model = get_model("convnext-base_in21k-pre_3rdparty_in1k", pretrained=True)

# Get model and load the specified checkpoint.
>>> model = get_model("convnext-base_in21k-pre_3rdparty_in1k", pretrained="your_local_checkpoint_path")

# Get model with extra initialization arguments, for example, modify the num_classes in head.
>>> model = get_model("convnext-base_in21k-pre_3rdparty_in1k", head=dict(num_classes=10))

# Another example, remove the neck and head, and output from stage 1, 2, 3 in backbone
>>> model_headless = get_model("resnet18_8xb32_in1k", head=None, neck=None, backbone=dict(out_indices=(1, 2, 3)))

The obtained model is an ordinary PyTorch module.

>>> import torch
>>> from mmpretrain import get_model
>>> model = get_model('convnext-base_in21k-pre_3rdparty_in1k', pretrained=True)
>>> x = torch.rand((1, 3, 224, 224))
>>> y = model(x)
>>> print(type(y), y.shape)
<class 'torch.Tensor'> torch.Size([1, 1000])
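The forward pass above returns raw logits of shape `[1, 1000]`. If you want class probabilities, you can apply a softmax yourself. A minimal sketch, with a random tensor standing in for the real model output:

```python
import torch

# Stand-in for the model output above: a batch of 1000-class logits.
logits = torch.rand((1, 1000))

# Softmax over the class dimension turns logits into probabilities.
probs = torch.softmax(logits, dim=1)

# Each row is non-negative and sums to 1; the max gives the top prediction.
pred_score, pred_label = probs.max(dim=1)
```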

Inference on given images

Here is an example of running inference on an image with the ResNet-50 pre-trained classification model.

>>> from mmpretrain import inference_model
>>> image = ''
>>> # If you have no graphical interface, please set `show=False`
>>> result = inference_model('resnet50_8xb32_in1k', image, show=True)
>>> print(result['pred_class'])
sea snake

The `inference_model` API is only for demonstration; it cannot keep the model instance or run inference on multiple samples. You can use the inferencers for repeated calls.

>>> from mmpretrain import ImageClassificationInferencer
>>> image = ''
>>> inferencer = ImageClassificationInferencer('resnet50_8xb32_in1k')
>>> # Note that the inferencer output is a list of results even if the input is a single sample.
>>> result = inferencer('')[0]
>>> print(result['pred_class'])
sea snake
>>> # You can also use it for multiple images.
>>> image_list = ['demo/demo.JPEG', 'demo/bird.JPEG'] * 16
>>> results = inferencer(image_list, batch_size=8)
>>> print(len(results))
32
>>> print(results[1]['pred_class'])
house finch, linnet, Carpodacus mexicanus

Usually, the result for every sample is a dictionary. For example, the image classification result is a dictionary containing `pred_label`, `pred_score`, `pred_scores` and `pred_class`:

    {
        "pred_label": 65,
        "pred_score": 0.6649366617202759,
        "pred_class": "sea snake",
        "pred_scores": array([..., 0.6649366617202759, ...], dtype=float32)
    }
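Since `pred_scores` holds the score for every class, you can recover a top-k ranking yourself. A minimal sketch with a small synthetic score array standing in for the real one:

```python
import numpy as np

# Synthetic stand-in for result['pred_scores'].
pred_scores = np.array([0.1, 0.05, 0.66, 0.19], dtype=np.float32)

# Indices of the k highest scores, best first.
k = 2
topk_idx = np.argsort(pred_scores)[::-1][:k]
print(topk_idx.tolist())  # [2, 3]
```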

You can configure the inferencer with arguments; for example, use your own config file and checkpoint, and run inference on CUDA.

>>> from mmpretrain import ImageClassificationInferencer
>>> image = ''
>>> config = 'configs/resnet/'
>>> checkpoint = ''
>>> inferencer = ImageClassificationInferencer(model=config, pretrained=checkpoint, device='cuda')
>>> result = inferencer(image)[0]
>>> print(result['pred_class'])
sea snake

Inference by a Gradio demo

We also provide a Gradio demo for all supported tasks; you can find it in projects/gradio_demo/.

Please install Gradio with `pip install -U gradio` first.


Extract Features From Image

Compared with `model.extract_feat`, `FeatureExtractor` extracts features directly from image files instead of from a batch of tensors. In short, the input of `model.extract_feat` is `torch.Tensor`, while the input of `FeatureExtractor` is image files.

>>> from mmpretrain import FeatureExtractor, get_model
>>> model = get_model('resnet50_8xb32_in1k', backbone=dict(out_indices=(0, 1, 2, 3)))
>>> extractor = FeatureExtractor(model)
>>> features = extractor('')[0]
>>> features[0].shape, features[1].shape, features[2].shape, features[3].shape
(torch.Size([256]), torch.Size([512]), torch.Size([1024]), torch.Size([2048]))
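The extracted stage features are flat vectors, so they can be compared directly, for example with cosine similarity between two images' features. A sketch with random vectors standing in for real stage-4 features:

```python
import torch
import torch.nn.functional as F

# Stand-ins for two images' stage-4 feature vectors (shape [2048]).
feat_a = torch.rand(2048)
feat_b = torch.rand(2048)

# cosine_similarity works on batched inputs, so add a batch dimension.
sim = F.cosine_similarity(feat_a.unsqueeze(0), feat_b.unsqueeze(0)).item()
# sim lies in [-1, 1]; values near 1 mean very similar features.
```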