ImageCaptionInferencer
- class mmpretrain.apis.ImageCaptionInferencer(model, pretrained=True, device=None, device_map=None, offload_folder=None, **kwargs)
The inferencer for image caption.
- Parameters:
model (BaseModel | str | Config) – A model name, a path to the config file, or a BaseModel object. The available model names can be found by ImageCaptionInferencer.list_models() and you can also query them in the Model Zoo Summary.
pretrained (str, optional) – Path to the checkpoint. If None, it will try to find a pre-defined weight for the specified model (only works if model is a model name). Defaults to True.
device (str, optional) – Device to run inference on. If None, an available device will be used automatically. Defaults to None.
**kwargs – Other keyword arguments to initialize the model (only work if model is a model name).
Example
>>> from mmpretrain import ImageCaptionInferencer
>>> inferencer = ImageCaptionInferencer('blip-base_3rdparty_caption')
>>> inferencer('demo/cat-dog.png')[0]
{'pred_caption': 'a puppy and a cat sitting on a blanket'}
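Beyond the default construction above, a minimal sketch of listing the available caption models and selecting a device explicitly is shown below; the model name and device string are illustrative and depend on your installation.
>>> from mmpretrain import ImageCaptionInferencer
>>> # List the caption models registered in this mmpretrain installation.
>>> ImageCaptionInferencer.list_models()
>>> # Build the inferencer on an explicit device (model name and device are illustrative).
>>> inferencer = ImageCaptionInferencer('blip-base_3rdparty_caption', device='cuda:0')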
- __call__(images, return_datasamples=False, batch_size=1, **kwargs)
Call the inferencer.
- Parameters:
images (str | array | list) – The image path or array, or a list of images.
return_datasamples (bool) – Whether to return results as DataSample objects. Defaults to False.
batch_size (int) – Batch size. Defaults to 1.
resize (int, optional) – Resize the short edge of the image to the specified length before visualization. Defaults to None.
draw_score (bool) – Whether to draw the prediction scores of prediction categories. Defaults to True.
show (bool) – Whether to display the visualization result in a window. Defaults to False.
wait_time (float) – The display time (s). Defaults to 0, which means “forever”.
show_dir (str, optional) – If not None, save the visualization results in the specified directory. Defaults to None.
- Returns:
The inference results.
- Return type:
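As a hedged usage sketch of the options documented above, the call below captions two images in one batch and saves the drawn visualizations to a directory; the image paths and output directory are placeholders.
>>> from mmpretrain import ImageCaptionInferencer
>>> inferencer = ImageCaptionInferencer('blip-base_3rdparty_caption')
>>> # Caption two images at once and save the visualization results
>>> # (paths and output directory are placeholders).
>>> results = inferencer(['demo/cat-dog.png', 'demo/dog.jpg'],
...                      batch_size=2, show_dir='./caption_vis/')
>>> results[0]['pred_caption']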