Test¶
For image classification and image retrieval tasks, you can test your model after training.
Test with your PC¶
You can use tools/test.py to test a model on a single machine with a CPU and optionally a GPU.
Here is the full usage of the script:
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [ARGS]
Note
By default, MMPretrain prefers GPU to CPU. If you want to test a model on the CPU, please empty CUDA_VISIBLE_DEVICES or set it to -1 to make the GPU invisible to the program.
CUDA_VISIBLE_DEVICES=-1 python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [ARGS]
| ARGS | Description |
| --- | --- |
| CONFIG_FILE | The path to the config file. |
| CHECKPOINT_FILE | The path to the checkpoint file (it can be an http link, and you can find checkpoints in the model zoo). |
| --work-dir WORK_DIR | The directory to save the file containing evaluation metrics. |
| --out OUT | The path to save the file containing test results. |
| --out-item OUT_ITEM | To specify the content of the test results file, which can be "pred" or "metrics". If "pred", save the outputs of the model for offline evaluation. If "metrics", save the evaluation metrics. Defaults to "pred". |
| --cfg-options CFG_OPTIONS | Override some settings in the used config; the key-value pairs in xxx=yyy format will be merged into the config file. If the value to be overwritten is a list, it should be of the form of either key="[a,b]" or key=a,b. Note that no white space is allowed in the value. |
| --show-dir SHOW_DIR | The directory to save the result visualization images. |
| --show | Visualize the prediction result in a window. |
| --interval INTERVAL | The interval of samples to visualize. |
| --wait-time WAIT_TIME | The display time of every window (in seconds). Defaults to 1. |
| --no-pin-memory | Whether to disable the pin_memory option in dataloaders. |
| --tta | Whether to enable Test-Time Augmentation (TTA). If the config file has tta_pipeline and tta_model fields, use them to determine the TTA transforms and how to merge the TTA results; otherwise, apply flip TTA by averaging the classification scores. |
| --launcher {none,pytorch,slurm,mpi} | Options for job launcher. |
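For example, the command below sketches a typical single-machine test run; the config path, checkpoint path, and output locations are placeholders to adapt to your own setup:

python tools/test.py configs/resnet/resnet50_8xb32_in1k.py checkpoints/resnet50_8xb32_in1k.pth --work-dir work_dirs/test_resnet50 --out results.pkl --out-item pred

This saves the evaluation metrics under work_dirs/test_resnet50 and dumps the raw predictions to results.pkl for offline evaluation.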
Test with multiple GPUs¶
We provide a shell script to start a multi-GPU test task with torch.distributed.launch.
bash ./tools/dist_test.sh ${CONFIG_FILE} ${CHECKPOINT_FILE} ${GPU_NUM} [PY_ARGS]
| ARGS | Description |
| --- | --- |
| CONFIG_FILE | The path to the config file. |
| CHECKPOINT_FILE | The path to the checkpoint file (it can be an http link, and you can find checkpoints in the model zoo). |
| GPU_NUM | The number of GPUs to be used. |
| [PY_ARGS] | The other optional arguments of tools/test.py, see above. |
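For example, a sketch of an 8-GPU test run on a single machine (the config and checkpoint paths are placeholders):

bash ./tools/dist_test.sh configs/resnet/resnet50_8xb32_in1k.py checkpoints/resnet50_8xb32_in1k.pth 8 --work-dir work_dirs/test_resnet50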
You can also specify extra arguments of the launcher by environment variables. For example, change the communication port of the launcher to 29666 with the command below:
PORT=29666 bash ./tools/dist_test.sh ${CONFIG_FILE} ${CHECKPOINT_FILE} ${GPU_NUM} [PY_ARGS]
If you want to start multiple test jobs and use different GPUs, you can launch them by specifying different ports and visible devices.
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=29500 bash ./tools/dist_test.sh ${CONFIG_FILE1} ${CHECKPOINT_FILE} 4 [PY_ARGS]
CUDA_VISIBLE_DEVICES=4,5,6,7 PORT=29501 bash ./tools/dist_test.sh ${CONFIG_FILE2} ${CHECKPOINT_FILE} 4 [PY_ARGS]
Test with multiple machines¶
Multiple machines in the same network¶
If you launch a test job with multiple machines connected via Ethernet, you can run the following commands:
On the first machine:
NNODES=2 NODE_RANK=0 PORT=$MASTER_PORT MASTER_ADDR=$MASTER_ADDR bash tools/dist_test.sh $CONFIG $CHECKPOINT_FILE $GPUS
On the second machine:
NNODES=2 NODE_RANK=1 PORT=$MASTER_PORT MASTER_ADDR=$MASTER_ADDR bash tools/dist_test.sh $CONFIG $CHECKPOINT_FILE $GPUS
Compared with multi-GPU testing on a single machine, you need to specify some extra environment variables:
| ENV_VARS | Description |
| --- | --- |
| NNODES | The total number of machines. |
| NODE_RANK | The index of the local machine. |
| PORT | The communication port; it should be the same on all machines. |
| MASTER_ADDR | The IP address of the master machine; it should be the same on all machines. |
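For example, assuming two machines with 8 GPUs each and a hypothetical master IP address of 10.1.1.1, the commands would be:

On the first machine:

NNODES=2 NODE_RANK=0 PORT=29500 MASTER_ADDR=10.1.1.1 bash tools/dist_test.sh configs/resnet/resnet50_8xb32_in1k.py checkpoints/resnet50_8xb32_in1k.pth 8

On the second machine:

NNODES=2 NODE_RANK=1 PORT=29500 MASTER_ADDR=10.1.1.1 bash tools/dist_test.sh configs/resnet/resnet50_8xb32_in1k.py checkpoints/resnet50_8xb32_in1k.pth 8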
Testing is usually slow if you do not have high-speed networking like InfiniBand.
Multiple machines managed with slurm¶
If you run MMPretrain on a cluster managed with slurm, you can use the script tools/slurm_test.sh.
[ENV_VARS] ./tools/slurm_test.sh ${PARTITION} ${JOB_NAME} ${CONFIG_FILE} ${CHECKPOINT_FILE} [PY_ARGS]
Here is a description of the script's arguments.
| ARGS | Description |
| --- | --- |
| PARTITION | The partition to use in your cluster. |
| JOB_NAME | The name of your job; you can name it as you like. |
| CONFIG_FILE | The path to the config file. |
| CHECKPOINT_FILE | The path to the checkpoint file (it can be an http link, and you can find checkpoints in the model zoo). |
| [PY_ARGS] | The other optional arguments of tools/test.py, see above. |
Here are the environment variables that can be used to configure the slurm job.
| ENV_VARS | Description |
| --- | --- |
| GPUS | The number of GPUs to be used. Defaults to 8. |
| GPUS_PER_NODE | The number of GPUs to be allocated per node. |
| CPUS_PER_TASK | The number of CPUs to be allocated per task (usually one GPU corresponds to one task). Defaults to 5. |
| SRUN_ARGS | The other arguments of srun. |
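For example, a hypothetical job that tests on 16 GPUs across 2 nodes in a partition named dev (the partition, job name, and paths are placeholders):

GPUS=16 GPUS_PER_NODE=8 ./tools/slurm_test.sh dev test_resnet50 configs/resnet/resnet50_8xb32_in1k.py checkpoints/resnet50_8xb32_in1k.pth --work-dir work_dirs/test_resnet50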