Migration¶
We introduce some modifications in MMPretrain 1.x, and some of them are BC-breaking. To migrate your projects from MMClassification 0.x or MMSelfSup 0.x smoothly, please read this tutorial.
New dependencies¶
Warning
MMPretrain 1.x has new package dependencies, and a new environment should be created for MMPretrain 1.x even if you already have a well-rounded MMClassification 0.x or MMSelfSup 0.x environment. Please refer to the installation tutorial for the required package installation or install the packages manually.
- **MMEngine**: MMEngine is the core of the OpenMMLab 2.0 architecture, and we have split many components unrelated to computer vision from MMCV into MMEngine.
- **MMCV**: The computer vision package of OpenMMLab. This is not a new dependency, but it needs to be upgraded to version `2.0.0rc1` or above.
- **rich**: A terminal formatting package, which we use to enhance some outputs in the terminal.
- **einops**: Operators for Einstein notation.
General change of config¶
In this section, we introduce the general differences between the old versions (MMClassification 0.x and MMSelfSup 0.x) and MMPretrain 1.x.
Schedule settings¶
| MMCls or MMSelfSup 0.x | MMPretrain 1.x | Remark |
| --- | --- | --- |
| `optimizer_config` | / | It has been removed. |
| / | `optim_wrapper` | The `optim_wrapper` provides a common interface for updating parameters. |
| `lr_config` | `param_scheduler` | The `param_scheduler` is a list to set learning rate and/or other parameter schedules, which is more flexible. |
| `runner` | `train_cfg` | The loop setting (`EpochBasedTrainLoop`, `IterBasedTrainLoop`) in `train_cfg` controls the workflow of model training. |
Changes in `optimizer` and `optimizer_config`:

- Now we use the `optim_wrapper` field to specify all configurations related to the optimization process, and the `optimizer` has become a subfield of `optim_wrapper`.
- The `paramwise_cfg` field is also a subfield of `optim_wrapper`, instead of `optimizer`.
- The `optimizer_config` field has been removed, and all of its configurations have been moved to `optim_wrapper`.
- The `grad_clip` field has been renamed to `clip_grad`.
Original:

```python
optimizer = dict(
    type='AdamW',
    lr=0.0015,
    weight_decay=0.3,
    paramwise_cfg=dict(
        norm_decay_mult=0.0,
        bias_decay_mult=0.0,
    ))
optimizer_config = dict(grad_clip=dict(max_norm=1.0))
```
New:

```python
optim_wrapper = dict(
    optimizer=dict(type='AdamW', lr=0.0015, weight_decay=0.3),
    paramwise_cfg=dict(
        norm_decay_mult=0.0,
        bias_decay_mult=0.0,
    ),
    clip_grad=dict(max_norm=1.0),
)
```
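Beyond this one-to-one translation, `optim_wrapper` also exposes features that previously required custom hooks or code changes. A minimal sketch, assuming you want mixed-precision training and gradient accumulation (the values are illustrative):

```python
optim_wrapper = dict(
    # `AmpOptimWrapper` enables automatic mixed-precision training.
    type='AmpOptimWrapper',
    optimizer=dict(type='AdamW', lr=0.0015, weight_decay=0.3),
    # Accumulate gradients over 4 iterations before each parameter update.
    accumulative_counts=4,
    clip_grad=dict(max_norm=1.0),
)
```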
Changes in `lr_config`:

- The `lr_config` field has been removed and replaced by the new `param_scheduler`.
- The `warmup` related arguments have also been removed, since we use a combination of schedulers to implement this functionality.

The new scheduler combination mechanism is highly flexible and enables the design of various learning rate/momentum curves. For more details, see the parameter schedulers tutorial.
Original:

```python
lr_config = dict(
    policy='CosineAnnealing',
    min_lr=0,
    warmup='linear',
    warmup_iters=5,
    warmup_ratio=0.01,
    warmup_by_epoch=True)
```
New:

```python
param_scheduler = [
    # warmup
    dict(
        type='LinearLR',
        start_factor=0.01,
        by_epoch=True,
        end=5,
        # Update the learning rate after every iteration.
        convert_to_iter_based=True),
    # main learning rate scheduler
    dict(type='CosineAnnealingLR', by_epoch=True, begin=5),
]
```
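The same mechanism composes any pair of schedulers. For example, a sketch of step decay with linear warmup (the milestones are illustrative):

```python
param_scheduler = [
    # Linear warmup during the first 5 epochs.
    dict(type='LinearLR', start_factor=0.01, by_epoch=True, end=5,
         convert_to_iter_based=True),
    # Then decay the learning rate by 10x at epochs 30 and 60.
    dict(type='MultiStepLR', by_epoch=True, begin=5, milestones=[30, 60],
         gamma=0.1),
]
```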
Changes in `runner`:

Most of the configurations that were originally in the `runner` field have been moved to `train_cfg`, `val_cfg`, and `test_cfg`. These fields are used to configure the loops for training, validation, and testing.
Original:

```python
runner = dict(type='EpochBasedRunner', max_epochs=100)
```
New:

```python
# The `val_interval` is the original `evaluation.interval`.
train_cfg = dict(by_epoch=True, max_epochs=100, val_interval=1)
val_cfg = dict()   # Use the default validation loop.
test_cfg = dict()  # Use the default test loop.
```
In OpenMMLab 2.0, we introduced `Loop` to control the behaviors in training, validation and testing. As a result, the functionalities of `Runner` have also been changed. More details can be found in the MMEngine tutorials.
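For example, switching to iteration-based training only requires changing `train_cfg`; a minimal sketch (the numbers are illustrative):

```python
# Train by iteration instead of by epoch, validating every 500 iterations.
train_cfg = dict(by_epoch=False, max_iters=10000, val_interval=500)
```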
Runtime settings¶
Changes in `checkpoint_config` and `log_config`:

The `checkpoint_config` has been moved to `default_hooks.checkpoint`, and `log_config` has been moved to `default_hooks.logger`. Additionally, many hook settings that were previously included in the script code have been moved to the `default_hooks` field in the runtime configuration.
```python
default_hooks = dict(
    # record the time of every iteration.
    timer=dict(type='IterTimerHook'),
    # print log every 100 iterations.
    logger=dict(type='LoggerHook', interval=100),
    # enable the parameter scheduler.
    param_scheduler=dict(type='ParamSchedulerHook'),
    # save checkpoint per epoch, and automatically save the best checkpoint.
    checkpoint=dict(type='CheckpointHook', interval=1, save_best='auto'),
    # set sampler seed in distributed environment.
    sampler_seed=dict(type='DistSamplerSeedHook'),
    # validation results visualization, set True to enable it.
    visualization=dict(type='VisualizationHook', enable=False),
)
```
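To customize one of these hooks, override only the corresponding entry. For example, a sketch that limits the number of checkpoints kept on disk (the value is illustrative):

```python
default_hooks = dict(
    # Keep at most 3 checkpoints, and still track the best one automatically.
    checkpoint=dict(type='CheckpointHook', interval=1, max_keep_ckpts=3,
                    save_best='auto'),
)
```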
In OpenMMLab 2.0, we have split the original logger into logger and visualizer. The logger is used to record information, while the visualizer is used to display the logged information in different backends, such as the terminal, TensorBoard, and Wandb.
Original:

```python
log_config = dict(
    interval=100,
    hooks=[
        dict(type='TextLoggerHook'),
        dict(type='TensorboardLoggerHook'),
    ])
```
New:

```python
default_hooks = dict(
    ...
    logger=dict(type='LoggerHook', interval=100),
)
visualizer = dict(
    type='UniversalVisualizer',
    vis_backends=[dict(type='LocalVisBackend'), dict(type='TensorboardVisBackend')],
)
```
Changes in `load_from` and `resume_from`:

The `resume_from` field has been removed. We use `resume` and `load_from` instead:

- If `resume=True` and `load_from` is not None, resume training from the checkpoint in `load_from`.
- If `resume=True` and `load_from` is None, try to resume from the latest checkpoint in the work directory.
- If `resume=False` and `load_from` is not None, only load the checkpoint, without resuming training.
- If `resume=False` and `load_from` is None, do not load nor resume.
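In config form, the two common cases look like this minimal sketch (the checkpoint path is a placeholder):

```python
# Case 1: automatically resume from the latest checkpoint in the work directory.
resume = True
load_from = None

# Case 2: only load pretrained weights, without resuming optimizer state or epoch.
# resume = False
# load_from = 'path/to/checkpoint.pth'  # placeholder path
```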
Changes in `dist_params`: The `dist_params` field has become a subfield of `env_cfg` now. Additionally, some new configurations have been added to `env_cfg`.
```python
env_cfg = dict(
    # whether to enable cudnn benchmark
    cudnn_benchmark=False,
    # set multi-process parameters
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
    # set distributed parameters
    dist_cfg=dict(backend='nccl'),
)
```
Changes in `workflow`: `workflow` related functionalities are removed.
New field `visualizer`: The visualizer is a new design in the OpenMMLab 2.0 architecture. The runner uses a visualizer instance to handle result and log visualization, as well as to save them to different backends. For more information, please refer to the MMEngine tutorial.
```python
visualizer = dict(
    type='UniversalVisualizer',
    vis_backends=[
        dict(type='LocalVisBackend'),
        # Uncomment the line below to save logs and visualization results to TensorBoard.
        # dict(type='TensorboardVisBackend')
    ]
)
```
New field `default_scope`: The starting point when searching modules for all registries. The `default_scope` in MMPretrain is `mmpretrain`. See the registry tutorial for more details.
Other changes¶
We moved the definitions of all registries in different packages to the `mmpretrain.registry` package.
Migration from MMClassification 0.x¶
Config files¶
In MMPretrain 1.x, we refactored the structure of the configuration files, and the original config files are not usable anymore.

In this section, we will introduce all the changes to the configuration files. We assume you are already familiar with the config files.
Model settings¶
No changes in the `model.backbone`, `model.neck` and `model.head` fields.
Changes in `model.train_cfg`:

- `BatchMixup` is renamed to `Mixup`.
- `BatchCutMix` is renamed to `CutMix`.
- `BatchResizeMix` is renamed to `ResizeMix`.
- The `prob` argument is removed from all augment settings. Use the `probs` field in `train_cfg` to specify the probability of each augment; if no `probs` field is set, one augment is chosen randomly with equal probability (see the sketch after the comparison below).
Original:

```python
model = dict(
    ...
    train_cfg=dict(augments=[
        dict(type='BatchMixup', alpha=0.8, num_classes=1000, prob=0.5),
        dict(type='BatchCutMix', alpha=1.0, num_classes=1000, prob=0.5)
    ]),
)
```
New:

```python
model = dict(
    ...
    train_cfg=dict(augments=[
        dict(type='Mixup', alpha=0.8),
        dict(type='CutMix', alpha=1.0),
    ]),
)
```
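If you want unequal probabilities instead of the default uniform choice, a sketch with the `probs` field (the values are illustrative):

```python
model = dict(
    ...
    train_cfg=dict(
        augments=[dict(type='Mixup', alpha=0.8), dict(type='CutMix', alpha=1.0)],
        # Choose Mixup 30% of the time and CutMix 70% of the time.
        probs=[0.3, 0.7],
    ),
)
```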
Data settings¶
Changes in `data`:

- The original `data` field is split into `train_dataloader`, `val_dataloader` and `test_dataloader`. This allows us to configure them at a fine-grained level; for example, you can specify different samplers and batch sizes for training and test (see the sketch after the comparison below).
- The `samples_per_gpu` is renamed to `batch_size`.
- The `workers_per_gpu` is renamed to `num_workers`.
Original:

```python
data = dict(
    samples_per_gpu=32,
    workers_per_gpu=2,
    train=dict(...),
    val=dict(...),
    test=dict(...),
)
```
New:

```python
train_dataloader = dict(
    batch_size=32,
    num_workers=2,
    dataset=dict(...),
    sampler=dict(type='DefaultSampler', shuffle=True),  # necessary
)
val_dataloader = dict(
    batch_size=32,
    num_workers=2,
    dataset=dict(...),
    sampler=dict(type='DefaultSampler', shuffle=False),  # necessary
)
test_dataloader = val_dataloader
```
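As an example of this fine-grained control, the test loader can use its own batch size; a minimal sketch (the values are illustrative):

```python
test_dataloader = dict(
    batch_size=128,  # a larger batch is usually affordable without gradients
    num_workers=2,
    dataset=dict(...),
    sampler=dict(type='DefaultSampler', shuffle=False),
)
```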
Changes in `pipeline`:

- The original formatting transforms `ToTensor`, `ImageToTensor` and `Collect` are combined into `PackInputs`.
- We don't recommend doing `Normalize` in the dataset pipeline. Please remove it from pipelines and set it in the `data_preprocessor` field instead.
- The argument `flip_prob` in `RandomFlip` is renamed to `prob`.
- The argument `size` in `RandomCrop` is renamed to `crop_size`.
- The argument `size` in `RandomResizedCrop` is renamed to `scale`.
- The argument `size` in `Resize` is renamed to `scale`, and `Resize` won't support sizes like `(256, -1)` anymore; please use `ResizeEdge` instead.
- The argument `policies` in `AutoAugment` and `RandAugment` supports using a string to specify preset policies: `AutoAugment` supports "imagenet" and `RandAugment` supports "timm_increasing" (see the sketch after this list).
- `RandomResizedCrop` and `CenterCrop` won't support `efficientnet_style` anymore; please use `EfficientNetRandomCrop` and `EfficientNetCenterCrop` to replace them.
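For instance, a sketch of using a preset policy string in a training pipeline (the extra `RandAugment` arguments are illustrative):

```python
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='RandomResizedCrop', scale=224),
    # Specify the preset policy by name instead of listing every transform.
    dict(type='RandAugment', policies='timm_increasing', num_policies=2,
         magnitude_level=9),
    dict(type='PackInputs'),
]
```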
Note
We have moved some of the work of data transforms, like normalization, to the data preprocessor; see the documentation for more details.
Original:

```python
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='RandomResizedCrop', size=224),
    dict(type='RandomFlip', flip_prob=0.5, direction='horizontal'),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='ImageToTensor', keys=['img']),
    dict(type='ToTensor', keys=['gt_label']),
    dict(type='Collect', keys=['img', 'gt_label'])
]
```
New:

```python
data_preprocessor = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='RandomResizedCrop', scale=224),
    dict(type='RandomFlip', prob=0.5, direction='horizontal'),
    dict(type='PackInputs'),
]
```
Changes in `evaluation`:

- The `evaluation` field is split into `val_evaluator` and `test_evaluator`, and it won't support the `interval` and `save_best` arguments anymore. The `interval` is moved to `train_cfg.val_interval` (see the schedule settings), and the `save_best` is moved to `default_hooks.checkpoint.save_best` (see the runtime settings).
- The 'accuracy' metric is renamed to `Accuracy`.
- The 'precision', 'recall', 'f1-score' and 'support' metrics are combined into `SingleLabelMetric`; use the `items` argument to specify which of them to calculate.
- The 'mAP' metric is renamed to `AveragePrecision`.
- The 'CP', 'CR', 'CF1', 'OP', 'OR' and 'OF1' metrics are combined into `MultiLabelMetric`; use the `items` and `average` arguments to specify which of them to calculate.
Original:

```python
evaluation = dict(
    interval=1,
    metric='accuracy',
    metric_options=dict(topk=(1, 5))
)
```
New:

```python
val_evaluator = dict(type='Accuracy', topk=(1, 5))
test_evaluator = val_evaluator
```
Original:

```python
evaluation = dict(
    interval=1,
    metric=['mAP', 'CP', 'OP', 'CR', 'OR', 'CF1', 'OF1'],
    metric_options=dict(thr=0.5),
)
```
New:

```python
val_evaluator = [
    dict(type='AveragePrecision'),
    dict(type='MultiLabelMetric',
         items=['precision', 'recall', 'f1-score'],
         average='both',
         thr=0.5),
]
test_evaluator = val_evaluator
```
Packages¶
mmpretrain.apis¶
The documentation can be found here.
| Function | Changes |
| --- | --- |
| `init_model` | No changes. |
| `inference_model` | No changes. But we recommend to use `mmpretrain.ImageClassificationInferencer` instead (see the sketch after this table). |
| `train_model` | Removed, use `runner.train` to train. |
| `multi_gpu_test` | Removed, use `runner.test` to test. |
| `single_gpu_test` | Removed, use `runner.test` to test. |
| `show_result_pyplot` | Removed, use `mmpretrain.ImageClassificationInferencer` to inference a model and show the result. |
| `set_random_seed` | Removed, use `mmengine.runner.set_random_seed`. |
| `init_random_seed` | Removed, use `mmengine.dist.sync_random_seed`. |
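A quick sketch of the recommended inferencer (the model name and image path are placeholders):

```python
from mmpretrain import ImageClassificationInferencer

# Build an inferencer from a model name; the weights are fetched automatically.
inferencer = ImageClassificationInferencer('resnet50_8xb32_in1k')
result = inferencer('demo/demo.JPEG')  # placeholder image path
```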
mmpretrain.core¶
The `mmpretrain.core` package is renamed to `mmpretrain.engine`.
| Sub package | Changes |
| --- | --- |
| `evaluation` | Removed, use the metrics in `mmpretrain.evaluation`. |
| `hook` | Moved to `mmpretrain.engine.hooks`. |
| `optimizers` | Moved to `mmpretrain.engine.optimizers`. |
| `utils` | Removed, the distributed environment related functions can be found in the `mmengine.dist` package. |
| `visualization` | Removed, the related functionalities are implemented in `mmengine.visualization`. |
The `MMClsWandbHook` in the `hooks` package is waiting for implementation.

The `CosineAnnealingCooldownLrUpdaterHook` in the `hooks` package is removed, and we support this functionality through a combination of parameter schedulers; see the tutorial and the sketch below.
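A sketch of how the cooldown behavior can be reproduced by combining schedulers (the boundaries and cooldown factor are illustrative):

```python
param_scheduler = [
    # Cosine annealing during the first 90 epochs.
    dict(type='CosineAnnealingLR', by_epoch=True, begin=0, end=90),
    # Then hold a small constant learning rate as the cooldown phase.
    dict(type='ConstantLR', factor=0.1, by_epoch=True, begin=90, end=100),
]
```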
mmpretrain.datasets¶
The documentation can be found here.
| Dataset class | Changes |
| --- | --- |
| `CustomDataset` | Add the `data_root` argument as the common prefix of `data_prefix` and `ann_file` (see the sketch after this table). |
| `ImageNet` | Same as `CustomDataset`. |
| `ImageNet21k` | Same as `CustomDataset`. |
| `CIFAR10` & `CIFAR100` | The `test_mode` argument is now required. |
| `MNIST` & `FashionMNIST` | The `test_mode` argument is now required. |
| `VOC` | Requires `data_root`, `image_set_path` and `test_mode` now. |
| `CUB` | Requires `data_root` and `test_mode` now. |
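A sketch of the new `data_root` usage with `CustomDataset` (the paths are placeholders):

```python
train_dataloader = dict(
    ...
    dataset=dict(
        type='CustomDataset',
        data_root='data/my_dataset',  # common prefix (placeholder path)
        ann_file='meta/train.txt',    # relative to `data_root`
        data_prefix='train/',         # relative to `data_root`
        pipeline=train_pipeline,
    ),
)
```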
The `mmpretrain.datasets.pipelines` package is renamed to `mmpretrain.datasets.transforms`.
| Transform class | Changes |
| --- | --- |
| `LoadImageFromFile` | Removed, use `mmcv.transforms.LoadImageFromFile` instead. |
| `RandomFlip` | Removed, use `mmcv.transforms.RandomFlip` instead. The argument `flip_prob` is renamed to `prob`. |
| `RandomCrop` | The argument `size` is renamed to `crop_size`. |
| `RandomResizedCrop` | The argument `size` is renamed to `scale`, and the argument `scale` is renamed to `crop_ratio_range`. Won't support `efficientnet_style`; use `EfficientNetRandomCrop` instead. |
| `CenterCrop` | Removed, use `mmcv.transforms.CenterCrop` instead. Won't support `efficientnet_style`; use `EfficientNetCenterCrop` instead. |
| `Resize` | Removed, use `mmcv.transforms.Resize` instead. The argument `size` is renamed to `scale`, and sizes like `(256, -1)` are no longer supported; use `ResizeEdge` instead. |
| `AutoAugment` & `RandAugment` | The argument `policies` supports using a string to specify preset policies. |
| `ToTensor`, `ImageToTensor` & `Collect` | Removed, use `PackInputs` instead. |
mmpretrain.models¶
The documentation can be found here. The interfaces of all backbones, necks and losses didn't change.
Changes in `ImageClassifier`:
| Method of classifiers | Changes |
| --- | --- |
| `extract_feat` | No changes. |
| `forward` | Now only accepts three arguments: `inputs`, `data_samples` and `mode`. See the documentation and the sketch after this table for more details. |
| `forward_train` | Replaced by `loss`. |
| `simple_test` | Replaced by `predict`. |
| `train_step` | The `optimizer` argument is replaced by `optim_wrapper`, which accepts an `OptimWrapper`. |
| `val_step` | The original `val_step` was the same as `train_step`; now it calls `predict`. |
| `test_step` | New method, and it's the same as `val_step`. |
Changes in heads:
| Method of heads | Changes |
| --- | --- |
| `pre_logits` | No changes. |
| `forward_train` | Replaced by `loss`. |
| `simple_test` | Replaced by `predict`. |
| `loss` | It accepts `data_samples` instead of `gt_labels` to calculate the loss, where `data_samples` is a list of data samples. |
| `forward` | New method, and it returns the output of the classification head without any post-process like softmax or sigmoid. |
mmpretrain.utils¶
| Function | Changes |
| --- | --- |
| `collect_env` | No changes. |
| `get_root_logger` | Removed, use `mmengine.logging.MMLogger.get_current_instance` instead (see the sketch after this table). |
| `load_json_log` | The output format changed. |
| `setup_multi_processes` | Removed, use `mmengine.utils.dl_utils.set_multi_processing` instead. |
| `wrap_non_distributed_model` | Removed, we auto wrap the model in the runner. |
| `wrap_distributed_model` | Removed, we auto wrap the model in the runner. |
| `auto_select_device` | Removed, we auto select the device in the runner. |
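For example, a sketch of fetching the current logger in the new style:

```python
from mmengine.logging import MMLogger

# Returns the logger created by the current runner (or a default instance).
logger = MMLogger.get_current_instance()
logger.info('Hello from MMPretrain 1.x!')
```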
Migration from MMSelfSup 0.x¶
Config¶
This section illustrates the changes of our config files in the `_base_` folder, which includes three parts:

- Datasets: `configs/_base_/datasets`
- Models: `configs/_base_/models`
- Schedules: `configs/_base_/schedules`
Dataset settings¶
In MMSelfSup 0.x, we used the key `data` to summarize all information, such as `samples_per_gpu`, `train`, `val`, etc.

In MMPretrain 1.x, we use separate `train_dataloader` and `val_dataloader` keys to summarize the corresponding information, and the key `data` has been removed.
Original:

```python
data = dict(
    samples_per_gpu=32,  # total 32*8(gpu)=256
    workers_per_gpu=4,
    train=dict(
        type=dataset_type,
        data_source=dict(
            type=data_source,
            data_prefix='data/imagenet/train',
            ann_file='data/imagenet/meta/train.txt',
        ),
        num_views=[1, 1],
        pipelines=[train_pipeline1, train_pipeline2],
        prefetch=prefetch,
    ),
    val=...)
```
New:

```python
train_dataloader = dict(
    batch_size=32,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    collate_fn=dict(type='default_collate'),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        ann_file='meta/train.txt',
        data_prefix=dict(img_path='train/'),
        pipeline=train_pipeline))
val_dataloader = ...
```
Besides, we removed the `data_source` key to keep the pipeline format consistent with that in other OpenMMLab projects. Please refer to Config for more details.
Changes in `pipeline`:

Take the MAE `pipeline` as an example:
```python
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='RandomResizedCrop',
        scale=224,
        crop_ratio_range=(0.2, 1.0),
        backend='pillow',
        interpolation='bicubic'),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PackInputs')
]
```
Model settings¶
In the model configs, there are two main differences from MMSelfSup 0.x.

There is a new key called `data_preprocessor`, which is responsible for preprocessing the data, like normalization, channel conversion, etc. For example:
```python
data_preprocessor = dict(
    mean=[123.675, 116.28, 103.53],
    std=[58.395, 57.12, 57.375],
    bgr_to_rgb=True)
model = dict(
    type='MAE',
    data_preprocessor=dict(
        mean=[127.5, 127.5, 127.5],
        std=[127.5, 127.5, 127.5],
        bgr_to_rgb=True),
    backbone=...,
    neck=...,
    head=...,
    init_cfg=...)
```
There is a new key `loss` in `head` in MMPretrain 1.x, which determines the loss function of the algorithm. For example:
```python
model = dict(
    type='MAE',
    backbone=...,
    neck=...,
    head=dict(
        type='MAEPretrainHead',
        norm_pix=True,
        patch_size=16,
        loss=dict(type='MAEReconstructionLoss')),
    init_cfg=...)
```
Package¶
The table below records the general modifications of the folders and files.
| MMSelfSup 0.x | MMPretrain 1.x | Remark |
| --- | --- | --- |
| `apis` | `apis` | The high level APIs are updated. |
| `core` | `engine` | The `core` folder has been renamed to `engine`, which includes `hooks` and `optimizers`. |
| `datasets` | `datasets` | The datasets are implemented according to different datasets, such as ImageNet and Places205. |
| `datasets/data_sources` | / | The `data_sources` has been removed, and the directory of `datasets` is now consistent with other OpenMMLab projects. |
| `datasets/pipelines` | `datasets/transforms` | The `pipelines` folder has been renamed to `transforms`. |
| / | `evaluation` | The `evaluation` folder is created for evaluation functions and classes. |
| `models/algorithms` | `selfsup` | The algorithms are moved to the `selfsup` folder. |
| `models/backbones` | `selfsup` | The re-implemented backbones are moved to the corresponding self-supervised learning algorithm files. |
| `models/target_generators` | `selfsup` | The target generators are moved to the corresponding self-supervised learning algorithm files. |
| / | `models/losses` | The `losses` folder is created to provide different loss implementations. |
| / | `structures` | The `structures` folder is for the implementation of data structures. In MMPretrain, we implement a new data structure, `DataSample`, to pass and receive data throughout the training/testing process (see the sketch after this table). |
| / | `visualization` | The `visualization` folder contains the visualizer, which is responsible for visualization tasks such as visualizing data transforms. |
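As a brief sketch of the `DataSample` structure mentioned above (the label value is illustrative):

```python
from mmpretrain.structures import DataSample

# Pack a ground-truth label into a data sample, as the pipeline's
# `PackInputs` step does during training.
data_sample = DataSample()
data_sample.set_gt_label(3)
print(data_sample.gt_label)
```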