
[Docs] Resolve warnings in sphinx build #915

Merged
merged 15 commits into from
Feb 8, 2023
20 changes: 8 additions & 12 deletions docs/en/advanced_tutorials/basedataset.md
Original file line number Diff line number Diff line change
Expand Up @@ -19,24 +19,20 @@ Here is an example of a JSON annotation file (where each raw data info contains
```json

{
'metainfo':
"metainfo":
{
'classes': ('cat', 'dog'),
...
"classes": ["cat", "dog"]
},
'data_list':
"data_list":
[
{
'img_path': "xxx/xxx_0.jpg",
'img_label': 0,
...
"img_path": "xxx/xxx_0.jpg",
"img_label": 0
},
{
'img_path': "xxx/xxx_1.jpg",
'img_label': 1,
...
},
...
"img_path": "xxx/xxx_1.jpg",
"img_label": 1
}
]
}
```
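The annotation format above can be exercised with a short, stdlib-only sketch (no MMEngine required; `load_data_list` here is a simplified stand-in for the method of the same name on `BaseDataset`):

```python
import json

# Parse an annotation string with the structure shown above: "metainfo"
# holds dataset-level fields, "data_list" holds one dict per sample.
annotation = json.loads("""
{
    "metainfo": {"classes": ["cat", "dog"]},
    "data_list": [
        {"img_path": "xxx/xxx_0.jpg", "img_label": 0},
        {"img_path": "xxx/xxx_1.jpg", "img_label": 1}
    ]
}
""")

def load_data_list(ann):
    # Simplified stand-in: attach the class name from the shared metainfo
    # to every raw data info.
    classes = ann["metainfo"]["classes"]
    return [dict(item, class_name=classes[item["img_label"]])
            for item in ann["data_list"]]

samples = load_data_list(annotation)
print(samples[1]["class_name"])  # dog
```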
Expand Down
5 changes: 3 additions & 2 deletions docs/en/advanced_tutorials/config.md
Original file line number Diff line number Diff line change
Expand Up @@ -352,15 +352,16 @@ optimizer:

`resnet50_dump.json`

````json
```json
{"optimizer": {"type": "SGD", "lr": 0.02, "momentum": 0.9, "weight_decay": 0.0001}, "model": {"type": "ResNet", "depth": 50}}
```
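As a quick, stdlib-only sanity check (not the MMEngine API), a dump in this JSON format round-trips through `json.load` into a plain dictionary:

```python
import json

# Recreate the dumped file above for the demo, then load it back.
dumped = {"optimizer": {"type": "SGD", "lr": 0.02, "momentum": 0.9,
                        "weight_decay": 0.0001},
          "model": {"type": "ResNet", "depth": 50}}
with open("resnet50_dump.json", "w") as f:
    json.dump(dumped, f)

with open("resnet50_dump.json") as f:
    cfg = json.load(f)

print(cfg["model"])  # {'type': 'ResNet', 'depth': 50}
```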

In addition, `dump` can also dump `cfg` loaded from a dictionary.

```python
cfg = Config(dict(a=1, b=2))
cfg.dump('dump_dict.py')
````
```

`dump_dict.py`

Expand Down
2 changes: 1 addition & 1 deletion docs/en/advanced_tutorials/data_transform.md
Original file line number Diff line number Diff line change
Expand Up @@ -15,7 +15,7 @@ processed data as a dictionary. A simple example is as below:
```{note}
In MMEngine, we don't have the implementations of data transforms. You can find the base data transform class
and many other data transforms in MMCV. So you need to install MMCV before learning this tutorial, see the
{external+mmcv:doc}`MMCV installation guild <get_started/installation>`.
{external+mmcv:doc}`MMCV installation guide <get_started/installation>`.
```

```python
Expand Down
6 changes: 3 additions & 3 deletions docs/en/advanced_tutorials/logging.md
Original file line number Diff line number Diff line change
Expand Up @@ -4,7 +4,7 @@

## Flexible Logging System

Logging system is configured by passing a [LogProcessor](mmengine.logging.LogProcessor) to the runner. If no log processor is passed, the runner will use the default log processor, which is equivalent to:
Logging system is configured by passing a [LogProcessor](mmengine.runner.LogProcessor) to the runner. If no log processor is passed, the runner will use the default log processor, which is equivalent to:

```python
log_processor = dict(window_size=10, by_epoch=True, custom_cfg=None, num_digits=4)
Expand Down Expand Up @@ -70,7 +70,7 @@ The number of significant digits (`num_digits`) in the log is 4 by default.
Output the value of all custom logs at the last iteration by default.
```

```{warnning}
```{warning}
log_processor outputs the epoch-based log by default (`by_epoch=True`). To get the expected log matching `train_cfg`, set the same value for `by_epoch` in both `train_cfg` and `log_processor`.
```
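A minimal illustration of that warning (the keys besides `by_epoch` are abbreviated here): keep `by_epoch` identical in both configs.

```python
# Epoch-based training: both configs must agree on by_epoch=True.
train_cfg = dict(by_epoch=True, max_epochs=10, val_interval=1)
log_processor = dict(by_epoch=True, window_size=10, num_digits=4)

def check_by_epoch(train_cfg, log_processor):
    # Mirror the consistency check the warning asks users to apply manually.
    if train_cfg["by_epoch"] != log_processor["by_epoch"]:
        raise ValueError("train_cfg and log_processor disagree on by_epoch")

check_by_epoch(train_cfg, log_processor)
print("by_epoch settings are consistent")
```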

Expand Down Expand Up @@ -191,7 +191,7 @@ runner.train()
08/21 03:17:26 - mmengine - INFO - Epoch(train) [1][20/25] lr: 1.0000e-02 eta: 0:00:00 time: 0.0024 data_time: 0.0010 loss1: 0.5464 loss2: 0.7251 loss: 1.2715 loss1_local_max: 2.8872 loss1_global_max: 2.8872
```

More examples can be found in [log_processor](mmengine.logging.LogProcessor).
More examples can be found in [log_processor](mmengine.runner.LogProcessor).

## Customize log

Expand Down
2 changes: 1 addition & 1 deletion docs/en/advanced_tutorials/registry.md
Original file line number Diff line number Diff line change
@@ -1,6 +1,6 @@
# Registry

OpenMMLab supports a rich collection of algorithms and datasets, therefore, many modules with similar functionality are implemented. For example, the implementations of `ResNet` and `SE-ResNet` are based on the classes `ResNet` and `SEResNet`, respectively, which have similar functions and interfaces and belong to the model components of the algorithm library. To manage these functionally similar modules, MMEngine implements the [registry](mmengine.registry.registry). Most of the algorithm libraries in OpenMMLab use `registry` to manage their modules, including [MMDetection](https://github.com/open-mmlab/mmdetection), [MMDetection3D](https://github.com/open-mmlab/mmdetection3d), [MMClassification](https://github.com/open-mmlab/mmclassification) and [MMEditing](https://github.com/open-mmlab/mmediting), etc.
OpenMMLab supports a rich collection of algorithms and datasets, therefore, many modules with similar functionality are implemented. For example, the implementations of `ResNet` and `SE-ResNet` are based on the classes `ResNet` and `SEResNet`, respectively, which have similar functions and interfaces and belong to the model components of the algorithm library. To manage these functionally similar modules, MMEngine implements the [registry](mmengine.registry.Registry). Most of the algorithm libraries in OpenMMLab use `registry` to manage their modules, including [MMDetection](https://github.com/open-mmlab/mmdetection), [MMDetection3D](https://github.com/open-mmlab/mmdetection3d), [MMClassification](https://github.com/open-mmlab/mmclassification) and [MMEditing](https://github.com/open-mmlab/mmediting), etc.
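The mechanism can be pictured with a self-contained toy registry (a simplified sketch, not MMEngine's actual `Registry` implementation):

```python
# Toy registry: modules register themselves by name and are later
# built from config dicts whose "type" key selects the class.
class Registry:
    def __init__(self, name):
        self.name = name
        self._module_dict = {}

    def register_module(self, cls):
        # Used as a decorator: map the class name to the class object.
        self._module_dict[cls.__name__] = cls
        return cls

    def build(self, cfg):
        cfg = dict(cfg)
        obj_type = cfg.pop("type")
        return self._module_dict[obj_type](**cfg)

MODELS = Registry("models")

@MODELS.register_module
class ResNet:
    def __init__(self, depth=50):
        self.depth = depth

model = MODELS.build(dict(type="ResNet", depth=101))
print(type(model).__name__, model.depth)  # ResNet 101
```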

## What is a registry

Expand Down
18 changes: 9 additions & 9 deletions docs/en/advanced_tutorials/test_time_augmentation.md
Original file line number Diff line number Diff line change
Expand Up @@ -103,20 +103,20 @@ The following diagram illustrates this sequence of method calls:
After data augmentation with TestTimeAug, the resulting data will have the following format:

```python
image1 = dict(
image1 = dict(
inputs=[data_1_1, data_1_2],
data_sample=[data_sample1_1, data_sample1_2])
data_sample=[data_sample1_1, data_sample1_2]
)

image2 = dict(
image2 = dict(
inputs=[data_2_1, data_2_2],
data_sample=[data_sample2_1, data_sample2_2])
data_sample=[data_sample2_1, data_sample2_2]
)

image3 = dict(
image3 = dict(
inputs=[data_3_1, data_3_2],
data_sample=[data_sample3_1, data_sample3_2])
data_sample=[data_sample3_1, data_sample3_2]
)
```

where `data_{i}_{j}` means the enhanced data, and `data_sample_{i}_{j}` means the ground truth of the enhanced data. Then the data will be processed by the `Dataloader`, which results in the following format:
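A stdlib-only sketch (with placeholder strings standing in for tensors and data samples) of how collation regroups the per-image dicts above, so that each batch entry gathers the same augmented view of every image:

```python
image1 = dict(inputs=["d11", "d12"], data_sample=["s11", "s12"])
image2 = dict(inputs=["d21", "d22"], data_sample=["s21", "s22"])

def naive_collate(batch):
    # Regroup: position j collects the j-th augmented view of every image.
    num_aug = len(batch[0]["inputs"])
    return dict(
        inputs=[[item["inputs"][j] for item in batch]
                for j in range(num_aug)],
        data_sample=[[item["data_sample"][j] for item in batch]
                     for j in range(num_aug)],
    )

batch = naive_collate([image1, image2])
print(batch["inputs"][0])  # ['d11', 'd21']
```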
Expand Down
2 changes: 1 addition & 1 deletion docs/en/advanced_tutorials/visualization.md
Original file line number Diff line number Diff line change
Expand Up @@ -102,7 +102,7 @@ def draw_featmap(
# the layout when multiple channels are expanded into multiple images
arrangement: Tuple[int, int] = (5, 2),
# scale the feature map
resize_shapeOptional[tuple] = None,
resize_shape: Optional[tuple] = None,
# overlay ratio between input image and generated feature map
alpha: float = 0.5,
) -> np.ndarray:
Expand Down
1 change: 0 additions & 1 deletion docs/en/conf.py
Original file line number Diff line number Diff line change
Expand Up @@ -44,7 +44,6 @@
'sphinx.ext.intersphinx',
'sphinx.ext.napoleon',
'sphinx.ext.viewcode',
'sphinx.ext.autosectionlabel',
'myst_parser',
'sphinx_copybutton',
'sphinx.ext.autodoc.typehints',
Expand Down
2 changes: 1 addition & 1 deletion docs/en/design/hook.md
Original file line number Diff line number Diff line change
Expand Up @@ -201,4 +201,4 @@ There are 22 mount points in the [Base Hook](mmengine.hooks.Hook).
- before_save_checkpoint
- after_load_checkpoint

Further readings: [Hook tutorial](../tutorials/hook.md) and [Hook API documentations](mmengine.hooks)
Further readings: [Hook tutorial](../tutorials/hook.md) and [Hook API documentations](../api/hooks)
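The mount-point idea can be sketched in a few lines (a toy example, not MMEngine's `Hook` base class):

```python
# A trainer calls named mount points such as before_save_checkpoint;
# hooks opt in by overriding only the points they care about.
class Hook:
    def before_save_checkpoint(self, runner, checkpoint):
        pass

    def after_load_checkpoint(self, runner, checkpoint):
        pass

class TagHook(Hook):
    def before_save_checkpoint(self, runner, checkpoint):
        checkpoint["tag"] = "v1"  # annotate the checkpoint before saving

checkpoint = {}
TagHook().before_save_checkpoint(runner=None, checkpoint=checkpoint)
print(checkpoint)  # {'tag': 'v1'}
```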
2 changes: 1 addition & 1 deletion docs/en/design/logging.md
Original file line number Diff line number Diff line change
Expand Up @@ -10,7 +10,7 @@

![image](https://user-images.githubusercontent.com/57566630/163441489-47999f3a-3259-44ab-949c-77a8a599faa5.png)

Each scalar (losses, learning rates, etc.) during training is encapsulated by HistoryBuffer, managed by MessageHub in key-value pairs, formatted by LogProcessor and then exported to various visualization backends by [LoggerHook](mmengine.hook.LoggerHook). **In most cases, statistical methods of these scalars can be configured through the LogProcessor without understanding the data flow.** Before diving into the design of the logging system, please read through [logging tutorial](../advanced_tutorials/logging.md) first for familiarizing basic use cases.
Each scalar (losses, learning rates, etc.) during training is encapsulated by HistoryBuffer, managed by MessageHub in key-value pairs, formatted by LogProcessor and then exported to various visualization backends by [LoggerHook](mmengine.hooks.LoggerHook). **In most cases, statistical methods of these scalars can be configured through the LogProcessor without understanding the data flow.** Before diving into the design of the logging system, please read through [logging tutorial](../advanced_tutorials/logging.md) first for familiarizing basic use cases.
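The pipeline above can be sketched with plain Python (toy stand-ins for `HistoryBuffer` and `MessageHub`, not the real classes):

```python
# Each scalar gets a buffer; a hub maps keys to buffers; a processor
# formats a windowed statistic for output.
class HistoryBuffer:
    def __init__(self):
        self._log_history = []

    def update(self, value):
        self._log_history.append(value)

    def mean(self, window_size=None):
        window = (self._log_history[-window_size:]
                  if window_size else self._log_history)
        return sum(window) / len(window)

hub = {}  # stand-in for MessageHub: key -> HistoryBuffer
for loss in [1.0, 0.8, 0.6, 0.4]:
    hub.setdefault("loss", HistoryBuffer()).update(loss)

# Stand-in for LogProcessor: mean over a window, 4 significant digits.
print(f"loss: {hub['loss'].mean(window_size=2):.4f}")  # loss: 0.5000
```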

## HistoryBuffer

Expand Down
2 changes: 1 addition & 1 deletion docs/en/design/runner.md
Original file line number Diff line number Diff line change
Expand Up @@ -162,7 +162,7 @@ custom_hooks = [dict(type='CustomValHook')]
## Customize Runner

Moreover, you can write your own runner by subclassing `Runner` if the built-in `Runner` is not feasible.
The method is similar to writing other modules: write your subclass inherited from `Runner`, overrides some functions, register it to [RUNNERS](mmengine.registry.RUNNERS) and access it by assigning `runner_type` in your config file.
The method is similar to writing other modules: write your subclass inherited from `Runner`, overrides some functions, register it to `mmengine.registry.RUNNERS` and access it by assigning `runner_type` in your config file.

```python
from mmengine.registry import RUNNERS
Expand Down
32 changes: 16 additions & 16 deletions docs/en/design/visualization.md
Original file line number Diff line number Diff line change
Expand Up @@ -29,22 +29,22 @@ The external interface of `Visualizer` can be divided into three categories.
- [draw_bboxes](mmengine.visualization.Visualizer.draw_bboxes) draws a single or multiple bounding boxes
- [draw_points](mmengine.visualization.Visualizer.draw_points) draws a single or multiple points
- [draw_texts](mmengine.visualization.Visualizer.draw_texts) draws a single or multiple text boxes
- [draw_lines](mmengine.visualization.Visualizer.lines) draws a single or multiple line segments
- [draw_lines](mmengine.visualization.Visualizer.draw_lines) draws a single or multiple line segments
- [draw_circles](mmengine.visualization.Visualizer.draw_circles) draws a single or multiple circles
- [draw_polygons](mmengine.visualization.Visualizer.draw_polygons) draws a single or multiple polygons
- [draw_binary_masks](mmengine.visualization.Visualizer.draw_binary_mask) draws single or multiple binary masks
- [draw_binary_masks](mmengine.visualization.Visualizer.draw_binary_masks) draws single or multiple binary masks
- [draw_featmap](mmengine.visualization.Visualizer.draw_featmap) draws feature map (**static method**)

The above APIs can be called in a chain except for `draw_featmap` because the image size may change after this method is called. To avoid confusion, `draw_featmap` is a static method.

2. Storage APIs

- [add_config](mmengine.visualization.writer.BaseWriter.add_config) writes configuration to a specific storage backend
- [add_graph](mmengine.visualization.writer.BaseWriter.add_graph) writes model graph to a specific storage backend
- [add_image](mmengine.visualization.writer.BaseWriter.add_image) writes image to a specific storage backend
- [add_scalar](mmengine.visualization.writer.BaseWriter.add_scalar) writes scalar to a specific storage backend
- [add_scalars](mmengine.visualization.writer.BaseWriter.add_scalars) writes multiple scalars to a specific storage backend at once
- [add_datasample](mmengine.visualization.writer.BaseWriter.add_datasample) the abstract interface for each repositories to draw data sample
- [add_config](mmengine.visualization.Visualizer.add_config) writes configuration to a specific storage backend
- [add_graph](mmengine.visualization.Visualizer.add_graph) writes model graph to a specific storage backend
- [add_image](mmengine.visualization.Visualizer.add_image) writes image to a specific storage backend
- [add_scalar](mmengine.visualization.Visualizer.add_scalar) writes scalar to a specific storage backend
- [add_scalars](mmengine.visualization.Visualizer.add_scalars) writes multiple scalars to a specific storage backend at once
- [add_datasample](mmengine.visualization.Visualizer.add_datasample) the abstract interface for each repositories to draw data sample

Interfaces beginning with the `add` prefix represent storage APIs. [datasample](./data_element.md) is the unified interface of each downstream repository in OpenMMLab 2.0, and `add_datasample` can process the data sample directly.

Expand All @@ -56,20 +56,20 @@ Interfaces beginning with the `add` prefix represent storage APIs. \[datasample\
- [get_backend](mmengine.visualization.Visualizer.get_backend) gets a specific storage backend by name
- [close](mmengine.visualization.Visualizer.close) closes all resources, including `VisBackend`

For more details, you can refer to [Visualizer Tutorial](../tutorials/visualization.md).
For more details, you can refer to [Visualizer Tutorial](../advanced_tutorials/visualization.md).

## 3 VisBackend

After drawing, the drawn data can be stored in multiple visualization storage backends. To unify the interfaces, MMEngine provides an abstract class, `BaseVisBackend`, and some commonly used backends such as `LocalVisBackend`, `WandbVisBackend`, and `TensorboardVisBackend`.
The main interfaces and properties of `BaseVisBackend` are as follows:

- [add_config](mmengine.visualization.vis_backend.BaseVisBackend.add_config) writes configuration to a specific storage backend
- [add_graph](mmengine.visualization.vis_backend.BaseVisBackend.add_graph) writes model graph to a specific backend
- [add_image](mmengine.visualization.vis_backend.BaseVisBackend.add_image) writes image to a specific backend
- [add_scalar](mmengine.visualization.vis_backend.BaseVisBackend.add_scalar) writes scalar to a specific backend
- [add_scalars](mmengine.visualization.vis_backend.BaseVisBackend.add_scalars) writes multiple scalars to a specific backend at once
- [close](mmengine.visualization.vis_backend.BaseVisBackend.close) closes the resource that has been opened
- [experiment](mmengine.visualization.vis_backend.BaseVisBackend.experiment) writes backend objects, such as WandB objects and Tensorboard objects
- [add_config](mmengine.visualization.BaseVisBackend.add_config) writes configuration to a specific storage backend
- [add_graph](mmengine.visualization.BaseVisBackend.add_graph) writes model graph to a specific backend
- [add_image](mmengine.visualization.BaseVisBackend.add_image) writes image to a specific backend
- [add_scalar](mmengine.visualization.BaseVisBackend.add_scalar) writes scalar to a specific backend
- [add_scalars](mmengine.visualization.BaseVisBackend.add_scalars) writes multiple scalars to a specific backend at once
- [close](mmengine.visualization.BaseVisBackend.close) closes the resource that has been opened
- [experiment](mmengine.visualization.BaseVisBackend.experiment) writes backend objects, such as WandB objects and Tensorboard objects

`BaseVisBackend` defines five common data writing interfaces. Some writing backends are very powerful, such as WandB, which could write tables and videos. Users can directly obtain the `experiment` object for such needs and then call native APIs of the corresponding backend. `LocalVisBackend`, `WandbVisBackend`, and `TensorboardVisBackend` are all inherited from `BaseVisBackend` and implement corresponding storage functions according to their features. Users can also customize `BaseVisBackend` to extend the storage backends and implement custom storage requirements.

Expand Down
4 changes: 2 additions & 2 deletions docs/en/examples/train_a_gan.md
Original file line number Diff line number Diff line change
Expand Up @@ -17,7 +17,7 @@ It will be divided into the following steps:
### Building a Dataset

First, we will build a dataset class `MNISTDataset` for the MNIST dataset, inheriting from the base dataset class [BaseDataset](mmengine.dataset.BaseDataset), and overwrite the `load_data_list` function of the base dataset class to ensure that the return value is a `list[dict]`, where each `dict` represents a data sample.
More details about using datasets in MMEngine, refer to [the Dataset tutorial](../tutorials/basedataset.md).
More details about using datasets in MMEngine, refer to [the Dataset tutorial](../advanced_tutorials/basedataset.md).

```python
import numpy as np
Expand Down Expand Up @@ -262,7 +262,7 @@ model = GAN(generator, discriminator, 100, data_preprocessor)
## Building an Optimizer

MMEngine uses [OptimWrapper](mmengine.optim.OptimWrapper) to wrap optimizers. For multiple optimizers, we use [OptimWrapperDict](mmengine.optim.OptimWrapperDict) to further wrap OptimWrapper.
More details about optimizers, refer to the [Optimizer tutorial](../tutorials/optimizer.md).
More details about optimizers, refer to the [Optimizer tutorial](../tutorials/optim_wrapper.md).

```python
from mmengine.optim import OptimWrapper, OptimWrapperDict
Expand Down
2 changes: 1 addition & 1 deletion docs/en/get_started/introduction.md
Original file line number Diff line number Diff line change
@@ -1,3 +1,3 @@
## Introduction
# Introduction

Coming soon. Please refer to [chinese documentation](https://mmengine.readthedocs.io/zh_CN/latest/get_started/installation.html).
2 changes: 1 addition & 1 deletion docs/en/migration/model.md
Original file line number Diff line number Diff line change
Expand Up @@ -139,7 +139,7 @@ runner = Runner(
runner.train()
```

In MMEngine, users can customize their model based on `BaseModel`, which implements the same logic as `OptimizerHook` in `train_step`. For high-level tasks, `train_step` will be called in [train loop](mmengine.runner.loop) with specific arguments, and users do not need to care about the optimization process. For low-level tasks, users can override the `train_step` to customize the optimization process.
In MMEngine, users can customize their model based on `BaseModel`, which implements the same logic as `OptimizerHook` in `train_step`. For high-level tasks, `train_step` will be called in [EpochBasedTrainLoop](mmengine.runner.EpochBasedTrainLoop) or [IterBasedTrainLoop](mmengine.runner.IterBasedTrainLoop) with specific arguments, and users do not need to care about the optimization process. For low-level tasks, users can override the `train_step` to customize the optimization process.
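The division of labor can be sketched without PyTorch (assumed, simplified interfaces; the real `train_step` receives batched data and an `OptimWrapper`):

```python
# The loop passes data and an optimizer wrapper; the model runs forward,
# computes the loss, and delegates the update to the wrapper.
class ToyModel:
    def __init__(self):
        self.weight = 1.0

    def train_step(self, data, optim_wrapper):
        # Mean squared error of a 1-parameter linear model.
        loss = sum((self.weight * x - y) ** 2 for x, y in data) / len(data)
        optim_wrapper.update_params(self, loss)
        return dict(loss=loss)

class ToyOptimWrapper:
    def update_params(self, model, loss):
        model.weight -= 0.1  # stand-in for loss.backward() + optimizer.step()

model = ToyModel()
out = model.train_step([(1.0, 2.0)], ToyOptimWrapper())
print(out["loss"], model.weight)  # 1.0 0.9
```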

<table class="docutils">
<thead>
Expand Down
2 changes: 1 addition & 1 deletion docs/en/migration/param_scheduler.md
Original file line number Diff line number Diff line change
Expand Up @@ -564,4 +564,4 @@ param_scheduler = [
</thead>
</table>

You may also want to read [parameter scheduler tutorial](../tutorials/param_scheduler.md) or [parameter scheduler API documentations](mmengine.optim.scheduler).
You may also want to read [parameter scheduler tutorial](../tutorials/param_scheduler.md) or [parameter scheduler API documentations](../api/optim).
4 changes: 2 additions & 2 deletions docs/en/migration/runner.md
Original file line number Diff line number Diff line change
Expand Up @@ -4,7 +4,7 @@

As MMCV supports more and more deep learning tasks, and users' needs become much more complicated, we have higher requirements for the flexibility and versatility of the existing `Runner` of MMCV. Therefore, MMEngine implements a more general and flexible `Runner` based on MMCV to support more complicated training processes.

The `Runner` in MMEngine expands the scope and takes on more functions. we abstracted [training loop controller (EpochBasedTrainLoop/IterBasedTrainLoop)](mmengine.runner.EpochBasedLoop), [validation loop controller ( ValLoop)](mmengine.runner.ValLoop) and [TestLoop](mmengine.runner.TestLoop) to make it more convenient for users to customize their training process.
The `Runner` in MMEngine expands the scope and takes on more functions. We abstracted [training loop controller (EpochBasedTrainLoop/IterBasedTrainLoop)](mmengine.runner.EpochBasedTrainLoop), [validation loop controller (ValLoop)](mmengine.runner.ValLoop) and [TestLoop](mmengine.runner.TestLoop) to make it more convenient for users to customize their training process.

Firstly, we will introduce how to migrate the entry point of training from MMCV to MMEngine, to simplify and unify the training script. Then, we'll introduce the difference in the instantiation of `Runner` between MMCV and MMEngine in detail.

Expand Down Expand Up @@ -1165,7 +1165,7 @@ param_scheduler = dict(type='MultiStepLR', milestones=[2, 3], gamma=0.1)

### Prepare testing/validation components

MMCV implements the validation process by `EvalHook`, and we'll not talk too much about it here. Given that validation is a common process in training, MMEngine abstracts validation as two independent modules: [Evaluator](../tutorials/evaluation.md) and [ValLoop](../tutorials/runner.md). We can customize the metric or the validation process by defining a new [loop](mmengine.runner.ValLoop) or a new [metric](mmengine.evaluator.BaseMetirc).
MMCV implements the validation process by `EvalHook`, and we'll not talk too much about it here. Given that validation is a common process in training, MMEngine abstracts validation as two independent modules: [Evaluator](../tutorials/evaluation.md) and [ValLoop](../tutorials/runner.md). We can customize the metric or the validation process by defining a new [loop](mmengine.runner.ValLoop) or a new [metric](mmengine.evaluator.BaseMetric).
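The metric side of that contract can be sketched with a toy class (a simplified stand-in for `BaseMetric`, whose real interface differs slightly):

```python
# process() is called once per batch to collect per-sample results;
# compute_metrics() reduces them at the end of validation.
class ToyAccuracy:
    def __init__(self):
        self.results = []

    def process(self, data_batch, predictions):
        for pred, label in predictions:
            self.results.append(int(pred == label))

    def compute_metrics(self):
        return dict(accuracy=sum(self.results) / len(self.results))

metric = ToyAccuracy()
metric.process(None, [(0, 0), (1, 0), (1, 1), (0, 0)])
print(metric.compute_metrics())  # {'accuracy': 0.75}
```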

```python
import torch
Expand Down