[Docs] Minor fixes in EN docs to remove or replace unicode chars with ascii #1018

Merged
docs/en/advanced_tutorials/basedataset.md (2 changes: 1 addition & 1 deletion)
@@ -424,7 +424,7 @@ from mmengine.registry import DATASETS
class ExampleDatasetWrapper:

    def __init__(self, dataset, lazy_init=False, ...):
-        # Build the source datasetself.dataset
+        # Build the source dataset (self.dataset)
        if isinstance(dataset, dict):
            self.dataset = DATASETS.build(dataset)
        elif isinstance(dataset, BaseDataset):
docs/en/advanced_tutorials/config.md (24 changes: 12 additions & 12 deletions)
@@ -31,15 +31,15 @@ wget https://raw.githubusercontent.com/open-mmlab/mmengine/main/docs/resources/c

A valid configuration file should define a set of key-value pairs, and here are a few examples:

-Python
+Python:

```Python
test_int = 1
test_list = [1, 2, 3]
test_dict = dict(key1='value1', key2=0.1)
```

-Json
+Json:

```json
{
@@ -49,7 +49,7 @@
}
```

-YAML
+YAML:

```yaml
test_int: 1
@@ -109,7 +109,7 @@ We can use the `Config` combination with the [Registry](./registry.md) to build

Here is an example of defining optimizers in a configuration file.

-`config_sgd.py`
+`config_sgd.py`:

```python
optimizer = dict(type='SGD', lr=0.1, momentum=0.9, weight_decay=0.0001)
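```

A minimal sketch of how such a file is typically consumed, assuming MMEngine's `Config.fromfile` and the `OPTIMIZERS` registry:

```python
import torch.nn as nn

from mmengine.config import Config
from mmengine.registry import OPTIMIZERS

cfg = Config.fromfile('config_sgd.py')        # parse the key-value pairs defined above
model = nn.Conv2d(1, 1, 1)
cfg.optimizer['params'] = model.parameters()  # an optimizer needs parameters to update
optimizer = OPTIMIZERS.build(cfg.optimizer)   # builds torch.optim.SGD via the registry
```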
@@ -156,13 +156,13 @@ We address these issues with inheritance mechanism, detailed as below.

Here is an example to illustrate the inheritance mechanism.

-`optimizer_cfg.py`
+`optimizer_cfg.py`:

```python
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
```

-`resnet50.py`
+`resnet50.py`:

```python
_base_ = ['optimizer_cfg.py']
@@ -182,13 +182,13 @@ print(cfg.optimizer)
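
A rough sketch of the expected behaviour, assuming `resnet50.py` is read with `Config.fromfile` and inherits the optimizer defined in `optimizer_cfg.py` above:

```python
from mmengine.config import Config

cfg = Config.fromfile('resnet50.py')
# The inherited field is available as if it had been defined locally, roughly:
print(cfg.optimizer)
# {'type': 'SGD', 'lr': 0.02, 'momentum': 0.9, 'weight_decay': 0.0001}
```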

`_base_` is a reserved field for the configuration file. It specifies the inherited base files for the current file. Inheriting multiple files will get all the fields at the same time, but it requires that there are no repeated fields defined in all base files.

-`runtime_cfg.py`
+`runtime_cfg.py`:

```python
gpu_ids = [0, 1]
```

-`resnet50_runtime.py`
+`resnet50_runtime.py`:

```python
_base_ = ['optimizer_cfg.py', 'runtime_cfg.py']
@@ -214,7 +214,7 @@ Sometimes, we want to modify some of the fields in the inherited files. For exam

In this case, you can simply redefine the fields in the new configuration file. Note that since the optimizer field is a dictionary, we only need to redefine the modified fields. This rule also applies to adding fields.

-`resnet50_lr0.01.py`
+`resnet50_lr0.01.py`:

```python
_base_ = ['optimizer_cfg.py', 'runtime_cfg.py']
@@ -245,7 +245,7 @@ gpu_ids = [0]

Sometimes we not only want to modify or add the keys, but also want to delete them. In this case, we need to set `_delete_=True` in the target field(`dict`) to delete all the keys that do not appear in the newly defined dictionary.

-`resnet50_delete_key.py`
+`resnet50_delete_key.py`:

```python
_base_ = ['optimizer_cfg.py', 'runtime_cfg.py']
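# Illustrative sketch of how _delete_ is typically used, based on the description
# above: inherited keys such as momentum and weight_decay that are not redefined
# here are dropped from the merged optimizer dict.
optimizer = dict(_delete_=True, type='SGD', lr=0.01)
```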
@@ -298,7 +298,7 @@ a['type'] = 'MobileNet'

The `Config` is not able to parse such a configuration file (it will raise an error when parsing). The `Config` provides a more `pythonic` way to modify base variables for `python` configuration files.

-`modify_base_var.py`
+`modify_base_var.py`:

```python
_base_ = ['resnet50.py']
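# Illustrative sketch, assuming MMEngine's {{_base_.xxx}} reference syntax: the
# base variable is first bound to a local name and can then be modified like a
# normal Python variable instead of being redefined from scratch.
a = {{_base_.model}}
a['type'] = 'MobileNet'
```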
@@ -335,7 +335,7 @@ optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
model = dict(type='ResNet', depth=50)
```

-Similarly, we can dump configuration files in `json`, `yaml` format
+Similarly, we can dump configuration files in `json`, `yaml` format:

`resnet50_dump.yaml`

docs/en/advanced_tutorials/initialize.md (10 changes: 5 additions & 5 deletions)
@@ -36,13 +36,13 @@ Currently, we support the following initialization methods:
<tr>
<td><a class="reference internal" href="../api/generated/mmengine.model.TruncNormalInit.html#mmengine.model.TruncNormalInit">TruncNormalInit</a></td>
<td>TruncNormal</td>
-<td>Initialize the weight by truncated normal distribution, and initialize the bias with a constantcommonly used for Transformer</td>
+<td>Initialize the weight by truncated normal distribution, and initialize the bias with a constant, commonly used for Transformer</td>
</tr>

<tr>
<td><a class="reference internal" href="../api/generated/mmengine.model.UniformInit.html#mmengine.model.UniformInit">UniformInit</a></td>
<td>Uniform</td>
-<td>Initialize the weight by uniform distribution, and initialize the bias with a constantcommonly used for convolution</td>
+<td>Initialize the weight by uniform distribution, and initialize the bias with a constant, commonly used for convolution</td>
</tr>

<tr>
@@ -353,7 +353,7 @@ from mmengine.model import normal_init
normal_init(model, mean=0, std=0.01, bias=0)
```

-Similarly, we could also use [Kaiming](http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf) initialization and [Xavier](http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf) initialization
+Similarly, we could also use [Kaiming](http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf) initialization and [Xavier](http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf) initialization:

```python
from mmengine.model import kaiming_init, xavier_init
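# Illustrative usage sketch, assuming kaiming_init and xavier_init follow the
# same calling convention as normal_init above:
kaiming_init(model, bias=0)
xavier_init(model, gain=1, bias=0)
```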
@@ -387,12 +387,12 @@ Currently, MMEngine provide the following initialization function:

<tr>
<td><a class="reference internal" href="../api/generated/mmengine.model.trunc_normal_init.html#mmengine.model.trunc_normal_init">trunc_normal_init</a></td>
-<td>Initialize the weight by truncated normal distribution, and initialize the bias with a constantcommonly used for Transformer</td>
+<td>Initialize the weight by truncated normal distribution, and initialize the bias with a constant, commonly used for Transformer</td>
</tr>

<tr>
<td><a class="reference internal" href="../api/generated/mmengine.model.uniform_init.html#mmengine.model.uniform_init">uniform_init</a></td>
-<td>Initialize the weight by uniform distribution, and initialize the bias with a constantcommonly used for convolution</td>
+<td>Initialize the weight by uniform distribution, and initialize the bias with a constant, commonly used for convolution</td>
</tr>

<tr>
docs/en/advanced_tutorials/test_time_augmentation.md (4 changes: 2 additions & 2 deletions)
@@ -2,7 +2,7 @@

Test time augmentation (TTA) is a data augmentation strategy used during the testing phase. It involves applying various augmentations, such as flipping and scaling, to the same image and then merging the predictions of each augmented image to produce a more accurate prediction. To make it easier for users to use TTA, MMEngine provides [BaseTTAModel](mmengine.model.BaseTTAModel) class, which allows users to implement different TTA strategies by simply extending the `BaseTTAModel` class according to their needs.

-The core implementation of TTA is usually divided into two parts
+The core implementation of TTA is usually divided into two parts:

1. Data augmentation: This part is implemented in MMCV, see the api docs [TestTimeAug](mmcv.transforms.TestTimeAug) for more information.
2. Merge the predictions: The subclasses of `BaseTTAModel` will merge the predictions of enhanced data in the `test_step` method to improve the accuracy of predictions.
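
A rough sketch of the second part, assuming a subclass that overrides `BaseTTAModel.merge_preds` and averages a hypothetical per-sample `score` field:

```python
from mmengine.model import BaseTTAModel


class AverageScoreTTAModel(BaseTTAModel):
    """Average the scores predicted for all augmented views of an image."""

    def merge_preds(self, data_samples_list):
        # data_samples_list[i] holds the predictions for every augmented
        # view of the i-th image in the batch.
        merged = []
        for data_samples in data_samples_list:
            avg_score = sum(ds.score for ds in data_samples) / len(data_samples)
            merged_sample = data_samples[0]   # reuse one prediction as the carrier
            merged_sample.score = avg_score   # 'score' is a hypothetical field name
            merged.append(merged_sample)
        return merged
```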
@@ -119,7 +119,7 @@ image3 = dict(
)
```

-where `data_{i}_{j}` means the enhanced dataand `data_sample_{i}_{j}` means the ground truth of enhanced data. Then the data will be processed by `Dataloader`, which contributes to the following format:
+where `data_{i}_{j}` means the enhanced data, and `data_sample_{i}_{j}` means the ground truth of enhanced data. Then the data will be processed by `Dataloader`, which contributes to the following format:

```python
data_batch = dict(
docs/en/migration/runner.md (4 changes: 2 additions & 2 deletions)
@@ -1369,14 +1369,14 @@ runner = Runner(
    work_dir='./work_dir',
    randomness=randomness,
    env_cfg=env_cfg,
-    launcher='none', # 不开启分布式训练
+    launcher='none',
    optim_wrapper=optim_wrapper,
    train_dataloader=train_dataloader,
    train_cfg=dict(by_epoch=True, max_epochs=5, val_interval=1),
    val_dataloader=val_dataloader,
    val_evaluator=val_evaluator,
    val_cfg=val_cfg,
-    test_dataloader=val_dataloader, # 假设测试和验证使用相同的数据和评测器
+    test_dataloader=val_dataloader,
    test_evaluator=val_evaluator,
    test_cfg=dict(type='TestLoop'),
)
docs/en/tutorials/model.md (2 changes: 1 addition & 1 deletion)
@@ -66,7 +66,7 @@ def train_step(self, data, optim_wrapper):
    # Parse the loss dict and return the parsed losses for optimization
    # and log_vars for logging
    parsed_losses, log_vars = self.parse_losses()
-    optim_wrapper.update_params(parsed_losses) # 更新参数
+    optim_wrapper.update_params(parsed_losses)
    return log_vars
```

docs/en/tutorials/optim_wrapper.md (2 changes: 1 addition & 1 deletion)
@@ -243,7 +243,7 @@ As shown in the above example, `OptimWrapperDict` exports learning rates and mom

### Configure the OptimWapper in [Runner](runner.md)

-We first need to configure the `optimizer` for the OptimWrapper. MMEngine automatically adds all optimizers in PyTorch to the `OPTIMIZERS` registry, and users can specify the optimizers they need in the form of a `dict`. All supported optimizers in PyTorch are listed [here](https://pytorch.org/docs/stable/optim.html#algorithms). In addition, `DAdaptAdaGrad`, `DAdaptAdam`, and `DAdaptSGD` can be used by installing [dadaptation](https://github.com/facebookresearch/dadaptation). `Lion` optimizer can used by install [lion-pytorch](https://github.com/lucidrains/lion-pytorch)
+We first need to configure the `optimizer` for the OptimWrapper. MMEngine automatically adds all optimizers in PyTorch to the `OPTIMIZERS` registry, and users can specify the optimizers they need in the form of a `dict`. All supported optimizers in PyTorch are listed [here](https://pytorch.org/docs/stable/optim.html#algorithms). In addition, `DAdaptAdaGrad`, `DAdaptAdam`, and `DAdaptSGD` can be used by installing [dadaptation](https://github.com/facebookresearch/dadaptation). `Lion` optimizer can used by install [lion-pytorch](https://github.com/lucidrains/lion-pytorch).

Now we take setting up a SGD OptimWrapper as an example.
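
A minimal sketch of such a dict-style configuration, assuming the standard `OptimWrapper` type name:

```python
optim_wrapper = dict(
    type='OptimWrapper',
    optimizer=dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001),
)
```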

docs/en/tutorials/param_scheduler.md (4 changes: 2 additions & 2 deletions)
@@ -148,7 +148,7 @@ Note that the `begin` and `end` parameters are added here. These two parameters

In the above example, the `by_epoch` of `LinearLR` in the warm-up phase is False, which means that the scheduler only takes effect in the first 50 iterations. After more than 50 iterations, the scheduler will no longer take effect, and the second scheduler, which is `MultiStepLR`, will control the learning rate. When combining different schedulers, the `by_epoch` parameter does not have to be the same for each scheduler.

-Here is another example
+Here is another example:

```python
param_scheduler = [
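    # Illustrative sketch of one way to combine two schedulers, based on the
    # warm-up pattern described above: LinearLR warms up the learning rate for
    # the first 50 iterations, then MultiStepLR takes over by epoch.
    dict(type='LinearLR', start_factor=0.001, by_epoch=False, begin=0, end=50),
    dict(type='MultiStepLR', by_epoch=True, milestones=[8, 11], gamma=0.1)
]
```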
@@ -200,7 +200,7 @@

MMEngine also provides a set of generic parameter schedulers for scheduling other hyperparameters in the `param_groups` of the optimizer. Change `LR` in the class name of the learning rate scheduler to `Param`, such as `LinearParamScheduler`. Users can schedule the specific hyperparameters by setting the `param_name` variable of the scheduler.

-Here is an example
+Here is an example:

```python
param_scheduler = [
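    # Illustrative sketch, assuming LinearParamScheduler mirrors LinearLR and
    # accepts the extra param_name argument described above:
    dict(type='LinearParamScheduler',
         param_name='lr',   # which entry of optimizer.param_groups to schedule
         start_factor=0.1,
         by_epoch=True,
         begin=0,
         end=10)
]
```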