fix documentation to match pytorch-lightning #1244

Merged
12 commits merged on Sep 28, 2022
10 changes: 5 additions & 5 deletions CONTRIBUTING.md
@@ -53,7 +53,7 @@ and discuss it with some of the core team.
* Additionally you can generate an xml report and use VSCode Coverage gutter to identify untested
lines with `./coverage.sh xml`
9. If your contribution introduces a non-negligible change, add it to `CHANGELOG.md` under the "Unreleased" section.
You can already refer to the pull request. In addition, for tracking contributions we are happy if you provide
your full name (if you want to) and link to your Github handle. Example:
```
- Added new feature XYZ. [#001](https://github.com/unit8co/darts/pull/001)
@@ -77,8 +77,8 @@ To ensure you don't need to worry about formatting and linting when contributing

### Development environment on Mac with Apple Silicon M1 processor (arm64 architecture)

Please follow the procedure described in [INSTALL.md](https://github.com/unit8co/darts/blob/master/INSTALL.md#test-environment-appple-m1-processor)
to set up an x_64 emulated environment. For the development environment, instead of installing Darts with
`pip install darts`, go to the cloned darts repo location and install the packages with: `pip install -r requirements/dev-all.txt`.
If necessary, follow the same steps to setup libomp for lightgbm.
Finally, verify your overall environment setup by successfully running all unitTests with gradlew or pytest.
8 changes: 4 additions & 4 deletions darts/models/forecasting/block_rnn_model.py
@@ -222,17 +222,17 @@ def __init__(

.. deprecated:: v0.17.0
``torch_device_str`` has been deprecated in v0.17.0 and will be removed in a future version.
- Instead, specify this with keys ``"accelerator", "gpus", "auto_select_gpus"`` in your
+ Instead, specify this with keys ``"accelerator", "devices", "auto_select_gpus"`` in your
``pl_trainer_kwargs`` dict. Some examples for setting the devices inside the ``pl_trainer_kwargs``
dict:

- ``{"accelerator": "cpu"}`` for CPU,
- ``{"accelerator": "gpu", "gpus": [i]}`` to use only GPU ``i`` (``i`` must be an integer),
- ``{"accelerator": "gpu", "gpus": -1, "auto_select_gpus": True}`` to use all available GPUS.
- ``{"accelerator": "gpu", "devices": [i]}`` to use only GPU ``i`` (``i`` must be an integer),
- ``{"accelerator": "gpu", "devices": -1, "auto_select_gpus": True}`` to use all available GPUS.

For more info, see here:
https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html#trainer-flags , and
- https://pytorch-lightning.readthedocs.io/en/stable/advanced/multi_gpu.html#select-gpu-devices
+ https://pytorch-lightning.readthedocs.io/en/stable/accelerators/gpu_basic.html#train-on-multiple-gpus
force_reset
If set to ``True``, any previously-existing model with the same name will be reset (all checkpoints will
be discarded). Default: ``False``.
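For context, here is a minimal sketch of how these `pl_trainer_kwargs` flags are used when constructing a darts model. The toy series and hyperparameter values are illustrative assumptions, and the example presumes a machine with at least one GPU:

```python
from darts.models import BlockRNNModel
from darts.utils.timeseries_generation import sine_timeseries

# Illustrative toy series; any darts TimeSeries works here.
series = sine_timeseries(length=200)

# Select GPU 0 via PyTorch Lightning trainer flags rather than the
# deprecated `torch_device_str` argument.
model = BlockRNNModel(
    input_chunk_length=24,
    output_chunk_length=12,
    n_epochs=5,
    pl_trainer_kwargs={"accelerator": "gpu", "devices": [0]},
)
model.fit(series)
```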
8 changes: 4 additions & 4 deletions darts/models/forecasting/nbeats.py
@@ -642,17 +642,17 @@ def __init__(

.. deprecated:: v0.17.0
``torch_device_str`` has been deprecated in v0.17.0 and will be removed in a future version.
- Instead, specify this with keys ``"accelerator", "gpus", "auto_select_gpus"`` in your
+ Instead, specify this with keys ``"accelerator", "devices", "auto_select_gpus"`` in your
``pl_trainer_kwargs`` dict. Some examples for setting the devices inside the ``pl_trainer_kwargs``
dict:

- ``{"accelerator": "cpu"}`` for CPU,
- ``{"accelerator": "gpu", "gpus": [i]}`` to use only GPU ``i`` (``i`` must be an integer),
- ``{"accelerator": "gpu", "gpus": -1, "auto_select_gpus": True}`` to use all available GPUS.
- ``{"accelerator": "gpu", "devices": [i]}`` to use only GPU ``i`` (``i`` must be an integer),
- ``{"accelerator": "gpu", "devices": -1, "auto_select_gpus": True}`` to use all available GPUS.

For more info, see here:
https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html#trainer-flags , and
- https://pytorch-lightning.readthedocs.io/en/stable/advanced/multi_gpu.html#select-gpu-devices
+ https://pytorch-lightning.readthedocs.io/en/stable/accelerators/gpu_basic.html#train-on-multiple-gpus
force_reset
If set to ``True``, any previously-existing model with the same name will be reset (all checkpoints will
be discarded). Default: ``False``.
8 changes: 4 additions & 4 deletions darts/models/forecasting/nhits.py
@@ -578,17 +578,17 @@ def __init__(

.. deprecated:: v0.17.0
``torch_device_str`` has been deprecated in v0.17.0 and will be removed in a future version.
- Instead, specify this with keys ``"accelerator", "gpus", "auto_select_gpus"`` in your
+ Instead, specify this with keys ``"accelerator", "devices", "auto_select_gpus"`` in your
``pl_trainer_kwargs`` dict. Some examples for setting the devices inside the ``pl_trainer_kwargs``
dict:

- ``{"accelerator": "cpu"}`` for CPU,
- ``{"accelerator": "gpu", "gpus": [i]}`` to use only GPU ``i`` (``i`` must be an integer),
- ``{"accelerator": "gpu", "gpus": -1, "auto_select_gpus": True}`` to use all available GPUS.
- ``{"accelerator": "gpu", "devices": [i]}`` to use only GPU ``i`` (``i`` must be an integer),
- ``{"accelerator": "gpu", "devices": -1, "auto_select_gpus": True}`` to use all available GPUS.

For more info, see here:
https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html#trainer-flags , and
- https://pytorch-lightning.readthedocs.io/en/stable/advanced/multi_gpu.html#select-gpu-devices
+ https://pytorch-lightning.readthedocs.io/en/stable/accelerators/gpu_basic.html#train-on-multiple-gpus
force_reset
If set to ``True``, any previously-existing model with the same name will be reset (all checkpoints will
be discarded). Default: ``False``.
8 changes: 4 additions & 4 deletions darts/models/forecasting/rnn_model.py
@@ -301,17 +301,17 @@ def __init__(

.. deprecated:: v0.17.0
``torch_device_str`` has been deprecated in v0.17.0 and will be removed in a future version.
- Instead, specify this with keys ``"accelerator", "gpus", "auto_select_gpus"`` in your
+ Instead, specify this with keys ``"accelerator", "devices", "auto_select_gpus"`` in your
``pl_trainer_kwargs`` dict. Some examples for setting the devices inside the ``pl_trainer_kwargs``
dict:

- ``{"accelerator": "cpu"}`` for CPU,
- ``{"accelerator": "gpu", "gpus": [i]}`` to use only GPU ``i`` (``i`` must be an integer),
- ``{"accelerator": "gpu", "gpus": -1, "auto_select_gpus": True}`` to use all available GPUS.
- ``{"accelerator": "gpu", "devices": [i]}`` to use only GPU ``i`` (``i`` must be an integer),
- ``{"accelerator": "gpu", "devices": -1, "auto_select_gpus": True}`` to use all available GPUS.

For more info, see here:
https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html#trainer-flags , and
- https://pytorch-lightning.readthedocs.io/en/stable/advanced/multi_gpu.html#select-gpu-devices
+ https://pytorch-lightning.readthedocs.io/en/stable/accelerators/gpu_basic.html#train-on-multiple-gpus
force_reset
If set to ``True``, any previously-existing model with the same name will be reset (all checkpoints will
be discarded). Default: ``False``.
8 changes: 4 additions & 4 deletions darts/models/forecasting/tcn_model.py
@@ -343,17 +343,17 @@ def __init__(

.. deprecated:: v0.17.0
``torch_device_str`` has been deprecated in v0.17.0 and will be removed in a future version.
- Instead, specify this with keys ``"accelerator", "gpus", "auto_select_gpus"`` in your
+ Instead, specify this with keys ``"accelerator", "devices", "auto_select_gpus"`` in your
``pl_trainer_kwargs`` dict. Some examples for setting the devices inside the ``pl_trainer_kwargs``
dict:

- ``{"accelerator": "cpu"}`` for CPU,
- ``{"accelerator": "gpu", "gpus": [i]}`` to use only GPU ``i`` (``i`` must be an integer),
- ``{"accelerator": "gpu", "gpus": -1, "auto_select_gpus": True}`` to use all available GPUS.
- ``{"accelerator": "gpu", "devices": [i]}`` to use only GPU ``i`` (``i`` must be an integer),
- ``{"accelerator": "gpu", "devices": -1, "auto_select_gpus": True}`` to use all available GPUS.

For more info, see here:
https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html#trainer-flags , and
- https://pytorch-lightning.readthedocs.io/en/stable/advanced/multi_gpu.html#select-gpu-devices
+ https://pytorch-lightning.readthedocs.io/en/stable/accelerators/gpu_basic.html#train-on-multiple-gpus
force_reset
If set to ``True``, any previously-existing model with the same name will be reset (all checkpoints will
be discarded). Default: ``False``.
8 changes: 4 additions & 4 deletions darts/models/forecasting/tft_model.py
@@ -782,17 +782,17 @@ def __init__(

.. deprecated:: v0.17.0
``torch_device_str`` has been deprecated in v0.17.0 and will be removed in a future version.
- Instead, specify this with keys ``"accelerator", "gpus", "auto_select_gpus"`` in your
+ Instead, specify this with keys ``"accelerator", "devices", "auto_select_gpus"`` in your
``pl_trainer_kwargs`` dict. Some examples for setting the devices inside the ``pl_trainer_kwargs``
dict:

- ``{"accelerator": "cpu"}`` for CPU,
- ``{"accelerator": "gpu", "gpus": [i]}`` to use only GPU ``i`` (``i`` must be an integer),
- ``{"accelerator": "gpu", "gpus": -1, "auto_select_gpus": True}`` to use all available GPUS.
- ``{"accelerator": "gpu", "devices": [i]}`` to use only GPU ``i`` (``i`` must be an integer),
- ``{"accelerator": "gpu", "devices": -1, "auto_select_gpus": True}`` to use all available GPUS.

For more info, see here:
https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html#trainer-flags , and
- https://pytorch-lightning.readthedocs.io/en/stable/advanced/multi_gpu.html#select-gpu-devices
+ https://pytorch-lightning.readthedocs.io/en/stable/accelerators/gpu_basic.html#train-on-multiple-gpus
force_reset
If set to ``True``, any previously-existing model with the same name will be reset (all checkpoints will
be discarded). Default: ``False``.
33 changes: 20 additions & 13 deletions darts/models/forecasting/torch_forecasting_model.py
@@ -81,6 +81,10 @@

logger = get_logger(__name__)

# Check whether we are running pytorch-lightning >= 1.7.0 or not:
tokens = pl.__version__.split(".")
pl_170_or_above = int(tokens[0]) >= 1 and int(tokens[1]) >= 7
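The token-based check above covers the 1.x series; as an aside, a more defensive variant (a sketch, not part of this PR) would compare parsed versions, so that hypothetical major versions above 1 are also treated as at least 1.7:

```python
# Sketch only: version comparison via the `packaging` library, which avoids
# treating e.g. a hypothetical "2.0.0" as older than "1.7.0".
from packaging.version import Version

import pytorch_lightning as pl

pl_170_or_above = Version(pl.__version__) >= Version("1.7.0")
```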


def _get_checkpoint_folder(work_dir, model_name):
return os.path.join(work_dir, model_name, CHECKPOINTS_FOLDER)
@@ -171,17 +175,17 @@ def __init__(

.. deprecated:: v0.17.0
``torch_device_str`` has been deprecated in v0.17.0 and will be removed in a future version.
- Instead, specify this with keys ``"accelerator", "gpus", "auto_select_gpus"`` in your
+ Instead, specify this with keys ``"accelerator", "devices", "auto_select_gpus"`` in your
``pl_trainer_kwargs`` dict. Some examples for setting the devices inside the ``pl_trainer_kwargs``
dict:

- ``{"accelerator": "cpu"}`` for CPU,
- ``{"accelerator": "gpu", "gpus": [i]}`` to use only GPU ``i`` (``i`` must be an integer),
- ``{"accelerator": "gpu", "gpus": -1, "auto_select_gpus": True}`` to use all available GPUS.
- ``{"accelerator": "gpu", "devices": [i]}`` to use only GPU ``i`` (``i`` must be an integer),
- ``{"accelerator": "gpu", "devices": -1, "auto_select_gpus": True}`` to use all available GPUS.

For more info, see here:
https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html#trainer-flags , and
- https://pytorch-lightning.readthedocs.io/en/stable/advanced/multi_gpu.html#select-gpu-devices
+ https://pytorch-lightning.readthedocs.io/en/stable/accelerators/gpu_basic.html#train-on-multiple-gpus
force_reset
If set to ``True``, any previously-existing model with the same name will be reset (all checkpoints will
be discarded). Default: ``False``.
@@ -297,7 +301,7 @@ def __init__(
pass

# TODO: remove below in the next version ======>
- accelerator, gpus, auto_select_gpus = self._extract_torch_devices(
+ accelerator, devices, auto_select_gpus = self._extract_torch_devices(
torch_device_str
)
# TODO: until here <======
@@ -324,14 +328,17 @@
# setup trainer parameters from model creation parameters
self.trainer_params = {
"accelerator": accelerator,
"gpus": gpus,
"auto_select_gpus": auto_select_gpus,
"logger": model_logger,
"max_epochs": n_epochs,
"check_val_every_n_epoch": nr_epochs_val_period,
"enable_checkpointing": save_checkpoints,
"callbacks": [cb for cb in [checkpoint_callback] if cb is not None],
}
if pl_170_or_above:
self.trainer_params["devices"] = devices
else:
self.trainer_params["gpus"] = devices

# update trainer parameters with user defined `pl_trainer_kwargs`
if pl_trainer_kwargs is not None:
Expand Down Expand Up @@ -360,7 +367,7 @@ def _extract_torch_devices(
Returns
-------
Tuple
- (accelerator, gpus, auto_select_gpus)
+ (accelerator, devices, auto_select_gpus)
"""

if torch_device_str is None:
@@ -369,7 +376,7 @@
device_warning = (
"`torch_device_str` is deprecated and will be removed in a coming Darts version. For full support "
"of all torch devices, use PyTorch-Lightnings trainer flags and pass them inside "
"`pl_trainer_kwargs`. Flags of interest are {`accelerator`, `gpus`, `auto_select_gpus`, `devices`}. "
"`pl_trainer_kwargs`. Flags of interest are {`accelerator`, `devices`, `auto_select_gpus`}. "
"For more information, visit "
"https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html#trainer-flags"
)
@@ -388,24 +395,24 @@
)
device_split = torch_device_str.split(":")

- gpus = None
+ devices = None
auto_select_gpus = False
accelerator = "gpu" if device_split[0] == "cuda" else device_split[0]

if len(device_split) == 2 and accelerator == "gpu":
- gpus = device_split[1]
- gpus = [int(gpus)]
+ devices = device_split[1]
+ devices = [int(devices)]
elif len(device_split) == 1:
if accelerator == "gpu":
- gpus = -1
+ devices = -1
auto_select_gpus = True
else:
raise_if(
True,
f"unknown torch_device_str `{torch_device_str}`. " + device_warning,
logger,
)
- return accelerator, gpus, auto_select_gpus
+ return accelerator, devices, auto_select_gpus
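For illustration, the mapping performed by this helper for a couple of GPU device strings, written out as plain `(accelerator, devices, auto_select_gpus)` tuples (a sketch based on the branches above):

```python
# Illustrative outputs of _extract_torch_devices for two inputs:
device_str_examples = {
    "cuda:1": ("gpu", [1], False),  # one specific GPU
    "cuda": ("gpu", -1, True),      # all available GPUs, auto-selected
}
```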

@classmethod
def _validate_model_params(cls, **kwargs):
8 changes: 4 additions & 4 deletions darts/models/forecasting/transformer_model.py
@@ -431,17 +431,17 @@ def __init__(

.. deprecated:: v0.17.0
``torch_device_str`` has been deprecated in v0.17.0 and will be removed in a future version.
- Instead, specify this with keys ``"accelerator", "gpus", "auto_select_gpus"`` in your
+ Instead, specify this with keys ``"accelerator", "devices", "auto_select_gpus"`` in your
``pl_trainer_kwargs`` dict. Some examples for setting the devices inside the ``pl_trainer_kwargs``
dict:

- ``{"accelerator": "cpu"}`` for CPU,
- ``{"accelerator": "gpu", "gpus": [i]}`` to use only GPU ``i`` (``i`` must be an integer),
- ``{"accelerator": "gpu", "gpus": -1, "auto_select_gpus": True}`` to use all available GPUS.
- ``{"accelerator": "gpu", "devices": [i]}`` to use only GPU ``i`` (``i`` must be an integer),
- ``{"accelerator": "gpu", "devices": -1, "auto_select_gpus": True}`` to use all available GPUS.

For more info, see here:
https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html#trainer-flags , and
- https://pytorch-lightning.readthedocs.io/en/stable/advanced/multi_gpu.html#select-gpu-devices
+ https://pytorch-lightning.readthedocs.io/en/stable/accelerators/gpu_basic.html#train-on-multiple-gpus
force_reset
If set to ``True``, any previously-existing model with the same name will be reset (all checkpoints will
be discarded). Default: ``False``.
@@ -338,11 +338,11 @@ def test_devices(self):
]

for torch_device, settings in torch_devices:
- accelerator, gpus, auto_select_gpus = settings
+ accelerator, devices, auto_select_gpus = settings
model = RNNModel(12, "RNN", 10, 10, torch_device_str=torch_device)

self.assertEqual(model.trainer_params["accelerator"], accelerator)
self.assertEqual(model.trainer_params["gpus"], gpus)
self.assertEqual(model.trainer_params["devices"], devices)
self.assertEqual(
model.trainer_params["auto_select_gpus"], auto_select_gpus
)
5 changes: 3 additions & 2 deletions docs/userguide/gpu_and_tpu_usage.md
@@ -83,14 +83,15 @@ Now the model is ready to start predicting, which won't be shown here since it's

## Use a GPU
GPUs can dramatically improve the performance of your model in terms of processing time. By using an Accelerator in the [Pytorch Lightning Trainer](https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html#accelerator), we can enjoy the benefits of a GPU. We only need to instruct our model to use our machine's GPU through PyTorch Lightning Trainer parameters, which are expressed as the `pl_trainer_kwargs` dictionary, like this:

```python
my_model = RNNModel(
model="RNN",
...
force_reset=True,
pl_trainer_kwargs={
"accelerator": "gpu",
"gpus": [0]
"devices": [0]
},
)
```
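Training and prediction then proceed exactly as in the CPU case; a short sketch, assuming the `series` built earlier in this guide:

```python
my_model.fit(series)
forecast = my_model.predict(n=36)
```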
@@ -170,4 +171,4 @@ Epoch 299: 100% 8/8 [00:00<00:00, 8.52it/s, loss=0.00285, v_num=logs]
<darts.models.forecasting.rnn_model.RNNModel at 0x7ff1b5e4d4d0>
```

From the output we can see that our model is using 4 TPUs.
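For completeness, a hedged sketch of the corresponding TPU configuration (assuming a TPU runtime such as Colab and the pytorch-lightning >= 1.7 flag names; older Lightning versions used a different flag, and the hyperparameters below are illustrative):

```python
from darts.models import RNNModel

my_model = RNNModel(
    model="RNN",
    input_chunk_length=12,
    training_length=24,
    force_reset=True,
    pl_trainer_kwargs={
        "accelerator": "tpu",
        "devices": 4,  # number of TPU cores; adjust to your runtime
    },
)
```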