Merge branch 'master' into docs/tpu
hrzn committed Mar 1, 2022
2 parents e1d8570 + 7ca7801 commit f326a1a
Showing 8 changed files with 120 additions and 107 deletions.
32 changes: 17 additions & 15 deletions darts/models/forecasting/block_rnn_model.py
@@ -181,22 +181,23 @@ def __init__(
This parameter will be ignored for probabilistic models if the ``likelihood`` parameter is specified.
Default: ``torch.nn.MSELoss()``.
likelihood
- The likelihood model to be used for probabilistic forecasts.
+ One of Darts' :meth:`Likelihood <darts.utils.likelihood_models.Likelihood>` models to be used for
+ probabilistic forecasts. Default: ``None``.
optimizer_cls
- The PyTorch optimizer class to be used (default: ``torch.optim.Adam``).
+ The PyTorch optimizer class to be used. Default: ``torch.optim.Adam``.
optimizer_kwargs
Optionally, some keyword arguments for the PyTorch optimizer (e.g., ``{'lr': 1e-3}``
for specifying a learning rate). Otherwise the default values of the selected ``optimizer_cls``
- will be used.
+ will be used. Default: ``None``.
lr_scheduler_cls
Optionally, the PyTorch learning rate scheduler class to be used. Specifying ``None`` corresponds
- to using a constant learning rate.
+ to using a constant learning rate. Default: ``None``.
lr_scheduler_kwargs
- Optionally, some keyword arguments for the PyTorch learning rate scheduler.
+ Optionally, some keyword arguments for the PyTorch learning rate scheduler. Default: ``None``.
batch_size
- Number of time series (input and output sequences) used in each training pass.
+ Number of time series (input and output sequences) used in each training pass. Default: ``32``.
n_epochs
- Number of epochs over which to train the model.
+ Number of epochs over which to train the model. Default: ``100``.
model_name
Name of the model. Used for creating checkpoints and saving tensorboard data. If not specified,
defaults to the following string ``"YYYY-mm-dd_HH:MM:SS_torch_model_run_PID"``, where the initial part
@@ -205,13 +206,13 @@ def __init__(
``"2021-06-14_09:53:32_torch_model_run_44607"``.
work_dir
Path of the working directory, where to save checkpoints and Tensorboard summaries.
- (default: current working directory).
+ Default: current working directory.
log_tensorboard
If set, use Tensorboard to log the different parameters. The logs will be located in:
- ``"{work_dir}/darts_logs/{model_name}/logs/"``.
+ ``"{work_dir}/darts_logs/{model_name}/logs/"``. Default: ``False``.
nr_epochs_val_period
Number of epochs to wait before evaluating the validation loss (if a validation
- ``TimeSeries`` is passed to the :func:`fit()` method).
+ ``TimeSeries`` is passed to the :func:`fit()` method). Default: ``1``.
torch_device_str
Optionally, a string indicating the torch device to use. By default, ``torch_device_str`` is ``None``
which will run on CPU. Set it to ``"cuda"`` to use all available GPUs or ``"cuda:i"`` to only use
@@ -232,21 +233,21 @@ def __init__(
https://pytorch-lightning.readthedocs.io/en/stable/advanced/multi_gpu.html#select-gpu-devices
force_reset
If set to ``True``, any previously-existing model with the same name will be reset (all checkpoints will
- be discarded).
+ be discarded). Default: ``False``.
save_checkpoints
Whether or not to automatically save the untrained model and checkpoints from training.
To load the model from checkpoint, call :func:`MyModelClass.load_from_checkpoint()`, where
:class:`MyModelClass` is the :class:`TorchForecastingModel` class that was used (such as :class:`TFTModel`,
:class:`NBEATSModel`, etc.). If set to ``False``, the model can still be manually saved using
- :func:`save_model()` and loaded using :func:`load_model()`.
+ :func:`save_model()` and loaded using :func:`load_model()`. Default: ``False``.
add_encoders
A large number of past and future covariates can be automatically generated with `add_encoders`.
This can be done by adding multiple pre-defined index encoders and/or custom user-made functions that
will be used as index encoders. Additionally, a transformer such as Darts' :class:`Scaler` can be added to
transform the generated covariates. This happens all under one hood and only needs to be specified at
model creation.
Read :meth:`SequentialEncoder <darts.utils.data.encoders.SequentialEncoder>` to find out more about
- ``add_encoders``. An example showing some of ``add_encoders`` features:
+ ``add_encoders``. Default: ``None``. An example showing some of ``add_encoders`` features:
.. highlight:: python
.. code-block:: python
@@ -262,14 +263,15 @@ def __init__(
random_state
Control the randomness of the weights initialization. Check this
`link <https://scikit-learn.org/stable/glossary.html#term-random_state>`_ for more details.
+ Default: ``None``.
pl_trainer_kwargs
By default :class:`TorchForecastingModel` creates a PyTorch Lightning Trainer with several useful presets
that performs the training, validation and prediction processes. These presets include automatic
checkpointing, tensorboard logging, setting the torch device and more.
With ``pl_trainer_kwargs`` you can add additional kwargs to instantiate the PyTorch Lightning trainer
object. Check the `PL Trainer documentation
<https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html>`_ for more information about the
- supported kwargs.
+ supported kwargs. Default: ``None``.
With parameter ``"callbacks"`` you can add custom or PyTorch-Lightning built-in callbacks to Darts'
:class:`TorchForecastingModel`. Below is an example for adding EarlyStopping to the training process.
The model will stop training early if the validation loss `val_loss` does not improve beyond
@@ -298,7 +300,7 @@ def __init__(
parameter ``trainer`` in :func:`fit()` and :func:`predict()`.
show_warnings
whether to show warnings raised from PyTorch Lightning. Useful to detect potential issues of
- your forecasting use case.
+ your forecasting use case. Default: ``False``.
"""
super().__init__(**self._extract_torch_model_params(**self.model_params))

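The ``pl_trainer_kwargs`` entry above mentions adding an ``EarlyStopping`` callback that halts training when ``val_loss`` stops improving, but the concrete example is collapsed in this diff view. The stopping rule itself can be sketched in plain Python; the ``min_delta`` and ``patience`` defaults here are illustrative assumptions, not values taken from the collapsed example:

```python
def early_stop_epoch(val_losses, min_delta=0.05, patience=5):
    """Return the epoch at which early stopping would trigger, or None.

    Sketch of the EarlyStopping idea: stop once val_loss has failed to
    improve on the best value by more than min_delta for `patience`
    consecutive epochs.
    """
    best = float("inf")
    epochs_without_improvement = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best - min_delta:
            best = loss                 # genuine improvement: reset the counter
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                return epoch            # would stop training here
    return None                         # training runs to completion
```

With a plateauing loss curve the function returns the stopping epoch; with a steadily improving one it returns ``None``.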
32 changes: 17 additions & 15 deletions darts/models/forecasting/nbeats.py
@@ -514,22 +514,23 @@ def __init__(
This parameter will be ignored for probabilistic models if the ``likelihood`` parameter is specified.
Default: ``torch.nn.MSELoss()``.
likelihood
- The likelihood model to be used for probabilistic forecasts.
+ One of Darts' :meth:`Likelihood <darts.utils.likelihood_models.Likelihood>` models to be used for
+ probabilistic forecasts. Default: ``None``.
optimizer_cls
- The PyTorch optimizer class to be used (default: ``torch.optim.Adam``).
+ The PyTorch optimizer class to be used. Default: ``torch.optim.Adam``.
optimizer_kwargs
Optionally, some keyword arguments for the PyTorch optimizer (e.g., ``{'lr': 1e-3}``
for specifying a learning rate). Otherwise the default values of the selected ``optimizer_cls``
- will be used.
+ will be used. Default: ``None``.
lr_scheduler_cls
Optionally, the PyTorch learning rate scheduler class to be used. Specifying ``None`` corresponds
- to using a constant learning rate.
+ to using a constant learning rate. Default: ``None``.
lr_scheduler_kwargs
- Optionally, some keyword arguments for the PyTorch learning rate scheduler.
+ Optionally, some keyword arguments for the PyTorch learning rate scheduler. Default: ``None``.
batch_size
- Number of time series (input and output sequences) used in each training pass.
+ Number of time series (input and output sequences) used in each training pass. Default: ``32``.
n_epochs
- Number of epochs over which to train the model.
+ Number of epochs over which to train the model. Default: ``100``.
model_name
Name of the model. Used for creating checkpoints and saving tensorboard data. If not specified,
defaults to the following string ``"YYYY-mm-dd_HH:MM:SS_torch_model_run_PID"``, where the initial part
@@ -538,13 +539,13 @@ def __init__(
``"2021-06-14_09:53:32_torch_model_run_44607"``.
work_dir
Path of the working directory, where to save checkpoints and Tensorboard summaries.
- (default: current working directory).
+ Default: current working directory.
log_tensorboard
If set, use Tensorboard to log the different parameters. The logs will be located in:
- ``"{work_dir}/darts_logs/{model_name}/logs/"``.
+ ``"{work_dir}/darts_logs/{model_name}/logs/"``. Default: ``False``.
nr_epochs_val_period
Number of epochs to wait before evaluating the validation loss (if a validation
- ``TimeSeries`` is passed to the :func:`fit()` method).
+ ``TimeSeries`` is passed to the :func:`fit()` method). Default: ``1``.
torch_device_str
Optionally, a string indicating the torch device to use. By default, ``torch_device_str`` is ``None``
which will run on CPU. Set it to ``"cuda"`` to use all available GPUs or ``"cuda:i"`` to only use
@@ -565,21 +566,21 @@ def __init__(
https://pytorch-lightning.readthedocs.io/en/stable/advanced/multi_gpu.html#select-gpu-devices
force_reset
If set to ``True``, any previously-existing model with the same name will be reset (all checkpoints will
- be discarded).
+ be discarded). Default: ``False``.
save_checkpoints
Whether or not to automatically save the untrained model and checkpoints from training.
To load the model from checkpoint, call :func:`MyModelClass.load_from_checkpoint()`, where
:class:`MyModelClass` is the :class:`TorchForecastingModel` class that was used (such as :class:`TFTModel`,
:class:`NBEATSModel`, etc.). If set to ``False``, the model can still be manually saved using
- :func:`save_model()` and loaded using :func:`load_model()`.
+ :func:`save_model()` and loaded using :func:`load_model()`. Default: ``False``.
add_encoders
A large number of past and future covariates can be automatically generated with `add_encoders`.
This can be done by adding multiple pre-defined index encoders and/or custom user-made functions that
will be used as index encoders. Additionally, a transformer such as Darts' :class:`Scaler` can be added to
transform the generated covariates. This happens all under one hood and only needs to be specified at
model creation.
Read :meth:`SequentialEncoder <darts.utils.data.encoders.SequentialEncoder>` to find out more about
- ``add_encoders``. An example showing some of ``add_encoders`` features:
+ ``add_encoders``. Default: ``None``. An example showing some of ``add_encoders`` features:
.. highlight:: python
.. code-block:: python
@@ -595,14 +596,15 @@ def __init__(
random_state
Control the randomness of the weights initialization. Check this
`link <https://scikit-learn.org/stable/glossary.html#term-random_state>`_ for more details.
+ Default: ``None``.
pl_trainer_kwargs
By default :class:`TorchForecastingModel` creates a PyTorch Lightning Trainer with several useful presets
that performs the training, validation and prediction processes. These presets include automatic
checkpointing, tensorboard logging, setting the torch device and more.
With ``pl_trainer_kwargs`` you can add additional kwargs to instantiate the PyTorch Lightning trainer
object. Check the `PL Trainer documentation
<https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html>`_ for more information about the
- supported kwargs.
+ supported kwargs. Default: ``None``.
With parameter ``"callbacks"`` you can add custom or PyTorch-Lightning built-in callbacks to Darts'
:class:`TorchForecastingModel`. Below is an example for adding EarlyStopping to the training process.
The model will stop training early if the validation loss `val_loss` does not improve beyond
@@ -631,7 +633,7 @@ def __init__(
parameter ``trainer`` in :func:`fit()` and :func:`predict()`.
show_warnings
whether to show warnings raised from PyTorch Lightning. Useful to detect potential issues of
- your forecasting use case.
+ your forecasting use case. Default: ``False``.
References
----------
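The ``add_encoders`` parameter documented in the hunks above accepts a nested dict mapping encoder kinds to the index attributes they should encode, while the original docstring example is collapsed in this diff view. Below is a sketch of the kind of dict involved, based on the :class:`SequentialEncoder` description; the specific keys and attribute names are illustrative assumptions, not copied from the collapsed example:

```python
# Hypothetical add_encoders dict: encoder kind -> {"past"/"future": [index attributes]}.
add_encoders = {
    "cyclic": {"future": ["month"]},                      # cyclic (sin/cos) month encoding
    "datetime_attribute": {"future": ["hour", "dayofweek"]},
    "position": {"past": ["absolute"], "future": ["relative"]},
}

# Each entry states whether the generated covariates act as past or future
# covariates, and which index attributes feed the encoder.
for encoder_kind, spec in add_encoders.items():
    assert set(spec) <= {"past", "future"}
```

A transformer such as Darts' :class:`Scaler` can also be attached so the generated covariates are scaled alongside the targets, as the docstring notes.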
15 changes: 8 additions & 7 deletions darts/models/forecasting/pl_forecasting_module.py
@@ -55,18 +55,19 @@ def __init__(
This parameter will be ignored for probabilistic models if the ``likelihood`` parameter is specified.
Default: ``torch.nn.MSELoss()``.
likelihood
- The likelihood model to be used for probabilistic forecasts.
+ One of Darts' :meth:`Likelihood <darts.utils.likelihood_models.Likelihood>` models to be used for
+ probabilistic forecasts. Default: ``None``.
optimizer_cls
- The PyTorch optimizer class to be used (default: ``torch.optim.Adam``).
+ The PyTorch optimizer class to be used. Default: ``torch.optim.Adam``.
optimizer_kwargs
Optionally, some keyword arguments for the PyTorch optimizer (e.g., ``{'lr': 1e-3}``
for specifying a learning rate). Otherwise the default values of the selected ``optimizer_cls``
- will be used.
+ will be used. Default: ``None``.
lr_scheduler_cls
Optionally, the PyTorch learning rate scheduler class to be used. Specifying ``None`` corresponds
- to using a constant learning rate.
+ to using a constant learning rate. Default: ``None``.
lr_scheduler_kwargs
- Optionally, some keyword arguments for the PyTorch learning rate scheduler.
+ Optionally, some keyword arguments for the PyTorch learning rate scheduler. Default: ``None``.
"""
super().__init__()
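The ``optimizer_cls`` / ``optimizer_kwargs`` pair documented above follows a common configuration pattern: store a class plus a kwargs dict, and let a ``None`` dict fall back to the class's own defaults. A minimal sketch of that pattern, assuming nothing about Darts' internals; ``SGDStub`` is a stand-in class, not a real torch optimizer:

```python
def make_optimizer(params, optimizer_cls, optimizer_kwargs=None):
    # None -> use the optimizer class's own default hyperparameters,
    # as the docstring describes for optimizer_kwargs.
    return optimizer_cls(params, **(optimizer_kwargs or {}))

class SGDStub:
    # Stand-in mimicking the (params, **hyperparams) shape of torch optimizers.
    def __init__(self, params, lr=0.01):
        self.params = params
        self.lr = lr

default_opt = make_optimizer([], SGDStub)               # lr stays at the class default
tuned_opt = make_optimizer([], SGDStub, {"lr": 1e-3})   # lr overridden by the kwargs dict
```

The same fallback shape applies to ``lr_scheduler_cls`` / ``lr_scheduler_kwargs``, where a ``None`` class simply means a constant learning rate.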

@@ -117,15 +118,15 @@ def training_step(self, train_batch, batch_idx) -> torch.Tensor:
-1
] # By convention target is always the last element returned by datasets
loss = self._compute_loss(output, target)
- self.log("train_loss", loss, batch_size=train_batch[0].shape[0])
+ self.log("train_loss", loss, batch_size=train_batch[0].shape[0], prog_bar=True)
return loss

def validation_step(self, val_batch, batch_idx) -> torch.Tensor:
"""performs the validation step"""
output = self._produce_train_output(val_batch[:-1])
target = val_batch[-1]
loss = self._compute_loss(output, target)
- self.log("val_loss", loss, batch_size=val_batch[0].shape[0])
+ self.log("val_loss", loss, batch_size=val_batch[0].shape[0], prog_bar=True)
return loss
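In the logging calls above, passing ``batch_size=...shape[0]`` lets PyTorch Lightning weight each batch's loss by its size when aggregating the metric over an epoch, and the ``prog_bar=True`` added by this commit additionally displays it in the progress bar. The weighting rule can be sketched in plain Python (a sketch of the aggregation idea, not Lightning's actual implementation):

```python
def epoch_metric(batch_losses, batch_sizes):
    # Size-weighted mean: a final, smaller batch should not count as much as
    # a full one -- which is why the steps above pass batch_size to self.log().
    weighted = sum(loss * n for loss, n in zip(batch_losses, batch_sizes))
    return weighted / sum(batch_sizes)
```

For equal batch sizes this reduces to the plain mean; for a short last batch it down-weights that batch proportionally.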

def predict_step(
