
[minor] Add a verbose option to NeuralProphet.test #1407

Merged
2 commits merged into ourownstory:main on Aug 28, 2023

Conversation

ziqin8
Contributor

@ziqin8 ziqin8 commented Aug 24, 2023

🔬 Background

resolves #1367

🔮 Key changes

Add a verbose arg to NeuralProphet.test, surfacing the verbose option on trainer.test
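A minimal sketch of what this pass-through could look like inside NeuralProphet.test (the method body, the helper name _create_test_dataloader, and the attributes shown here are assumptions for illustration, not the actual diff in forecaster.py):

```python
# Hypothetical, simplified excerpt of NeuralProphet.test in neuralprophet/forecaster.py.
# Only the `verbose` pass-through reflects this PR; everything else is illustrative.
def test(self, df, verbose: bool = True):
    """Evaluate the fitted model on holdout data.

    verbose : bool
        Whether PyTorch Lightning prints the test metrics table; forwarded
        to pytorch_lightning.Trainer.test(verbose=...).
    """
    loader = self._create_test_dataloader(df)  # hypothetical helper name
    # Forward the flag instead of relying on the trainer's hard-coded behavior.
    results = self.trainer.test(self.model, dataloaders=loader, verbose=verbose)
    return results
```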

📋 Review Checklist

  • I have performed a self-review of my own code.
  • I have commented my code, added docstrings and data types to function definitions.
  • I have added pytests to check whether my feature / fix works.

Please make sure to follow our best practices in the Contributing guidelines.

@leoniewgnr
Collaborator

@c3-ziqin this is really cool! Let me fix the ruff linter warnings first, then we can merge this. Thanks a lot!

@leoniewgnr leoniewgnr added the status: needs review PR needs to be reviewed by Reviewer(s) label Aug 24, 2023
@leoniewgnr leoniewgnr changed the title Add a verbose option to NeuralProphet.test [minor] Add a verbose option to NeuralProphet.test Aug 24, 2023
@codecov

codecov bot commented Aug 24, 2023

Codecov Report

Merging #1407 (11a5e6e) into main (fcc13e9) will not change coverage.
The diff coverage is 100.00%.

@@           Coverage Diff           @@
##             main    #1407   +/-   ##
=======================================
  Coverage   89.87%   89.87%           
=======================================
  Files          38       38           
  Lines        5067     5067           
=======================================
  Hits         4554     4554           
  Misses        513      513           
| Files Changed | Coverage | Δ |
| --- | --- | --- |
| neuralprophet/forecaster.py | 87.40% <100.00%> | ø |

@leoniewgnr
Collaborator

@c3-ziqin shouldn't we set the default to False, since it was False before, wasn't it?

@leoniewgnr
Collaborator


Ah, so the default was True, wasn't it? Can you confirm? Then I'll go ahead and merge this.

Collaborator

@leoniewgnr leoniewgnr left a comment


LGTM

@leoniewgnr leoniewgnr merged commit 66a39bd into ourownstory:main Aug 28, 2023
12 of 13 checks passed
@ziqin8
Contributor Author

ziqin8 commented Aug 28, 2023

Yes, I can confirm the default was True, since pytorch_lightning.Trainer.test has verbose=True by default.
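For context, a hedged usage sketch of the flag from the user's side (the DataFrame df, the frequency "D", and the split ratio are placeholders, not part of this PR):

```python
from neuralprophet import NeuralProphet

m = NeuralProphet()
# df is a user-provided DataFrame with "ds" and "y" columns; "D" is a placeholder frequency.
df_train, df_test = m.split_df(df, freq="D", valid_p=0.2)
m.fit(df_train, freq="D")

# Default behavior matches pytorch_lightning.Trainer.test: the metrics table is printed.
metrics = m.test(df_test)

# verbose=False suppresses Lightning's printed output; the metrics are still returned.
metrics_quiet = m.test(df_test, verbose=False)
```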

@github-actions

Model Benchmark

| Benchmark | Metric | main | current | diff |
| --- | --- | --- | --- | --- |
| YosemiteTemps | MAE_val | 1.34899 | 1.34899 | 0.0% |
| YosemiteTemps | RMSE_val | 2.00817 | 2.00817 | 0.0% |
| YosemiteTemps | Loss_val | 0.00078 | 0.00078 | 0.0% |
| YosemiteTemps | MAE | 1.32133 | 1.32133 | 0.0% |
| YosemiteTemps | RMSE | 2.13713 | 2.13713 | 0.0% |
| YosemiteTemps | Loss | 0.00064 | 0.00064 | 0.0% |
| YosemiteTemps | time | 60.0274 | 61.96 | 3.22% ⚠️ |
| PeytonManning | MAE_val | 0.58162 | 0.58162 | 0.0% |
| PeytonManning | RMSE_val | 0.72218 | 0.72218 | 0.0% |
| PeytonManning | Loss_val | 0.01239 | 0.01239 | 0.0% |
| PeytonManning | MAE | 0.41671 | 0.41671 | 0.0% |
| PeytonManning | RMSE | 0.55961 | 0.55961 | 0.0% |
| PeytonManning | Loss | 0.00612 | 0.00612 | 0.0% |
| PeytonManning | time | 12.3331 | 12.87 | 4.35% ⚠️ |
| AirPassengers | MAE_val | 13.0627 | 13.0627 | 0.0% |
| AirPassengers | RMSE_val | 15.9453 | 15.9453 | 0.0% |
| AirPassengers | Loss_val | 0.00131 | 0.00131 | 0.0% |
| AirPassengers | MAE | 9.88153 | 9.88153 | 0.0% |
| AirPassengers | RMSE | 11.7354 | 11.7354 | 0.0% |
| AirPassengers | Loss | 0.00052 | 0.00052 | 0.0% |
| AirPassengers | time | 5.27007 | 5.63 | 6.83% ⚠️ |
Model training plots

[Model training plots for PeytonManning, YosemiteTemps, and AirPassengers omitted]

Labels
status: needs review PR needs to be reviewed by Reviewer(s)
Development

Successfully merging this pull request may close these issues.

Make Lightning trainer .test() verbosity configurable