
Make Lightning trainer .test() verbosity configurable #1367

Closed · 1 task done

w-biggs opened this issue Jul 5, 2023 · 4 comments · Fixed by #1407
Comments

w-biggs commented Jul 5, 2023

Is your feature request related to a problem? Please describe.

When doing cross-validation, my output gets flooded with the Lightning .test() output, which is frustrating. It drowns out the per-trial results, which is what I'm actually interested in, and lengthens the output by 20x.

Describe the solution you'd like

Add a verbosity parameter to .test() that gets passed through to the verbose parameter of self.trainer.test().
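
For illustration, a minimal sketch of what the requested call could look like; the verbose keyword on NeuralProphet's .test() is the proposed addition, not an existing parameter:

```python
import pandas as pd
from neuralprophet import NeuralProphet

# df has the usual NeuralProphet columns: 'ds' (timestamps) and 'y' (values).
df = pd.read_csv("data.csv")
df_train, df_test = df.iloc[:-48], df.iloc[-48:]

m = NeuralProphet(epochs=10)
m.fit(df_train)

# Proposed: forward this flag to Lightning's Trainer.test(..., verbose=...)
# so the metrics table is not printed for every fold.
metrics = m.test(df_test, verbose=False)
```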

Describe alternatives you've considered

There seems to be no good alternative -- Lightning doesn't use a logger, it just print()s the output based on the verbose flag. What I've ended up doing in the meantime is disabling stdout altogether when running .test(), then re-enabling it before printing out the trial results.
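
That workaround needs only the standard library; redirect_stdout is a contextlib context manager, while m and df_test stand in for the fitted model and holdout frame:

```python
import io
from contextlib import redirect_stdout

# Swallow everything .test() prints to stdout...
with redirect_stdout(io.StringIO()):
    metrics = m.test(df_test)

# ...then print normally again for the results we actually want.
print(metrics)
```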

Additional context

N/A

leoniewgnr (Collaborator) commented

Hi @w-biggs, thanks for raising this. Most of our core developers are on summer break right now, so I'll try to help. Unfortunately, I don't quite follow the problem yet. Could you elaborate?

w-biggs (Author) commented Jul 25, 2023

When cross-validating, every time a model is tested, Lightning outputs something like:

       Test metric             DataLoader 0
────────────────────────────────────────────────
        Loss_test          0.0026712114922702312
         MAE_val            18.709611892700195
        RMSE_val             22.74985122680664
      RegLoss_test                  0.0
────────────────────────────────────────────────

If I'm doing 100 tuning trials with 5 cross-validation splits each, this table gets printed 500 times. In a notebook that becomes super unwieldy, and it drowns out the information I'm actually interested in. Lightning's .test() function, which produces this output, has a verbose flag that can disable it. It would be nice if NeuralProphet's .test() function could also take a verbosity parameter, which would then be passed to Lightning's .test() when it's called.
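
For reference, the knob being asked for already exists on Lightning's side: Trainer.test() accepts verbose (default True). Here model and test_loader stand for an already-built LightningModule and DataLoader:

```python
import pytorch_lightning as pl

trainer = pl.Trainer()
# verbose=False suppresses the per-dataloader metrics table shown above.
results = trainer.test(model, dataloaders=test_loader, verbose=False)
```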

ziqin8 (Contributor) commented Aug 24, 2023

The easiest solution to me is to surface the verbose argument in the .test() API.
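
Roughly, that pass-through could look like the sketch below; this is not the actual forecaster code, and _make_test_dataloader is a hypothetical stand-in for however the test dataloader is built today:

```python
class NeuralProphet:
    # ... existing attributes include self.model and self.trainer (pl.Trainer) ...

    def test(self, df, verbose: bool = True):
        """Evaluate on holdout data; verbose is forwarded to Lightning."""
        loader = self._make_test_dataloader(df)  # hypothetical helper
        return self.trainer.test(self.model, dataloaders=loader, verbose=verbose)
```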

ziqin8 (Contributor) commented Aug 24, 2023

Another verbosity issue I ran into is with the fit method: if learning_rate='auto', there is no way to disable the learning rate finder's progress bar, since minimal=True and progress=False only control the main fitting progress bar.
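
As a stopgap, the same redirection trick works here too, under the assumption that the progress bar is tqdm-style and writes to stderr (tqdm's default); whether it catches the LR finder's bar depends on the progress bar implementation in use:

```python
import io
from contextlib import redirect_stderr

# Hide stderr-based progress bars (e.g. the LR finder's) during fit.
with redirect_stderr(io.StringIO()):
    metrics = m.fit(df_train)
```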
