
Validation loss vs. metrics? #66

Open
cy94 opened this issue Mar 14, 2024 · 0 comments

cy94 commented Mar 14, 2024

Dear authors, @evelinehong,

Thanks for the interesting paper and for releasing the code and models :) I was able to reproduce the ScanQA results on the validation set. I also added computation of the validation loss in `VQATask.valid_step`, similar to the training step, by calling `forward` in addition to `predict_answers`. However, I noticed that while the validation loss goes up, the validation metrics also go up.
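For reference, here is a minimal sketch of the modification described above. The `ToyVQAModel` and `VQATaskWithValLoss` classes are stand-ins I wrote for illustration, not the actual 3D-LLM/LAVIS code; the only assumption is the LAVIS-style convention that `forward` returns a dict containing `"loss"` while `predict_answers` returns generated answer strings:

```python
# Hypothetical sketch: computing validation loss in valid_step by calling
# forward() (teacher-forced, as in training) alongside predict_answers()
# (autoregressive generation, used for the metrics).
import torch
import torch.nn as nn


class ToyVQAModel(nn.Module):
    """Stand-in for the real model: forward() returns a loss dict,
    predict_answers() returns predicted answer strings."""

    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 2)
        self.criterion = nn.CrossEntropyLoss()

    def forward(self, samples):
        logits = self.linear(samples["features"])
        return {"loss": self.criterion(logits, samples["labels"])}

    def predict_answers(self, samples):
        logits = self.linear(samples["features"])
        return [str(i) for i in logits.argmax(dim=-1).tolist()]


class VQATaskWithValLoss:
    """valid_step that returns both the generated answers (for metrics)
    and the teacher-forced loss, mirroring the training step."""

    def valid_step(self, model, samples):
        answers = model.predict_answers(samples)
        with torch.no_grad():
            loss = model(samples)["loss"]  # same call as in train_step
        return answers, loss.item()


samples = {"features": torch.randn(3, 4), "labels": torch.tensor([0, 1, 0])}
answers, val_loss = VQATaskWithValLoss().valid_step(ToyVQAModel(), samples)
print(len(answers), val_loss)
```

The point of the separation is that the loss is computed with teacher forcing (ground-truth tokens fed in), while the metrics come from free-running generation, so the two need not move together.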
[Screenshots: validation loss and validation metric curves over training]

Did you notice something similar while training, or do you have any suggestions as to why this could happen? I would expect validation loss and metrics to be inversely correlated (loss down, metrics up), even though the validation loss is not computed from autoregressive generation.

Best,
Chandan
