After the total training loss stabilized, it decreased again when training was resumed #306
Comments
@xl4533 there is a learning rate scheduler that updates the LR depending on the current epoch and the hyperparameters you set in train.py. If you change the number of training epochs after starting training, this may affect your results.
See #238
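To illustrate the point above, here is a minimal sketch of an epoch-dependent cosine LR schedule. The function name, `lr0`, and `lrf` values are illustrative assumptions, not the repo's actual defaults; the point is only that the LR at a given epoch depends on the *total* epoch count, so changing it (or restarting the schedule) changes the LR:

```python
import math

def cosine_lr(epoch, total_epochs, lr0=0.01, lrf=0.2):
    """Hypothetical cosine decay from lr0 down to lr0 * lrf over total_epochs.

    Because total_epochs appears in the formula, the LR at a fixed epoch
    differs when the planned number of epochs differs -- which is why
    changing epochs (or resuming with a reset schedule) moves the LR.
    """
    return lr0 * ((1 - math.cos(math.pi * epoch / total_epochs)) / 2 * (lrf - 1) + 1)

# At epoch 300 of a 3000-epoch run the LR is still near lr0,
# while at epoch 300 of a 300-epoch run it has fully decayed to lr0 * lrf.
lr_long = cosine_lr(300, 3000)
lr_short = cosine_lr(300, 300)
```

So if a resumed run restarts this schedule (or uses a different total epoch count), the LR jumps and a fresh decay begins, which can make a previously flat loss move again.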
Thank you very much. I'm trying to adjust these parameters to see how they affect performance.
@xl4533 you're welcome! Feel free to experiment with different hyperparameters and let us know if you have any more questions. Good luck with your adjustments! 🚀
I migrated a COCO-pretrained model to a new dataset for training, and it was normal at the beginning. I set 3000 epochs, but after hundreds of epochs the total loss was almost unchanged. I stopped training at that point, but when I resumed training from the saved model, the total loss dropped again. I want to know why. Does the learning rate drop to a very low level after a certain epoch?