mAP doesn't increase after a few epochs #1219

Closed
joel5638 opened this issue May 22, 2020 · 7 comments

@joel5638

Training on custom data with 7 classes.
The training set has 23,500 images.
The validation set has 11,000 images.
Training for 273 epochs.

After 40 epochs, the mAP stays at 0.605 and doesn't go up. It stays the same.
Screenshot from 2020-05-22 14-47-25

@joel5638 joel5638 added the bug Something isn't working label May 22, 2020
@glenn-jocher glenn-jocher removed the bug Something isn't working label May 22, 2020
@glenn-jocher
Member

What's your question?

@joel5638
Author

@glenn-jocher Is it normal? Should we wait for all the epochs to complete even though there is no increase in mAP?

@glenn-jocher
Member

I would 1. update your repo, as 300 epochs is the new default, and 2. train fully with all default settings before analyzing your results.

@glenn-jocher
Member

glenn-jocher commented May 22, 2020

@joel5638 remember that some hyperparameters, like the cosine learning rate schedule, are programmed to change over the course of the full training, so they have not run their course if you stop halfway.
#238 (comment)
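As a rough illustration (a minimal sketch of the idea, not the repo's exact train.py code), a cosine schedule like the one below only reaches its final learning rate at the last planned epoch, so stopping at epoch 40 of 300 leaves the LR far above where it would end up:

```python
# Minimal sketch of a cosine LR schedule spanning the full planned run.
# Constants are illustrative, not the repository's exact values.
import math

import torch
from torch.optim.lr_scheduler import LambdaLR

model = torch.nn.Linear(10, 1)                             # stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

epochs = 300                                               # the schedule is defined over ALL epochs
lf = lambda x: ((1 + math.cos(x * math.pi / epochs)) / 2) * 0.9 + 0.1  # cosine decay 1.0 -> 0.1
scheduler = LambdaLR(optimizer, lr_lambda=lf)

for epoch in range(epochs):
    # ... one training epoch here ...
    scheduler.step()  # the LR only reaches its floor at the final epoch
```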

What you want to look for is overfitting as an indication that you should stop; this shows up as your validation losses increasing. So rather than pasting an anecdotal screenshot of your last few epochs, it would be much more informative to upload your results.png file, which you can create at any point during training with utils.utils.plot_results(), and then you can see the validation loss trends.
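For reference, a minimal sketch of generating that plot, assuming the utils/utils.py layout of the repo at the time (the module path may differ in your version):

```python
# Run from the repository root after (or during) training.
# plot_results() reads the results*.txt log in the working directory
# and writes results.png with the train/val loss and mAP curves.
from utils.utils import plot_results

plot_results()
```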

@www7890

www7890 commented May 23, 2020

@glenn-jocher Is it normal? Should we wait for all the epochs to complete even though there is no increase in mAP?

It is totally normal. In the usual case, mAP increases quickly for roughly the first fifty to one hundred epochs, then oscillates and grows only slightly until it seems to stop growing (it actually still grows, until the validation cls, obj, and CIoU losses start to increase).

It may take at least 300 epochs, possibly 400-500. Please watch the validation losses. If val cls loss is the first to increase, then use label smoothing for classification (in utils.py, search for "eps" and change it from zero to 0.01-0.1), as sketched below.
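A minimal sketch of that label-smoothing change; the helper name and the way eps is applied are assumed from the utils.py of that era, so check your repo version:

```python
# Sketch of BCE label smoothing for the classification loss (assumed helper name).
def smooth_BCE(eps=0.1):
    # eps=0.0 gives hard targets (1.0 / 0.0); a small nonzero eps softens them,
    # which can delay overfitting of the classification loss.
    return 1.0 - 0.5 * eps, 0.5 * eps


cp, cn = smooth_BCE(eps=0.05)  # positive / negative class targets used when building the cls loss
print(cp, cn)                  # 0.975 0.025
```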

@github-actions

This issue is stale because it has been open 30 days with no activity. Remove the Stale label or comment, or this will be closed in 5 days.

@glenn-jocher
Member

@www7890 thanks for your insights! Your experience aligns with the typical behavior of training YOLOv3. Observing the validation losses is indeed key to understanding the progress of the model. Your explanation will be helpful for other users facing similar questions. If you need further assistance, feel free to ask!
