
Learning Rate scheduler #237

Closed
boynukaline opened this issue Apr 23, 2019 · 1 comment

boynukaline commented Apr 23, 2019

Scheduler (reduce lr at epochs 218, 245, i.e. batches 400k, 450k)

# lf = lambda x: 1 - x / epochs  # linear ramp to zero
# lf = lambda x: 10 ** (-2 * x / epochs)  # exp ramp to lr0 * 1e-2
lf = lambda x: 1 - 10 ** (hyp['lrf'] * (1 - x / epochs))  # inv exp ramp to lr0 * 1e-2
scheduler = optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lf, last_epoch=start_epoch - 1)

At which epochs does the LR reduce, other than 218 and 245? Does it stop reducing after 245? Can we tune this for a custom dataset? What should we do when the loss stagnates?
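
A quick way to see how this ramp behaves is to evaluate the lambda directly. The sketch below uses placeholder values (lr0 = 0.001, hyp['lrf'] = -2, epochs = 273) that are assumptions for illustration, not values confirmed in this thread; note that this lambda form lowers the multiplier a little every epoch rather than stepping only at fixed milestones.

lr0, lrf, epochs = 0.001, -2, 273  # placeholder values, not from this thread

lf = lambda x: 1 - 10 ** (lrf * (1 - x / epochs))  # same lambda as above

# Evaluate the scheduled LR at a few epochs to see the continuous decay
for epoch in (0, 100, 218, 245, 272):
    print(epoch, lr0 * lf(epoch))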

glenn-jocher (Member) commented

See #238 for all LR scheduling concerns.

The original darknet LR scheduler steps at 400k and 450k batches when training COCO to 500200 batches.
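
For comparison, that darknet-style step schedule can be approximated in PyTorch with MultiStepLR. This is a minimal sketch, not the repo's implementation; the epoch milestones assume roughly 1,830 COCO batches per epoch at batch size 64, so the 400k and 450k batch steps land near epochs 218 and 245.

import torch
from torch import optim

model = torch.nn.Linear(1, 1)  # dummy model so the optimizer has parameters
optimizer = optim.SGD(model.parameters(), lr=0.001)

# darknet scales the LR by 0.1 at each step; the milestones here are
# approximate epoch equivalents of the 400k and 450k batch steps
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[218, 245], gamma=0.1)

for epoch in range(273):  # ~500200 batches / ~1830 batches per epoch
    optimizer.step()  # stand-in for one epoch of training
    scheduler.step()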
