
Add api MultiplicativeDecay #38250

Merged: 8 commits merged into PaddlePaddle:develop on Jan 7, 2022

Conversation

guguguzi (Contributor)

PR types

New features

PR changes

APIs

Describe

Add MultiplicativeDecay, refer to torch.optim.lr_scheduler.MultiplicativeLR
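
For context, a minimal standalone sketch (not the PR's code) of the multiplicative decay rule that this scheduler and torch's MultiplicativeLR follow, where each epoch's rate is the previous rate scaled by a user-supplied lambda:

```python
# Sketch of the multiplicative decay rule:
#   lr_epoch = lr_{epoch-1} * lr_lambda(epoch), starting from the base rate.
# The constant 0.95 factor is a hypothetical example.
base_lr = 0.5
lr_lambda = lambda epoch: 0.95

lr = base_lr
for epoch in range(1, 4):
    lr *= lr_lambda(epoch)
    print(f"epoch {epoch}: lr ~= {lr:.3f}")
# prints roughly 0.475, 0.451, 0.429
```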

CLAassistant commented Dec 17, 2021

CLA assistant check
All committers have signed the CLA.

@paddle-bot-old

Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

guguguzi changed the title from "My cool stuff" to "Add api MultiplicativeDecay" on Dec 17, 2021
jerrywgz previously approved these changes Dec 20, 2021

zhwesky2010 (Contributor) left a comment

The implementation under python/paddle/fluid/dygraph/learning_rate_scheduler.py is not needed; that directory holds deprecated APIs. Implementing it in python/paddle/optimizer/lr.py is enough.

@paddle-bot-old

Sorry to inform you that f9da7da's CIs have passed for more than 7 days. To prevent PR conflicts, you need to re-run all CIs manually.

guguguzi (Contributor, Author) commented Dec 26, 2021

> The implementation under python/paddle/fluid/dygraph/learning_rate_scheduler.py is not needed; that directory holds deprecated APIs. Implementing it in python/paddle/optimizer/lr.py is enough.

Done, thanks.

zhwesky2010 previously approved these changes Dec 28, 2021
paddle-bot-old bot commented Jan 3, 2022

Sorry to inform you that 788dbcd's CIs have passed for more than 7 days. To prevent PR conflicts, you need to re-run all CIs manually.

Ligoml previously approved these changes Jan 4, 2022
scheduler.step() # If you update learning rate each step
# scheduler.step() # If you update learning rate each epoch

# train on static graph mode
Contributor (review comment on the excerpt above):

Static graph mode is no longer recommended; the example here only needs to show dynamic graph mode~

Contributor Author:

Done, thanks~
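
Following the suggestion to keep only the dynamic-graph example, a usage sketch along those lines could look like the following. The Linear model and random inputs are placeholders, and the MultiplicativeDecay signature is assumed to mirror the other paddle.optimizer.lr schedulers (learning_rate, lr_lambda, verbose):

```python
import paddle

# Hypothetical model and data, just to drive the scheduler.
linear = paddle.nn.Linear(10, 10)
scheduler = paddle.optimizer.lr.MultiplicativeDecay(
    learning_rate=0.5, lr_lambda=lambda epoch: 0.95, verbose=True)
sgd = paddle.optimizer.SGD(learning_rate=scheduler,
                           parameters=linear.parameters())

for epoch in range(5):
    for batch_id in range(3):
        x = paddle.uniform([10, 10])
        loss = paddle.mean(linear(x))
        loss.backward()
        sgd.step()
        sgd.clear_grad()
        # scheduler.step()  # call here if you update the learning rate each step
    scheduler.step()         # call here if you update the learning rate each epoch
```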

guguguzi dismissed stale reviews from Ligoml and zhwesky2010 via 5eaaf4d on January 4, 2022, 05:06
Ligoml previously approved these changes Jan 4, 2022
Ligoml (Contributor) left a comment:

LGTM for docs

Ligoml previously approved these changes Jan 6, 2022
if self.last_epoch > 0:
    return self.last_lr * self.lr_lambda(self.last_epoch)
else:
    return self.last_lr
Contributor (review comment on the excerpt above):

According to the description of last_epoch in the doc, would returning self.base_lr be better?

Contributor Author:

Done, thanks~
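
For reference, a sketch of what the revised branch presumably looks like after adopting the suggestion, with base_lr returned when last_epoch is not positive (assumed, not copied from the merged diff):

```python
def get_lr(self):
    # Scale the previous rate by lr_lambda once training has progressed;
    # otherwise fall back to the initial base_lr.
    if self.last_epoch > 0:
        return self.last_lr * self.lr_lambda(self.last_epoch)
    return self.base_lr
```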

jeff41404 (Contributor) left a comment:

LGTM

XiaoguangHu01 (Contributor) left a comment:

LG API

jeff41404 merged commit 4a3a2d6 into PaddlePaddle:develop on Jan 7, 2022