[new] Add a training trick #131

Open
wants to merge 2 commits into master
Conversation

LindaCY commented Jan 19, 2019

Description:
Add a training trick: halve the learning rate if the performance on the metrics has not improved for [halve_lr_epochs] epochs, and then resume training from the previously best-performing model.

"halve_lr_epochs" denotes the epochs of which performance on metrics not improving. Default: -1 (never use it).

For example, we can use "halve_lr_epochs" as follows:
trainer = Trainer(model=model, n_epochs=100, optimizer=Adam(lr=0.01),
                  validate_every=10, train_data=train_data, dev_data=dev_data,
                  loss=CrossEntropyLoss(), metrics=AccuracyMetric(),
                  use_tqdm=True, halve_lr_epochs=3)

Main reason: many empirical experiments have shown that this kind of trick can make training more efficient and improve the final performance of the model.

Checklist (check whether each of the items below is complete)

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [$CATEGORY] (e.g. [bugfix] for bug fixes, [new] for new features, [test] for test changes, [rm] for removing old code)
  • Changes are complete (i.e. I finished coding on this PR; only submit the PR after the changes are complete)
  • All changes have test coverage (the modified parts pass the tests; for changes to fastnlp/fastnlp/, test code must be provided in fastnlp/test/)
  • Code is well-documented (write the comments carefully; the API documentation is extracted from the comments)
  • To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change (if the change affects the examples or tutorials, contact a core developer)

Changes (describe each modification item by item):

  • Added an attribute halve_lr_epochs to the Trainer class in trainer.py
  • Added the code for the training trick to trainer.py: if the score on the metric does not improve for [halve_lr_epochs] epochs, the learning rate is halved and training restarts from the previously best-performing model (a simplified sketch of this logic is shown below).
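For illustration only, the core of the trick could look roughly like the following minimal sketch in plain PyTorch. This is not the actual trainer.py code from this PR; train_one_epoch and validate are hypothetical placeholders for the training and evaluation steps, and model/optimizer are assumed to be a standard torch model and optimizer.

import copy

best_score = float("-inf")
best_state = copy.deepcopy(model.state_dict())
epochs_without_improvement = 0

for epoch in range(n_epochs):
    train_one_epoch(model, optimizer, train_data)   # placeholder: one epoch of training
    score = validate(model, dev_data)               # placeholder: metric score on dev_data

    if score > best_score:                          # new best model: remember its weights
        best_score = score
        best_state = copy.deepcopy(model.state_dict())
        epochs_without_improvement = 0
    else:
        epochs_without_improvement += 1

    # the trick: after halve_lr_epochs epochs without improvement,
    # halve the learning rate and restart from the previous best model
    if halve_lr_epochs > 0 and epochs_without_improvement >= halve_lr_epochs:
        for group in optimizer.param_groups:
            group["lr"] /= 2
        model.load_state_dict(best_state)
        epochs_without_improvement = 0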

Mention (tag people to review your PR):

@people who have modified this file
@core developers

Add a training trick:
Halve the learning rate if the performance on the metrics has not improved for [halve_lr_epochs] epochs, and then restart training by loading the previous best model.
codecov-io commented Jan 19, 2019

Codecov Report

Merging #131 into master will increase coverage by 6.6%.
The diff coverage is 77.5%.


@@           Coverage Diff            @@
##           master    #131     +/-   ##
========================================
+ Coverage    67.9%   74.5%   +6.6%     
========================================
  Files          90      88      -2     
  Lines        6306    7265    +959     
========================================
+ Hits         4282    5413   +1131     
+ Misses       2024    1852    -172
Impacted Files Coverage Δ
fastNLP/io/config_io.py 83.22% <ø> (+0.64%) ⬆️
fastNLP/core/instance.py 92.85% <ø> (ø) ⬆️
fastNLP/io/base_loader.py 57.57% <ø> (+3.03%) ⬆️
fastNLP/api/examples.py 0% <0%> (ø) ⬆️
fastNLP/core/utils.py 61.51% <100%> (+1.37%) ⬆️
test/models/test_bert.py 100% <100%> (ø)
test/io/test_dataset_loader.py 100% <100%> (ø) ⬆️
test/api/test_processor.py 100% <100%> (ø) ⬆️
fastNLP/io/embed_loader.py 57.81% <100%> (+2.07%) ⬆️
test/core/test_callbacks.py 100% <100%> (ø) ⬆️
... and 40 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 0378545...fa300ce.

xpqiu requested review from yhcc and FengZiYjun on January 20, 2019 at 13:16
FengZiYjun (Contributor) commented
Sorry for the late reply.
This trick could be implemented elegantly with callbacks, without hard-coding it into the trainer. I am considering merging this PR and moving the changes to a new callback.
Thanks for your contribution.
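As a rough illustration of the callback idea, a sketch along the following lines could work. It assumes a Callback base class with a hook called after each validation and with access to the trainer's model and optimizer; the names Callback, on_valid_end, self.model and self.optimizer are assumptions for illustration, not the exact fastNLP callback API.

import copy

class HalveLRCallback(Callback):                     # assumed Callback base class
    def __init__(self, halve_lr_epochs=3):
        super().__init__()
        self.halve_lr_epochs = halve_lr_epochs
        self.best_score = float("-inf")
        self.best_state = None
        self.stale_epochs = 0

    def on_valid_end(self, score):                   # assumed hook run after each validation
        if score > self.best_score:                  # new best model: remember its weights
            self.best_score = score
            self.best_state = copy.deepcopy(self.model.state_dict())
            self.stale_epochs = 0
            return
        self.stale_epochs += 1
        if self.stale_epochs >= self.halve_lr_epochs:
            # halve the learning rate and roll back to the best checkpoint seen so far
            for group in self.optimizer.param_groups:
                group["lr"] /= 2
            self.model.load_state_dict(self.best_state)
            self.stale_epochs = 0

Keeping the trick in a callback leaves the training loop untouched and lets users opt in explicitly, e.g. Trainer(..., callbacks=[HalveLRCallback(3)]), assuming the Trainer accepts a callbacks argument.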

LindaCY (Author) commented Jan 25, 2019

@FengZiYjun Thanks for your review~
