
support training with -f or -t argument #1340

Merged
merged 8 commits into deepmodeling:devel on Sep 27, 2023

Conversation

@njzjz njzjz commented Sep 21, 2023

Fix #1122.


codecov bot commented Sep 21, 2023

Codecov Report

Attention: 3 lines in your changes are missing coverage. Please review.

Comparison: base (9b44711) 48.54% vs. head (3ee219b) 48.64%.

Additional details and impacted files
@@            Coverage Diff             @@
##            devel    #1340      +/-   ##
==========================================
+ Coverage   48.54%   48.64%   +0.09%     
==========================================
  Files          82       82              
  Lines       14659    14681      +22     
==========================================
+ Hits         7116     7141      +25     
+ Misses       7543     7540       -3     
| Files | Coverage Δ |
| --- | --- |
| dpgen/generator/arginfo.py | 100.00% <100.00%> (ø) |
| dpgen/generator/run.py | 67.75% <88.00%> (+0.33%) ⬆️ |



njzjz commented Sep 21, 2023

@wanghan-iapcm A discussion is needed: should these flags only take effect in iteration 0, so that they can work together with training_reuse_iter without modifying the input files?
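
For illustration, here is a minimal Python sketch of the behaviour under discussion: the finetune / init-frz-model flags are appended to the `dp train` command only in iteration 0, so later iterations can still be handled by training_reuse_iter. The helper name `build_train_command` and the parameter keys `training_finetune_model` / `training_init_frozen_model` are assumptions made for this sketch, not necessarily what the PR adds.

```python
# Hypothetical sketch, not the actual dpgen code: pass `dp train -t/--finetune`
# or `-f/--init-frz-model` only in the first iteration.
def build_train_command(iter_index: int, jdata: dict) -> str:
    """Assemble the `dp train` command line for one training task."""
    command = "dp train input.json"
    finetune_model = jdata.get("training_finetune_model")      # assumed key name
    init_frz_model = jdata.get("training_init_frozen_model")   # assumed key name
    if iter_index == 0:
        if finetune_model is not None:
            command += f" --finetune {finetune_model}"          # dp train -t
        elif init_frz_model is not None:
            command += f" --init-frz-model {init_frz_model}"    # dp train -f
    return command


# Example: only iteration 0 gets the extra flag.
print(build_train_command(0, {"training_finetune_model": "pretrained.pb"}))
print(build_train_command(1, {"training_finetune_model": "pretrained.pb"}))
```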

@njzjz njzjz linked an issue Sep 21, 2023 that may be closed by this pull request
@wanghan-iapcm (Contributor) commented

> @wanghan-iapcm A discussion is needed: should these flags only take effect in iteration 0, so that they can work together with training_reuse_iter without modifying the input files?

Yes, they should only work for iteration 0. Actually, finetuning does two things: (1) it computes statistics over the training data and sets the new energy bias in the model, and (2) it trains the new model in --init-model mode.

You may want to check the implementation in dpgen2: deepmodeling/dpgen2#152
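
As a rough illustration of step 1 above (statistics of the training data used to set a new energy bias), one common approach is to re-fit the per-atom-type energy shifts by least squares over the new data before training continues in --init-model fashion. The sketch below illustrates that general technique under stated assumptions; it is not deepmd-kit's or dpgen2's actual code, and the numbers are made up.

```python
# Minimal sketch (assumed technique, not deepmd-kit/dpgen2 code): refit the
# per-type energy bias from the new training data by least squares.
import numpy as np

def refit_energy_bias(type_counts: np.ndarray, energies: np.ndarray) -> np.ndarray:
    """type_counts: (nframes, ntypes) atom counts per frame;
    energies: (nframes,) total energies of the new data.
    Returns one energy shift per atom type."""
    # Solve type_counts @ bias ≈ energies in the least-squares sense.
    bias, *_ = np.linalg.lstsq(type_counts, energies, rcond=None)
    return bias

# Toy example: three frames of a two-element system.
counts = np.array([[4.0, 2.0], [3.0, 3.0], [5.0, 1.0]])
e_tot = np.array([-12.1, -11.4, -13.0])
print(refit_energy_bias(counts, e_tot))
```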

@njzjz njzjz marked this pull request as ready for review September 26, 2023 01:40
@wanghan-iapcm wanghan-iapcm merged commit 4a32867 into deepmodeling:devel Sep 27, 2023
7 checks passed
Successfully merging this pull request may close these issues: Finetune pretrained DPA-1 model with dpgen (#1122).