Commit 398a232

lint fine-tuning docs

1 parent: b64a49b
File tree: 1 file changed (+2 −2 lines)

docs/guide/training-techniques/fine_tuning.md

Lines changed: 2 additions & 2 deletions

@@ -80,6 +80,6 @@ There are a number of considerations and changes you may want to make to trainin
 
 Key differences to training from scratch are:
 
-- **Decrease the learning rate**: It is typically best to use a lower learning rate for fine-tuning a pre-trained model, compared to the optimal LR for from-scratch training.
+- **Decrease the learning rate**: It is typically best to use a lower learning rate for fine-tuning a pre-trained model, compared to the optimal LR for from-scratch training.
 - **Update energy shifts**: As discussed above, you will likely want to update the atomic energy shifts of the model to match the settings (and thus absolute energies) of your data, to ensure smooth fine-tuning.
-- **Fixed model hyperparameters**: When fine-tuning, the architecture of the pre-trained model (number of layers, _l_-max, radial cutoff, etc. – e.g. provided on [nequip.net](https://www.nequip.net/)) cannot be modified. When comparing the performance of fine-tuning and from-scratch training, it is advised to use the same model hyperparameters for a fair comparison.
+- **Fixed model hyperparameters**: When fine-tuning, the architecture of the pre-trained model (number of layers, _l_-max, radial cutoff, etc. – e.g. provided on [nequip.net](https://www.nequip.net/)) cannot be modified. When comparing the performance of fine-tuning and from-scratch training, it is advised to use the same model hyperparameters for a fair comparison.
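The "update energy shifts" step in the docs above is commonly implemented as a least-squares fit of per-species reference energies against the total energies of the new dataset. The sketch below illustrates that idea only; the function name, species, and energy values are invented for the example, and this is not NequIP's actual implementation:

```python
import numpy as np

def fit_energy_shifts(compositions, total_energies):
    """Least-squares fit of per-species atomic energy shifts.

    compositions: (n_frames, n_species) matrix of atom counts per frame.
    total_energies: (n_frames,) total energies computed with the new
    dataset's settings/level of theory.
    Returns per-species shifts E0 such that compositions @ E0 approximates
    total_energies. (Conceptual sketch, not the NequIP implementation.)
    """
    E0, *_ = np.linalg.lstsq(compositions, total_energies, rcond=None)
    return E0

# Toy example with two species (H, O) and hypothetical shift values.
comps = np.array([[2.0, 1.0],   # H2O:  2 H, 1 O
                  [1.0, 1.0],   # OH:   1 H, 1 O
                  [2.0, 2.0]])  # H2O2: 2 H, 2 O
true_E0 = np.array([-0.5, -75.0])  # assumed "true" per-atom shifts
energies = comps @ true_E0         # synthetic total energies

shifts = fit_energy_shifts(comps, energies)
print(shifts)  # recovers the per-species shifts -0.5 and -75.0
```

Fitting the shifts on the new data, rather than keeping the pre-trained model's values, keeps the model's predicted absolute energies consistent with the fine-tuning labels, which is the point of the "update energy shifts" advice.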
