
Releases: Noble-Lab/casanovo

4.2.1 - 2024-06-25 (ac2ddec)

Fixed

  • Pin NumPy version to below v2.0 to ensure compatibility with current DepthCharge version.
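Such a pin can also be checked at runtime. A minimal sketch of a hypothetical guard (the `numpy_is_compatible` helper is illustrative, not part of Casanovo):

```python
def numpy_is_compatible(version: str) -> bool:
    """Return True for NumPy versions below 2.0, matching the pin above."""
    major = int(version.split(".")[0])
    return major < 2
```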

4.2.0 - 2024-05-14 (1fcec6a)

Added

  • A deprecation warning will be issued when deprecated config options are used in the config file or in the model weights file.
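One common way to implement this is to map deprecated keys to their replacements and warn while migrating. A hypothetical sketch (the mapping and `migrate_config` function are illustrative, not Casanovo's actual code):

```python
import warnings

# Hypothetical mapping of deprecated config options to their replacements.
DEPRECATED_OPTIONS = {"max_iters": "cosine_schedule_period_iters"}

def migrate_config(config: dict) -> dict:
    """Warn about deprecated options and rename them to their new keys."""
    migrated = {}
    for key, value in config.items():
        if key in DEPRECATED_OPTIONS:
            new_key = DEPRECATED_OPTIONS[key]
            warnings.warn(
                f"Config option '{key}' is deprecated; use '{new_key}' instead.",
                DeprecationWarning,
            )
            migrated[new_key] = value
        else:
            migrated[key] = value
    return migrated
```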

Changed

  • Config option max_iters has been renamed to cosine_schedule_period_iters to better reflect that it controls the number of iterations in the half period of the cosine learning rate schedule.
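The renamed option sets the half period of a cosine decay: the learning rate starts at its base value and follows half a cosine wave down toward zero over that many iterations. A minimal sketch of such a schedule (function name and warm-restart behavior are illustrative):

```python
import math

def cosine_lr(step: int, base_lr: float, period_iters: int) -> float:
    """Half-cosine decay: base_lr at step 0, near zero at step == period_iters."""
    step = min(step, period_iters)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * step / period_iters))
```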

Fixed

  • Fix beam search caching failure when multiple beams have an equal predicted peptide score by breaking ties randomly.
  • The mzTab output file now has proper line endings regardless of platform, fixing the extra \r found when run on Windows.
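The tie-breaking fix can be pictured as sorting beams on a random secondary key, so equal-scoring beams never compare as identical. A hypothetical sketch, not Casanovo's actual implementation:

```python
import random

def select_top_beams(beams, k, rng=None):
    """Keep the k best (peptide, score) beams, breaking score ties randomly."""
    rng = rng or random.Random()
    # Negate the score for descending order; the random value only matters
    # when two beams have exactly equal scores.
    return sorted(beams, key=lambda beam: (-beam[1], rng.random()))[:k]
```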

4.1.0 - 2024-02-16 (b7f8ff9)

Changed

  • The train_from_scratch config option is no longer needed: training proceeds from an existing model weights file whenever one is given as an argument to casanovo train.

Fixed

  • Fixed beam search decoding error due to non-deterministic selection of beams with equal scores.

4.0.1 - 2023-12-25 (2e8f579)

Fixed

  • Fix automatic PyPI upload.

4.0.0 - 2023-12-22 (3c2d3f5)

Added

  • Checkpoints include model parameters, allowing for mismatches with the provided configuration file.
  • accelerator parameter controls the accelerator (CPU, GPU, etc.) that is used.
  • devices parameter controls the number of accelerators used.
  • val_check_interval parameter controls the frequency of both validation epochs and model checkpointing during training.
  • train_label_smoothing parameter controls the amount of label smoothing applied when calculating the training loss.
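Label smoothing redistributes a small amount of probability mass from the target class across all classes, which discourages overconfident predictions. A minimal sketch of one common variant (Casanovo's exact formulation may differ):

```python
def smooth_labels(num_classes: int, target: int, smoothing: float) -> list:
    """Spread `smoothing` mass uniformly; keep 1 - smoothing on the target."""
    off_value = smoothing / num_classes
    dist = [off_value] * num_classes
    dist[target] += 1.0 - smoothing
    return dist
```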

Changed

  • The CLI has been overhauled to use subcommands.
  • Upgraded to Lightning >=2.0.
  • Checkpointing is configured to save the top-k models instead of all.
  • Log steps rather than epochs as units of progress during training.
  • Validation performance metrics are logged (and added to tensorboard) at the validation epoch, and training loss is logged at the end of the training epoch, i.e. training and validation metrics are logged asynchronously.
  • Irrelevant warning messages on the console output and in the log file are no longer shown.
  • Nicely format logged warnings.
  • every_n_train_steps has been renamed to val_check_interval, in accordance with the corresponding PyTorch Lightning parameter.
  • Training batches are randomly shuffled.
  • Upgraded to Torch >=2.1.

Removed

  • Remove config option for a custom PyTorch Lightning logger.
  • Remove superfluous custom_encoder config option.

Fixed

  • Casanovo runs on CPU and can pass all tests.
  • Correctly refer to input peak files by their full file path.
  • Specifying custom residues to retrain Casanovo is now possible.
  • Upgrade to depthcharge v0.2.3 to fix sinusoidal encoding and for the PeptideTransformerDecoder hotfix.
  • Correctly report amino acid precision and recall during validation.

3.5.0 - 2023-08-16

Fixed

  • Don't try to assign a non-existing output writer during eval mode.
  • Specifying custom residues to retrain Casanovo is now possible.

3.4.0 - 2023-06-19 (9134278)

Added

  • every_n_train_steps parameter now controls the frequency of both validation epochs and model checkpointing during training.

Changed

  • We now log steps rather than epochs as units of progress during training.
  • Validation performance metrics are logged (and added to tensorboard) at the validation epoch, and training loss is logged at the end of the training epoch, i.e. training and validation metrics are logged asynchronously.

Fixed

  • Correctly refer to input peak files by their full file path.

3.3.0 - 2023-04-04 (d023fa9)

Added

  • Included the min_peptide_len parameter in the configuration file to restrict predictions to peptides of a minimum length.
  • Export multiple PSMs per spectrum using the top_match parameter in the configuration file.
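Together, these two options act as a filter on the candidate PSMs for each spectrum: drop short peptides, then keep the highest-scoring matches. A hypothetical sketch (function name is illustrative):

```python
def filter_psms(psms, min_peptide_len, top_match):
    """Keep at most top_match highest-scoring PSMs of sufficient length.

    psms: list of (peptide, score) candidates for one spectrum.
    """
    kept = [
        (peptide, score)
        for peptide, score in psms
        if len(peptide) >= min_peptide_len
    ]
    kept.sort(key=lambda psm: psm[1], reverse=True)
    return kept[:top_match]
```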

Changed

  • Calculate the amino acid scores as the average of the raw amino acid scores and the overall peptide score.
  • Spectra from mzML and mzXML peak files are referred to by their scan numbers in the mzTab output instead of their indexes.
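The score averaging can be expressed directly; a one-line sketch (function name is illustrative):

```python
def rescore_amino_acids(aa_scores, peptide_score):
    """Average each raw amino acid score with the overall peptide score."""
    return [(score + peptide_score) / 2 for score in aa_scores]
```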

Fixed

  • Verify that the final predicted amino acid is the stop token.
  • Spectra are correctly matched to their input peak file when analyzing multiple files simultaneously.
  • The score of the stop token is taken into account when calculating the predicted peptide score.
  • Peptides with incorrect N-terminal modifications (multiple or internal positions) are no longer predicted.
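The N-terminal modification check amounts to a positional constraint: such a modification may occur at most once, and only at the first position. A hypothetical validation sketch, not Casanovo's actual code:

```python
def has_valid_n_term_mods(tokens, n_term_mods):
    """An N-terminal modification may occur at most once, at position 0 only."""
    positions = [i for i, token in enumerate(tokens) if token in n_term_mods]
    return positions in ([], [0])
```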

3.2.0 - 2022-11-18 (31a5936)

Changed

  • Update PyTorch Lightning global seed setting.
  • Use beam search decoding rather than greedy decoding to predict the peptides.

Fixed

  • Don't use model weights with incorrect major version number.
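The compatibility rule above can be sketched as a major-version comparison (the `weights_compatible` helper is hypothetical, not Casanovo's actual implementation):

```python
def weights_compatible(weights_version: str, casanovo_version: str) -> bool:
    """Accept model weights only when the major version numbers match."""
    return weights_version.split(".")[0] == casanovo_version.split(".")[0]
```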

3.1.0 - 2022-11-03 (c2594d4)

Added

  • Matching model weights are automatically downloaded from GitHub.
  • Automatically calculate testing code coverage.

Changed

  • Maximum supported Python version updated to 3.10.
  • A config file no longer needs to be specified explicitly; the default config is used instead.
  • Initialize Tensorboard during training by passing its directory location.

Fixed

  • Don't use worker threads on Windows and macOS.
  • Fix for running when no GPU is available.