Releases: WenjieDu/PyPOTS
v0.6 Pre-release
v0.5 🔥 New Models & Features
Here is a summary of this version's changelog:
- the modules of iTransformer, FiLM, and FreTS are now included in PyPOTS; all three have been implemented as imputation models in this version;
- CSDI is implemented as a forecasting model;
- `MultiHeadAttention` is enabled to work with all attention operators in PyPOTS;
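The idea behind making `MultiHeadAttention` work with every attention operator can be sketched as a multi-head wrapper that takes the attention operator as a pluggable callable. This is a minimal numpy illustration of that design pattern, not PyPOTS' actual implementation; all names and shapes here are assumptions for the sketch.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """One interchangeable attention operator: vanilla scaled dot-product."""
    scores = q @ k.transpose(0, 1, 3, 2) / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the key axis
    return weights @ v, weights

class MultiHeadAttention:
    """Sketch of a multi-head wrapper that delegates to any attention operator."""

    def __init__(self, n_heads, d_model, attn_opt=scaled_dot_product_attention):
        assert d_model % n_heads == 0
        self.n_heads, self.d_k = n_heads, d_model // n_heads
        rng = np.random.default_rng(0)
        self.w_q, self.w_k, self.w_v, self.w_o = (
            rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(4)
        )
        self.attn_opt = attn_opt  # the pluggable operator

    def _split(self, x):
        # (batch, steps, d_model) -> (batch, heads, steps, d_k)
        b, t, _ = x.shape
        return x.reshape(b, t, self.n_heads, self.d_k).transpose(0, 2, 1, 3)

    def __call__(self, x):
        q = self._split(x @ self.w_q)
        k = self._split(x @ self.w_k)
        v = self._split(x @ self.w_v)
        out, weights = self.attn_opt(q, k, v)  # any operator with this signature
        b, h, t, d_k = out.shape
        out = out.transpose(0, 2, 1, 3).reshape(b, t, h * d_k)
        return out @ self.w_o, weights
```

With this shape of interface, swapping in a different operator (e.g. a sparse or frequency-domain attention) only requires passing a different callable with the same `(q, k, v)` signature.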
What's Changed
- Fix failed doc building, fix a bug in gene_random_walk(), and refactor unit testing configs by @WenjieDu in #355
- Implement CSDI as a forecasting model by @WenjieDu in #354
- Update the templates by @WenjieDu in #356
- Implement forecasting CSDI and update the templates by @WenjieDu in #357
- Update README by @WenjieDu in #359
- Update docs by @WenjieDu in #362
- Implement FiLM as an imputation model by @WenjieDu in #369
- Implement FreTS as an imputation model by @WenjieDu in #370
- Implement iTransformer as an imputation model by @WenjieDu in #371
- Add iTransformer, FreTS, FiLM by @WenjieDu in #372
- Fix failed CI testing on macOS with Python 3.7 by @WenjieDu in #373
- Add SaitsEmbedding, fix failed CI on macOS with Python3.7, and update docs by @WenjieDu in #374
- Fix error in gene_random_walk by @WenjieDu in #375
- Try to import torch_geometric only when init Raindrop by @WenjieDu in #381
- Enable all attention operators to work with `MultiHeadAttention` by @WenjieDu in #383
- Fix a bug in gene_random_walk, import pyg only when initing Raindrop, and make MultiHeadAttention work with all attention operators by @WenjieDu in #384
- Refactor code and update docstring by @WenjieDu in #385
- Add the Chinese version of the README file by @Justin0388 in #386
- Refactor code and update docs by @WenjieDu in #387
New Contributors
- @Justin0388 made their first contribution in #386
We would also like to thank Sijia @phoebeysj, Haitao @CowboyH, and Dewang @aizaizai1989 for their help in polishing the Chinese README.
Full Changelog: v0.4.1...v0.5
v0.4.1 🚧 Refactor & Modularization
In this refactoring version, we
- applied the SAITS loss function to the imputation models newly added in v0.4 (Crossformer, PatchTST, DLinear, ETSformer, FEDformer, Informer, and Autoformer), and added the arguments `MIT_weight` and `ORT_weight` to them for users to balance the multi-task learning;
- modularized all neural network models and put their modules in the package `pypots.nn.modules`;
- removed deprecated metric funcs (e.g. `pypots.utils.metrics.cal_mae`, which has been replaced by `pypots.utils.metrics.calc_mae`);
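Per the SAITS paper, the loss combines two objectives: ORT (reconstruction of actually-observed values) and MIT (imputation of values artificially masked out during training), and the `ORT_weight`/`MIT_weight` arguments balance them. Below is a hedged numpy sketch of that weighting, assuming a masked-MAE base metric similar in spirit to `calc_mae`; the helper names are illustrative, not PyPOTS' internals.

```python
import numpy as np

def masked_mae(predictions, targets, masks):
    """MAE computed only at positions where masks == 1
    (approximating the semantics of pypots.utils.metrics.calc_mae)."""
    return np.abs(predictions - targets)[masks == 1].mean()

def saits_loss(reconstruction, X_ori, observed_mask, indicating_mask,
               ORT_weight=1.0, MIT_weight=1.0):
    """Weighted sum of the two SAITS training objectives:
    ORT: reconstruct values that were actually observed;
    MIT: impute values artificially masked out during training."""
    ORT = masked_mae(reconstruction, X_ori, observed_mask)
    MIT = masked_mae(reconstruction, X_ori, indicating_mask)
    return ORT_weight * ORT + MIT_weight * MIT
```

Setting either weight to 0 disables the corresponding objective, which is the knob these new arguments expose.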
What's Changed
- Apply SAITS loss to newly added models and update the docs by @WenjieDu in #346
- Modularize neural network models by @WenjieDu in #348
- Modularize NN models, remove deprecated metric funcs, and update docs by @WenjieDu in #349
- Remove `pypots.imputation.locf.modules` and add assertions for BTTF by @WenjieDu in #350
- Test building package during CI by @WenjieDu in #353
- Avoid the import error `MessagePassing not defined` by @WenjieDu in #351
Full Changelog: v0.4...v0.4.1
v0.4 🔥 New models
- applied the SAITS embedding strategy to models Crossformer, PatchTST, DLinear, ETSformer, FEDformer, Informer, and Autoformer to make them applicable to POTS data as imputation methods;
- fixed a bug in USGAN loss function;
- gathered several Transformer embedding methods into the package `pypots.nn.modules.transformer.embedding`;
- added the attribute `best_epoch` for NN models to record the best epoch number and log it after model training;
- made the self-attention operator replaceable in the class `MultiHeadAttention` for Transformer models;
- renamed the argument `d_inner` of all models in previous versions into `d_ffn`, for unified argument naming and easier understanding;
- removed the deprecated functions `save_model()` and `load_model()` in all NN model classes, which are now replaced by `save()` and `load()`;
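The SAITS embedding strategy mentioned above is what lets ordinary forecasting Transformers handle POTS data: the series is concatenated with its missingness mask before the linear projection to `d_model`. Here is a minimal numpy sketch of that idea; it is an assumption-laden illustration, not the actual `SaitsEmbedding` module.

```python
import numpy as np

def saits_embedding(X, d_model, rng=np.random.default_rng(0)):
    """Sketch of the SAITS embedding strategy: concatenate the series with its
    missingness mask, zero out the NaNs, then linearly project to d_model."""
    missing_mask = (~np.isnan(X)).astype(float)  # 1 = observed, 0 = missing
    X_filled = np.nan_to_num(X)                  # NaNs -> 0 before projection
    W = rng.standard_normal((X.shape[-1] * 2, d_model)) * 0.02
    return np.concatenate([X_filled, missing_mask], axis=-1) @ W
```

Because the mask travels with the values, downstream attention layers can distinguish a true zero from a missing entry.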
What's Changed
- Removing deprecated functions by @WenjieDu in #318
- Add Autoformer as an imputation model by @WenjieDu in #320
- Removing deprecated save_model and load_model, adding the imputation model Autoformer by @WenjieDu in #321
- Simplify MultiHeadAttention by @WenjieDu in #322
- Add PatchTST as an imputation model by @WenjieDu in #323
- Renaming d_inner into d_ffn by @WenjieDu in #325
- Adding PatchTST, renaming d_inner into d_ffn, and refactoring Autoformer by @WenjieDu in #326
- Add DLinear as an imputation model by @WenjieDu in #327
- Add ETSformer as an imputation model by @WenjieDu in #328
- Add Crossformer as an imputation model by @WenjieDu in #329
- Add FEDformer as an imputation model by @WenjieDu in #330
- Add Crossformer, Autoformer, PatchTST, DLinear, ETSformer, FEDformer as imputation models by @WenjieDu in #331
- Refactor embedding package, remove the unused part in Autoformer, and update the docs by @WenjieDu in #332
- Make the self-attention operator replaceable in Transformer by @WenjieDu in #334
- Add informer as an imputation model by @WenjieDu in #335
- Speed up testing procedure by @WenjieDu in #336
- Add Informer, speed up CI testing, and make self-attention operator replaceable by @WenjieDu in #337
- debug USGAN by @AugustJW in #339
- Fix USGAN loss function, and update the docs by @WenjieDu in #340
- Add the attribute `best_epoch` to record the best epoch number by @WenjieDu in #342
- Apply SAITS embedding strategy to newly added models by @WenjieDu in #343
- Release v0.4, apply SAITS embedding strategy to the newly added models, and update README by @WenjieDu in #344
Full Changelog: v0.3.2...v0.4
v0.3.2 🐞 Bugfix
- fixed an issue that stopped us from running Raindrop on multiple CUDA devices;
- added Mean and Median as naive imputation methods;
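For context, mean and median imputation fill each feature's missing values with that feature's observed statistic across the dataset. A hedged numpy sketch of that behavior (not PyPOTS' implementation; the function name and shape convention are assumptions):

```python
import numpy as np

def naive_impute(X, strategy="mean"):
    """Fill NaNs feature-wise with the observed mean or median.
    X: array of shape (n_samples, n_steps, n_features)."""
    stat = np.nanmean if strategy == "mean" else np.nanmedian
    fill = stat(X, axis=(0, 1))            # one statistic per feature
    return np.where(np.isnan(X), fill, X)  # broadcast over samples and steps
```

These naive baselines are useful as sanity checks before reaching for a neural imputation model.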
What's Changed
- Refactor LOCF, fix Raindrop on multiple cuda devices, and update docs by @WenjieDu in #308
- Remind how to display the figs rather than invoking plt.show() by @WenjieDu in #310
- Update the docs and requirements by @WenjieDu in #311
- Fixing some bugs, updating the docs and requirements by @WenjieDu in #312
- Make CI workflows only test with Python v3.7 and v3.11 by @WenjieDu in #313
- Update the docs and release v0.3.2 by @WenjieDu in #314
- Add mean and median as imputation methods, and update docs by @WenjieDu in #317
Full Changelog: v0.3.1...v0.3.2
v0.3.1
This update fixes a bug in the calculation of the delta matrix (time-decay matrix) discussed in #294.
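For readers unfamiliar with the delta matrix: as defined in the GRU-D formulation, it records, per feature, the time elapsed since that feature was last observed. A numpy sketch under that definition follows; PyPOTS' actual implementation may differ in details.

```python
import numpy as np

def calc_delta(timestamps, missing_mask):
    """Time-decay (delta) matrix in the GRU-D sense:
    delta[0] = 0; for t > 0,
      delta[t] = s[t] - s[t-1]              if the feature was observed at t-1,
      delta[t] = s[t] - s[t-1] + delta[t-1] otherwise (gap keeps accumulating).
    timestamps: (n_steps,); missing_mask: (n_steps, n_features), 1 = observed."""
    n_steps, n_features = missing_mask.shape
    delta = np.zeros((n_steps, n_features))
    for t in range(1, n_steps):
        gap = timestamps[t] - timestamps[t - 1]
        delta[t] = np.where(missing_mask[t - 1] == 1, gap, gap + delta[t - 1])
    return delta
```

A subtle off-by-one in which step's mask is consulted is exactly the kind of bug #294 concerns, which is why the recursion above spells it out.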
What's Changed
- Update logo URLs by @WenjieDu in #293
- Fixing the issue in delta calculation by @WenjieDu in #297
- Fixing the issue in time-decay matrix calculation and simplify the code by @WenjieDu in #298
- Roll back the delta calculation of M-RNN to the same with GRU-D by @WenjieDu in #300
Full Changelog: v0.3...v0.3.1
v0.3 coming with new features 😎
Happy New Year, dear friends! 🥳
New features and updated APIs in PyPOTS are brought to you here! In v0.3, we
- added TimesNet as an imputation model;
- simplified the structure of `val_set`. In previous versions, you had to provide `indicating_mask` in the `val_set` dictionary to tell PyPOTS which values to use for validating the model. Now you only need to provide `X_ori` (i.e. `X_intact` before) and `X`, both with their missing data left as NaNs, and PyPOTS will handle everything else to evaluate the model for you;
- enabled PyPOTS to tune hyperparameters for external models (implemented with the PyPOTS framework but not yet integrated into PyPOTS);
- updated the package `pypots.data.saving`: separated the functions for pickle saving and h5py saving, and added `load_dict_from_h5`, which can invert (deserialize) the process of `save_dict_into_h5`;
- fixed some bugs (#255, #263, #266, #280, #282, #286, #289);
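The simplified `val_set` structure can be illustrated with plain numpy: `X_ori` keeps only its natural missingness, `X` additionally masks out values held out for validation, and the indicating mask is recoverable from the difference between the two. This is a sketch of the data layout only; the array sizes and missing rates are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
X_ori = rng.standard_normal((16, 24, 5))        # (samples, steps, features)
X_ori[rng.random(X_ori.shape) < 0.1] = np.nan   # naturally missing values stay NaN

X = X_ori.copy()
X[rng.random(X.shape) < 0.2] = np.nan           # extra holes held out for validation

val_set = {"X": X, "X_ori": X_ori}              # no indicating_mask needed anymore

# The validation positions are recoverable: observed in X_ori but missing in X.
indicating_mask = (~np.isnan(X_ori) & np.isnan(X)).astype(int)
```

This is why the explicit `indicating_mask` entry could be dropped: it carries no information beyond what the two arrays already encode.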
What's Changed
- Code refactor by @WenjieDu in #251
- Adding TimesNet as an imputation model by @WenjieDu in #252
- Adding TimesNet, refactoring code, and updating docs by @WenjieDu in #253
- Fixing CSDI gtmask bug by @WenjieDu in #255
- Fixing CSDI `gt_mask` issue, and setting a fixed random seed for testing cases by @WenjieDu in #256
- Making CSDI return all n_sampling_times imputation samples by @WenjieDu in #258
- Adding get_random_seed(), and adding func calc_quantile_crps() by @WenjieDu in #260
- Making CSDI val process same as the original by @WenjieDu in #262
- Fix missing argument attn_dropout in imputation Transformer by @WenjieDu in #263
- Adding visualization functions by @AugustJW in #267
- Add cluster plotting functions in pypots.utils.visualization by @vemuribv in #182
- Fixing unstable nonstationary norm, adding `utils.visual`, and doing some code refactoring by @WenjieDu in #266
- Updating package `pypots.data.saving` by @WenjieDu in #268
- Enabling hyperparameter tuning for outside models implemented with the PyPOTS framework by @WenjieDu in #269
- Simplifying the structure of val_set, and using a consistent strategy when lazy-loading val_set by @WenjieDu in #272
- Renaming X_intact into X_ori, and adding matplotlib as a dependency by @WenjieDu in #274
- Simplifying val_set, renaming X_intact, and adding unit tests for the visual package by @WenjieDu in #275
- Update GP-VAE by @WenjieDu in #277
- Updating GP-VAE, adding load_dict_from_h5, etc. by @WenjieDu in #278
- Adding _check_inputs() for error calculation functions by @WenjieDu in #279
- Fixing CSDI, adding placeholder for epoch num in logging by @WenjieDu in #280
- Fixing the infinite loop in LOCF by @WenjieDu in #282
- Update docs by @WenjieDu in #285
- Updating docs, fixing CSDI&LOCF&MRNN, and adding the strategy to save all models by @WenjieDu in #284
- Making PyPOTS able to save all models during training, checking if d_model=n_heads*d_k for SAITS and Transformer by @WenjieDu in #287
- Fixing MRNN by @WenjieDu in #286
- Fix issues in MRNN and update the hyperparameter tuning functionality by @WenjieDu in #288
- Fixing the type error of random_seed in pypots.cli.tuning and updating the docs by @WenjieDu in #289
- Updating load_dict_from_h5() by @WenjieDu in #290
Full Changelog: v0.2.1...v0.3
v0.2.1
Here are the updates:
- for values still missing after LOCF imputation (i.e. missing since the first step, so LOCF cannot fill them), we added more options to handle them; please refer to the argument `first_step_imputation` in the LOCF docs. The default option was "zero" in previous versions, but we've changed it to "backward", which is more reasonable;
- enabled SAITS to return latent attention weights from its blocks in predict() for advanced analysis, e.g. in #178;
- renamed the model saving and loading functions save_model() and load_model() into save() and load();
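The difference between the "zero" and "backward" options only matters for leading NaNs, which have no previous value to carry forward. A hedged one-dimensional numpy sketch of that behavior (not PyPOTS' code; only the argument name comes from the release notes):

```python
import numpy as np

def locf(series, first_step_imputation="backward"):
    """LOCF on a 1-D series: carry the last observed value forward, then fill
    any leading NaNs (no previous value to carry) per the chosen strategy."""
    out = series.copy()
    for t in range(1, len(out)):                   # standard forward fill
        if np.isnan(out[t]):
            out[t] = out[t - 1]
    if np.isnan(out[0]):                           # leading NaNs survived the fill
        if first_step_imputation == "zero":
            out[np.isnan(out)] = 0.0
        elif first_step_imputation == "backward":  # borrow the first observed value
            observed = out[~np.isnan(out)]
            if observed.size:
                out[np.isnan(out)] = observed[0]
    return out
```

"backward" is the more reasonable default because a nearby real observation is usually a better guess than an arbitrary zero.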
What's Changed
- Check if X_intact contains missing data for imputation models, check and list mismatched hyperparameters in the tuning mode by @WenjieDu in #234
- Make SAITS return attention weights in predict() by @WenjieDu in #239
- Adding other options for the first step imputation in LOCF by @WenjieDu in #240
- Fixing the problem about staling issues by @WenjieDu in #244
- Testing with Python 3.11 and support it by @WenjieDu in #246
- Rename save_model() and load_model() into save() and load() by @WenjieDu in #247
- Refactoring save_model() and load_model(), and updating docs by @WenjieDu in #249
Full Changelog: v0.2...v0.2.1
PyPOTS v0.2 🤗
In this new version, PyPOTS v0.2, we
- enabled hyperparameter tuning for all NN algorithms;
- fixed a bug in the updating strategy of the term `F` in CRLI;
- replaced the GPL-v3 license with BSD-3-Clause, which has fewer constraints;
- announced PyPOTS ecosystem;
What's Changed
- Adding tsdb and pygrinder into the docs by @WenjieDu in #212
- Fixing disappeared TSDB and PyGrinder by @WenjieDu in #213
- Replacing PyCorruptor with PyGrinder by @WenjieDu in #215
- Merge the docs of PyPOTS ecosystem, and replace pycorruptor with pygrinder in pypots by @WenjieDu in #216
- Clone TSDB and PyGrinder repos to use their latest code and docs by @WenjieDu in #217
- Install from source code to use the latest docs of TSDB and PyGrinder by @WenjieDu in #219
- Install from TSDB and PyGrinder repos to use their latest docs by @WenjieDu in #220
- Enable hyperparameter tuning with NNI framework by @WenjieDu in #221
- Fixing dependency error in testing_CI workflow by @WenjieDu in #223
- Enable hyperparameter tuning with NNI, fix dependency error in testing_CI, and update docs by @WenjieDu in #224
- Fix the bug in the updating strategy of term `F` in CRLI by @WenjieDu in #226
- Apply BSD-3 license, and update docs by @WenjieDu in #228
- Fix a bug in CRLI, switch to BSD-3 license by @WenjieDu in #229
- Update version limitations on dependencies, and install dependencies in PyPI publishing workflow by @WenjieDu in #230
Full Changelog: v0.1.4...v0.2
v0.1.4
In this new version, we made the following changes:
- added the imputation model CSDI;
- added the unified method `predict()` for all models to run inference on a given test set;
- enabled clustering algorithms to select the best model on the validation set;
- fixed the bug that GP-VAE failed to run on CUDA devices;
- enabled SAITS to use a customized loss function specified by users;
What's Changed
- Add the method predict() for all models by @WenjieDu in #199
- Refactor algorithms' module structure, enable customized loss function in SAITS, enable GP-VAE to run on CUDA, etc. by @WenjieDu in #201
- Merge dev into main by @WenjieDu in #202
- Make clustering algorithms to select the best model according to the loss on a given validation set by @WenjieDu in #204
- Fixing failed CI testing due to dependency installation error by @WenjieDu in #206
- Adding the model CSDI by @WenjieDu in #208
- Refactoring, and updating the docs by @WenjieDu in #209
- Adding CSDI, updating the docs by @WenjieDu in #210
Full Changelog: v0.1.3...v0.1.4