Reproducibility issue in TimeGrad with ver-0.7.0 #152
I applied DEISMultistepScheduler following #145.

Here are my parameters. The loss gets to 0.277, but the paper does not use loss for its evaluation; it uses CRPS-Sum.

```python
scheduler = DEISMultistepScheduler(
    num_train_timesteps=150,
    beta_end=0.1,
)

estimator = TimeGradEstimator(
    input_size=int(dataset.metadata.feat_static_cat[0].cardinality),
    hidden_size=64,
    num_layers=2,
    dropout_rate=0.1,
    lags_seq=[1],
    scheduler=scheduler,
    num_inference_steps=150,
    prediction_length=dataset.metadata.prediction_length,
    context_length=dataset.metadata.prediction_length,
    freq=dataset.metadata.freq,
    scaling="mean",
    trainer_kwargs=dict(max_epochs=200, accelerator="gpu", devices="1"),
)
```
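Since the paper reports CRPS-Sum rather than training loss, it can help to compute that metric directly (in gluonts this is typically done with `MultivariateEvaluator` and a `sum` aggregation). As a reference point, here is a minimal pure-Python sketch of the sample-based CRPS-Sum estimator; the function names and the normalization by the summed absolute target are my assumptions about the usual convention, so double-check against the evaluator you actually use.

```python
# Empirical CRPS for one scalar observation y from forecast samples X_1..X_n:
#   mean_i |X_i - y| - 0.5 * mean_{i,j} |X_i - X_j|
# CRPS-Sum first sums the target over all series dimensions, then computes
# CRPS on the aggregated series and normalizes by sum(|y_t|).

def crps_empirical(samples, y):
    """Sample-based CRPS estimate for a single scalar observation y."""
    n = len(samples)
    term1 = sum(abs(x - y) for x in samples) / n
    term2 = sum(abs(a - b) for a in samples for b in samples) / (2 * n * n)
    return term1 - term2

def crps_sum(sample_paths, target):
    """sample_paths: [n_samples][time][dim], target: [time][dim]."""
    # Aggregate over the variate dimension at every time step.
    agg_target = [sum(row) for row in target]
    agg_samples = [[sum(row) for row in path] for path in sample_paths]
    num = sum(
        crps_empirical([path[t] for path in agg_samples], agg_target[t])
        for t in range(len(agg_target))
    )
    denom = sum(abs(v) for v in agg_target)
    return num / denom

# Sanity check: a perfect forecast (all samples equal the target) scores 0.
paths = [[[1.0, 2.0], [3.0, 4.0]]] * 10  # 10 identical sample paths
tgt = [[1.0, 2.0], [3.0, 4.0]]
print(crps_sum(paths, tgt))  # 0.0
```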
Your result looks completely fine. Thank you for sharing. I will try to reproduce it.
Hi, I would like to ask if you know how to set the right hyperparameters of TimeGrad on the solar and Wikipedia datasets to get results consistent with the paper. |
@ProRedCat If we use DEISMultistepScheduler, does it mean that we are using a slightly more advanced version of ScoreGrad: Multivariate Probabilistic Time Series Forecasting with Continuous Energy-based Generative Models?
Sorry folks, I am traveling this week... I will try to have a look next week.
I installed pytorch-ts via git clone and checked out the ver-0.7.0 branch. I debugged all the issues in running the TimeGrad model on the electricity dataset, resolving the differences from using diffusers instead of the self-implemented diffusion models.

However, the training loss (and validation loss) never gets near the ~0.07 reported in the timegrad-electricity example; I get 0.2 at minimum, even with the known hyperparameter settings. Even after tuning the hyperparameters extensively (increasing the number of diffusion steps, tuning the learning rate, and so on), I get almost the same result.

I assume there are currently some issues in the adaptation to the diffusers library. Can you update the timegrad-electricity example for ver-0.7.0?
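One thing worth double-checking when porting to diffusers is the noise schedule itself: the original TimeGrad implementation uses a linear beta schedule, while diffusers schedulers default to a much smaller beta_end (0.02 for DDPMScheduler), so leaving it unset gives a very different diffusion process. Below is a pure-Python sketch of a linear schedule for comparison; the specific values (beta_start=1e-4, beta_end=0.1, 100 steps) are my assumption about the original defaults, so verify them against the code you are reproducing.

```python
# Linear beta schedule, as assumed for the original TimeGrad implementation.
# Compare these values against the scheduler you configure in diffusers
# (e.g. the beta_end=0.1 used in the DEISMultistepScheduler example above).

def linear_betas(beta_start=1e-4, beta_end=0.1, num_steps=100):
    """Evenly spaced betas from beta_start to beta_end inclusive."""
    if num_steps == 1:
        return [beta_start]
    step = (beta_end - beta_start) / (num_steps - 1)
    return [beta_start + i * step for i in range(num_steps)]

def alpha_bars(betas):
    """Cumulative product of (1 - beta_t): the signal fraction kept at step t."""
    out, prod = [], 1.0
    for b in betas:
        prod *= 1.0 - b
        out.append(prod)
    return out

betas = linear_betas()
abar = alpha_bars(betas)
# With these defaults the betas run from 1e-4 up to 0.1, and the remaining
# signal fraction abar[-1] drops below 1% by the final step, i.e. the last
# diffusion step is close to pure noise. A beta_end of 0.02 would keep far
# more signal and change what the network has to learn.
```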