Solved a problem similar to "Exception: Reached maximum number of idle transformation calls" (#130)

Open: wants to merge 3 commits into base: master
527 changes: 29 additions & 498 deletions examples/Time-Grad-Electricity.ipynb

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion pts/model/deepar/deepar_estimator.py
Original file line number Diff line number Diff line change
Expand Up @@ -135,7 +135,7 @@ def create_transformation(self) -> Transformation:
AsNumpyArray(
field=FieldName.FEAT_STATIC_CAT,
expected_ndim=1,
dtype=np.long,
dtype=np.int_,
Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Suggested change
dtype=np.int_,
dtype=int,

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

@stathius ok let me check... do we need to change the notebook?

The notebook seems to run fine.

),
AsNumpyArray(
field=FieldName.FEAT_STATIC_REAL,
Expand Down
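The dtype swap above is the core of this hunk: np.long was an old alias for Python's built-in int, deprecated in NumPy 1.20 and later removed, so recent NumPy versions raise on it. A minimal sketch of why np.int_ (or the reviewer's suggested plain int) is an equivalent replacement — my own illustration, not part of the PR:

```python
import numpy as np

# np.long was deprecated in NumPy 1.20 and subsequently removed; both np.int_
# and the builtin int resolve to the platform's default integer dtype, so
# either spelling in the diff/suggestion produces identical arrays.
a = np.asarray([1, 2, 3], dtype=np.int_)
b = np.asarray([1, 2, 3], dtype=int)

assert a.dtype == b.dtype
assert (a == b).all()
```

This also matches the follow-up comment that the notebook runs fine either way: the two spellings are interchangeable as dtype arguments.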
6 changes: 4 additions & 2 deletions pts/model/estimator.py

@@ -31,10 +31,12 @@ class PyTorchEstimator(Estimator):
    @validated()
    def __init__(
        self, trainer: Trainer, lead_time: int = 0, dtype: np.dtype = np.float32,
+       **kwargs,
    ) -> None:
        super().__init__(lead_time=lead_time)
        self.trainer = trainer
        self.dtype = dtype
+       self.max_idle_transforms = kwargs["max_idle_transforms"] if "max_idle_transforms" in kwargs else None

@stathius (Mar 30, 2023): This is by no means wrong, but it seems to me that newer versions of gluonts handle this using the env variable. If so, it might be better to stick with that for better compatibility. @kashif

Collaborator: ok, yes. If you can peek into the 0.7.0 branch, you can also see I have merged the implementations of deepAR and deepVAR, as they differ only on the output side, and the vanilla transformer also works for both univariate and multivariate...

Comment: Thanks a lot for pointing me to the 0.7.0 branch; really good to know you're actively working on this. Will have a more thorough look. I realize you're now using the pytorch-lightning trainer (I was entertaining doing that).
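@stathius's alternative — configuring the limit through the environment rather than a constructor kwarg — can be sketched as below. The variable name GLUONTS_MAX_IDLE_TRANSFORMS and the fallback behavior are assumptions for illustration, not confirmed against the gluonts source:

```python
import os

# Hypothetical env-variable override: prefer the environment when set,
# otherwise use a caller-supplied default. The variable name here is an
# assumption, not a confirmed gluonts setting.
def max_idle_from_env(default):
    raw = os.environ.get("GLUONTS_MAX_IDLE_TRANSFORMS")
    return int(raw) if raw is not None else default

os.environ["GLUONTS_MAX_IDLE_TRANSFORMS"] = "500"
assert max_idle_from_env(100) == 500

del os.environ["GLUONTS_MAX_IDLE_TRANSFORMS"]
assert max_idle_from_env(100) == 100
```

The compatibility argument in the comment is that an env-based knob would keep this fork's behavior aligned with upstream gluonts without changing the estimator's signature.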


@@ -104,7 +106,7 @@ def train_model(

        input_names = get_module_forward_input_names(trained_net)

-       with env._let(max_idle_transforms=maybe_len(training_data) or 0):
+       with env._let(max_idle_transforms=self.max_idle_transforms or maybe_len(training_data) or 0):
            training_instance_splitter = self.create_instance_splitter("training")
            training_iter_dataset = TransformedIterableDataset(
                dataset=training_data,

@@ -128,7 +130,7 @@ def train_model(

        validation_data_loader = None
        if validation_data is not None:
-           with env._let(max_idle_transforms=maybe_len(validation_data) or 0):
+           with env._let(max_idle_transforms=self.max_idle_transforms or maybe_len(validation_data) or 0):
                validation_instance_splitter = self.create_instance_splitter("validation")
                validation_iter_dataset = TransformedIterableDataset(
                    dataset=validation_data,
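Both env._let changes apply the same precedence: a user-supplied max_idle_transforms wins, then the dataset length, then 0. A standalone sketch of that or-chain (function and parameter names are mine, not the PR's):

```python
# Mirror of the PR's fallback chain:
#   self.max_idle_transforms or maybe_len(training_data) or 0
# Python's `or` returns the first truthy operand, so a None (unset) override
# falls through to the dataset length, and an unknown length falls through to 0.
def resolve_max_idle(user_override, dataset_len):
    return user_override or dataset_len or 0

assert resolve_max_idle(200, 7) == 200     # explicit kwarg wins
assert resolve_max_idle(None, 7) == 7      # fall back to dataset length
assert resolve_max_idle(None, None) == 0   # final default
```

One subtlety of this chain: an explicit override of 0 is falsy and therefore ignored, which is exactly how the PR's expression behaves as well.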
1 change: 0 additions & 1 deletion pts/model/time_grad/time_grad_estimator.py

@@ -251,7 +251,6 @@ def create_predictor(
            input_names=input_names,
            prediction_net=prediction_network,
            batch_size=self.trainer.batch_size,
-           freq=self.freq,
            prediction_length=self.prediction_length,
            device=device,
        )