Solved a problem similar to "Exception: Reached maximum number of idle transformation calls" #130
base: master
@@ -31,10 +31,12 @@ class PyTorchEstimator(Estimator):
     @validated()
     def __init__(
         self, trainer: Trainer, lead_time: int = 0, dtype: np.dtype = np.float32
+        ,**kwargs,
     ) -> None:
         super().__init__(lead_time=lead_time)
         self.trainer = trainer
         self.dtype = dtype
+        self.max_idle_transforms = kwargs["max_idle_transforms"] if "max_idle_transforms" in kwargs else None

     def create_transformation(self) -> Transformation:
         """

Review discussion:

Reviewer: This is by no means wrong, but it seems to me that newer versions of gluonts handle this using the …

Author: Ok, yes: if you peek into the 0.7.0 branch, you can also see I have merged the implementation of …

Reviewer: Thanks a lot for pointing me to the 0.7.0 branch, really good to know you're actively working on this. Will have a more thorough look. I realize you're now using the pytorch-lightning trainer (I was entertaining doing that).
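The effect of the constructor change can be sketched in isolation: an optional `max_idle_transforms` is pulled out of `**kwargs` and defaults to `None` so existing callers are unaffected. The class below is a minimal stand-in, not the real `PyTorchEstimator` (which also takes a `Trainer` and validates its arguments):

```python
class PyTorchEstimatorSketch:
    # Minimal stand-in for PyTorchEstimator.__init__ as modified by this PR:
    # an optional max_idle_transforms is taken from **kwargs, defaulting to None.
    def __init__(self, lead_time: int = 0, **kwargs) -> None:
        self.lead_time = lead_time
        self.max_idle_transforms = (
            kwargs["max_idle_transforms"] if "max_idle_transforms" in kwargs else None
        )

# Callers that do not pass the keyword keep the old behaviour (None),
# while users hitting the idle-transform exception can raise the limit.
default = PyTorchEstimatorSketch()
tuned = PyTorchEstimatorSketch(max_idle_transforms=500)
print(default.max_idle_transforms)  # → None
print(tuned.max_idle_transforms)    # → 500
```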
@@ -104,7 +106,7 @@ def train_model(

         input_names = get_module_forward_input_names(trained_net)

-        with env._let(max_idle_transforms=maybe_len(training_data) or 0):
+        with env._let(max_idle_transforms=self.max_idle_transforms or maybe_len(training_data) or 0):
             training_instance_splitter = self.create_instance_splitter("training")
             training_iter_dataset = TransformedIterableDataset(
                 dataset=training_data,
@@ -128,7 +130,7 @@ def train_model(

         validation_data_loader = None
         if validation_data is not None:
-            with env._let(max_idle_transforms=maybe_len(validation_data) or 0):
+            with env._let(max_idle_transforms=self.max_idle_transforms or maybe_len(validation_data) or 0):
                 validation_instance_splitter = self.create_instance_splitter("validation")
                 validation_iter_dataset = TransformedIterableDataset(
                     dataset=validation_data,
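Both hunks use the same fallback chain: a user-supplied limit takes precedence, otherwise the dataset length is used when it is known and non-zero, otherwise 0. The precedence can be checked in isolation; `maybe_len` below is a stand-in for the gluonts helper, which returns `len(x)` when available and `None` otherwise:

```python
def maybe_len(obj):
    # Stand-in for gluonts' maybe_len: len() if the object is sized, else None.
    try:
        return len(obj)
    except TypeError:
        return None

def effective_limit(user_limit, training_data):
    # The expression used in both hunks of the diff.
    return user_limit or maybe_len(training_data) or 0

print(effective_limit(500, [1, 2, 3]))   # → 500 (user-supplied limit wins)
print(effective_limit(None, [1, 2, 3]))  # → 3 (falls back to dataset length)
print(effective_limit(None, iter([])))   # → 0 (unsized iterable, no limit known)
```

Because the chain uses `or`, passing `max_idle_transforms=0` would also fall through to the dataset length; only a positive value actually overrides the default.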
Reviewer: The notebook seems to run fine.