GLOW: Training for mnist with more levels does not work #7

Open
nikste opened this issue Apr 11, 2020 · 0 comments
nikste commented Apr 11, 2020

--n_levels 5 results in:

Traceback (most recent call last):
  File "/home/nsteenbergen/workspace-python/glow-pytorch/normalizing_flows/glow.py", line 744, in <module>
    data_dependent_init(model, args)
  File "/home/nsteenbergen/miniconda3/envs/torch/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 49, in decorate_no_grad
    return func(*args, **kwargs)
  File "/home/nsteenbergen/workspace-python/glow-pytorch/normalizing_flows/glow.py", line 469, in data_dependent_init
    model(next(iter(dataloader))[0].requires_grad_(True if args.checkpoint_grads else False).to(args.device))
  File "/home/nsteenbergen/miniconda3/envs/torch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/nsteenbergen/miniconda3/envs/torch/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/nsteenbergen/miniconda3/envs/torch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/nsteenbergen/workspace-python/glow-pytorch/normalizing_flows/glow.py", line 422, in forward
    x = self.squeeze(x)
  File "/home/nsteenbergen/miniconda3/envs/torch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/nsteenbergen/workspace-python/glow-pytorch/normalizing_flows/glow.py", line 252, in forward
    x = x.reshape(B, C, H//2, 2, W//2, 2)   # factor spatial dim
RuntimeError: shape '[256, 96, 0, 2, 0, 2]' is invalid for input of size 24576

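For what it's worth, my reading of the first error: the squeeze at glow.py:252 halves H and W every time it is applied, so with MNIST-sized inputs the spatial size reaches 1 after a few levels, and the next reshape asks for H//2 == 0. A minimal sketch of that arithmetic, using a hypothetical 1x32x32 input and the standard Glow squeeze rather than the repo's exact code:

```python
import torch

def squeeze(x):
    # Standard Glow squeeze: trade each 2x2 spatial block for 4x the channels.
    # The first reshape is the same line that fails at glow.py:252.
    B, C, H, W = x.shape
    x = x.reshape(B, C, H // 2, 2, W // 2, 2)        # H // 2 becomes 0 once H == 1
    x = x.permute(0, 1, 3, 5, 2, 4).reshape(B, 4 * C, H // 2, W // 2)
    return x

x = torch.randn(1, 1, 32, 32)   # hypothetical 32x32-padded MNIST image
for level in range(6):
    print(f"before squeeze {level}: {tuple(x.shape)}")
    x = squeeze(x)               # the 6th call raises the same kind of RuntimeError as above
```

So any --n_levels setting that implies more halvings than the input resolution allows will die at this reshape; a shape check before squeezing (or a cap on n_levels relative to the image size) would at least turn this into a readable error.
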
--n_levels 4 results in:

Traceback (most recent call last):
  File "/home/nsteenbergen/workspace-python/glow-pytorch/normalizing_flows/glow.py", line 745, in <module>
    train_and_evaluate(model, train_dataloader, test_dataloader, optimizer, writer, args)
  File "/home/nsteenbergen/workspace-python/glow-pytorch/normalizing_flows/glow.py", line 562, in train_and_evaluate
    train_epoch(model, train_dataloader, optimizer, writer, epoch, args)
  File "/home/nsteenbergen/workspace-python/glow-pytorch/normalizing_flows/glow.py", line 515, in train_epoch
    samples = generate(model, n_samples=4, z_stds=[0., 0.25, 0.7, 1.0])
  File "/home/nsteenbergen/miniconda3/envs/torch/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 49, in decorate_no_grad
    return func(*args, **kwargs)
  File "/home/nsteenbergen/workspace-python/glow-pytorch/normalizing_flows/glow.py", line 551, in generate
    sample, _ = model.inverse(batch_size=n_samples, z_std=z_std)
  File "/home/nsteenbergen/workspace-python/glow-pytorch/normalizing_flows/glow.py", line 437, in inverse
    z, sum_logdets = self.gaussianize.inverse(torch.zeros_like(zs[-1]), zs[-1])
  File "/home/nsteenbergen/workspace-python/glow-pytorch/normalizing_flows/glow.py", line 309, in inverse
    h = self.net(x1) * self.log_scale_factor.exp()
  File "/home/nsteenbergen/miniconda3/envs/torch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/nsteenbergen/miniconda3/envs/torch/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 345, in forward
    return self.conv2d_forward(input, self.weight)
  File "/home/nsteenbergen/miniconda3/envs/torch/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 341, in conv2d_forward
    return F.conv2d(input, weight, self.bias, self.stride,
RuntimeError: Expected 4-dimensional input for 4-dimensional weight 384 192 3 3, but got 2-dimensional input of size [4, 192] instead
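I haven't tracked down where zs[-1] loses its spatial dimensions during sampling, but the conv failure itself reproduces in isolation: the net called at glow.py:309 is a Conv2d with weight 384x192x3x3 (per the traceback), and it is being handed a flat [4, 192] tensor instead of a 4-D one. A small sketch, with the channel counts taken from the traceback and the padding being a guess on my part:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the net called at glow.py:309 (weight shape 384x192x3x3).
net = nn.Conv2d(192, 384, kernel_size=3, padding=1)

z_flat = torch.zeros(4, 192)      # shape of zs[-1] reported in the traceback
# net(z_flat)                     # raises "Expected 4-dimensional input ..." as above

z_4d = z_flat.view(4, 192, 1, 1)  # one way to make the call go through, assuming 1x1 spatial dims
print(net(z_4d).shape)            # torch.Size([4, 384, 1, 1])
```

So the sampling path appears to hand the Gaussianize net a flattened latent while training presumably passes a 4-D one; I haven't checked which shape zs[-1] is actually supposed to have at this point.
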
nikste changed the title from "Training for mnist with more levels does not work" to "GLOW: Training for mnist with more levels does not work" on Apr 11, 2020