Computation graph error and one question #87
```
../BicycleGAN/models/bicycle_gan_model.py, line 188, in backward_G_alone
  self.loss_z_L1.backward()
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [258, 8]], which is output 0 of TBackward, is at version 2; expected version 1 instead.
```
1: When I run the script train_edges2shoes.sh, I encounter the error above; it seems that something is wrong with the computation graph you defined. Note: I did not make any changes to the scripts in the models folder. (A standalone sketch reproducing this class of error follows the quoted code below.)
2: When self.opt.conditional_D is set to True, why did you choose to build fake_data_random from (self.real_A_encoded, self.fake_B_random) rather than (self.real_A_random, self.fake_B_random)? (See the pairing sketch after the quoted code below.)
```python
# generate fake_B_random
self.fake_B_random = self.netG(self.real_A_encoded, self.z_random)
```
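For context on error 1: this RuntimeError is PyTorch autograd's version-counter check firing because a tensor that was saved for the backward pass was later modified in place. Below is a minimal standalone sketch of that error class and a common out-of-place fix pattern; it is a toy example, not the BicycleGAN code and not the repo's actual fix.

```python
import torch

# Trigger the error: modify a tensor in place after autograd saved it.
x = torch.randn(4, 8, requires_grad=True)
y = x * 2
loss = (y * x).sum()   # the multiply saves y for the backward pass
y += 1                 # in-place update bumps y's version counter
# loss.backward() would now raise:
#   RuntimeError: one of the variables needed for gradient computation
#   has been modified by an inplace operation ...

# Common fix: make the update out-of-place so the saved tensor is untouched.
x = torch.randn(4, 8, requires_grad=True)
y = x * 2
loss = (y * x).sum()
y = y + 1              # out-of-place: allocates a new tensor
loss.backward()        # succeeds
```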
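On question 2, the quoted line shows that fake_B_random is generated from real_A_encoded, so pairing those two keeps the discriminator's condition consistent with the output that was conditioned on it. A minimal sketch of the conditional_D pairing follows; the batch size, channel count, and 64x64 resolution are made-up values for illustration, not the repo's defaults.

```python
import torch

# Hypothetical shapes: batch of 2, 3-channel 64x64 images.
real_A_encoded = torch.randn(2, 3, 64, 64)  # the input A that conditioned G
fake_B_random = torch.randn(2, 3, 64, 64)   # stands in for G(real_A_encoded, z_random)

# With conditional_D on, the discriminator sees the (condition, output) pair
# concatenated along the channel dimension. fake_B_random was produced from
# real_A_encoded, so that is the pair that belongs together; real_A_random
# would pair the output with an input it was never conditioned on.
fake_data_random = torch.cat([real_A_encoded, fake_B_random], dim=1)
print(fake_data_random.shape)  # torch.Size([2, 6, 64, 64])
```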
Comments

I haven't seen the error in 1 before and I am not sure what happened. `self.fake_B_random = self.netG(self.real_A_encoded, self.z_random)`: fake_B_random is also conditioned on real_A_encoded. The confusion might be caused by the naming. See #31 for more details.

I was able to reproduce your error 1 now. It did not happen with the previous PyTorch version. I fixed it with the latest commit.

Yes, you are correct. I updated the code with a new commit.