running train_op took too long ?? #24
Comments
@alexlee-gk, could you please help? I am facing the same issue.
I am also facing the same issue. It seems the message is only a print statement in train.py, at line 267.
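For anyone wanting to check whether their steps are genuinely slow rather than just triggering the warning, a minimal timing wrapper like the one below can be put around a single training step. This is only a sketch: `step_fn` is a hypothetical stand-in for whatever runs one step (e.g. a wrapped `sess.run` on the train op), and the 30-second threshold is an assumption, not the value used in train.py.

```python
import time

def timed_run(step_fn, warn_threshold_s=30.0):
    """Run one training step and report how long it took.

    step_fn: zero-argument callable that executes a single step
    (a hypothetical wrapper around sess.run on the train op).
    warn_threshold_s: assumed threshold; not taken from train.py.
    """
    start = time.time()
    result = step_fn()
    elapsed = time.time() - start
    if elapsed > warn_threshold_s:
        # mirrors the kind of diagnostic print train.py emits
        print("running train_op took too long (%0.1fs)" % elapsed)
    return result, elapsed
```

Logging `elapsed` over many steps makes it easy to see whether slowness is constant (e.g. input-pipeline bound) or spiky (e.g. coinciding with summary writes).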
Thanks for sharing this great work!
I ran into this issue when training ours_savp on the KTH dataset; training appears to proceed properly, but it is very slow.
My configuration:
tensorflow: 1.10.0
cuda: 9.0
cudnn: 7.3.0.29
I'm running the KTH dataset with the ours_savp model. When I use the default hparams I get an out-of-memory error, so I changed batch_size=8.
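If overriding hparams on the command line, values are typically passed as a comma-separated `name=value` string (this is the convention of TensorFlow's `HParams.parse`; I'm assuming the SAVP train script follows it). A minimal sketch of how such a string decomposes into typed overrides, with `parse_hparams_overrides` being a hypothetical helper name:

```python
def parse_hparams_overrides(s):
    """Parse a comma-separated override string, e.g. 'batch_size=8,lr=0.001'.

    Returns a dict mapping each name to an int, float, or string value,
    mimicking the HParams.parse convention (sketch, not the library code).
    """
    overrides = {}
    for item in s.split(","):
        if not item.strip():
            continue
        key, value = item.split("=", 1)
        try:
            parsed = int(value)
        except ValueError:
            try:
                parsed = float(value)
            except ValueError:
                parsed = value  # leave non-numeric values as strings
        overrides[key.strip()] = parsed
    return overrides
```

Halving batch_size roughly halves activation memory, which is why dropping to 8 avoids the OOM on an ~11 GB K40c.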
My GPU seems to be working properly:
+-------------------------------+----------------------+----------------------+
| 1 Tesla K40c Off | 00000000:02:00.0 Off | 0 |
| 37% 73C P0 124W / 235W | 10963MiB / 11441MiB | 76% Default |
+-------------------------------+----------------------+----------------------+
Tensorboard refreshes when summary_freq is reached.
I'd appreciate any suggestions.
Regards,