
get bug of memory leaks #15

Open
zuotianyoumeng opened this issue Mar 5, 2018 · 6 comments

Comments

@zuotianyoumeng

Hi, I am getting memory leaks when I run this code. I tried adding sess.graph.finalize() to freeze the graph, but the leak still exists. Can anyone give some advice? Thanks.
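For reference, here is a minimal TF1-style sketch (not taken from this repo) of how sess.graph.finalize() can be used to catch ops that are accidentally created inside the training loop, which is one common source of this kind of host-memory growth:

```python
import tensorflow as tf

# Build the whole graph first.
x = tf.placeholder(tf.float32, [None, 3])
w = tf.Variable(tf.zeros([3, 1]))
y = tf.matmul(x, w)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.graph.finalize()  # graph is now read-only
    for step in range(100):
        # Creating a new op here (e.g. tf.reduce_mean(y)) would now raise a
        # RuntimeError instead of silently growing the graph each iteration.
        sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]})
```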

@zuotianyoumeng
Author

My cuDNN version is 6.0 and my CUDA version is 8.0.

@kil77

kil77 commented Jul 31, 2018

Hello! I ran into the same problem and it confused me for several days. I found it is because the function extract_batch reads too much data into memory. I decreased 512 to 128 and 4096 to 256, and the problem was solved. Hope that helps you.
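For illustration only, a sketch of the kind of change being described; the parameter names here are hypothetical and the actual extract_batch in this repo may look different. The point is just that smaller batch/buffer sizes keep fewer decoded examples resident in host memory at once:

```python
# Hypothetical sketch; the real extract_batch in this repo may differ.
BATCH_SIZE = 128       # reduced from 512
BUFFER_CAPACITY = 256  # reduced from 4096

def extract_batch(example_iter, batch_size=BATCH_SIZE, buffer_capacity=BUFFER_CAPACITY):
    """Read at most buffer_capacity examples into memory and return one batch."""
    buffered = []
    for example in example_iter:
        buffered.append(example)
        if len(buffered) >= buffer_capacity:
            break
    return buffered[:batch_size]
```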

@HJanefer

HJanefer commented Oct 8, 2018

Hello, I found that memory still blows up after I made that change. If it's convenient, could you give me some advice? Many thanks.

@kil77

kil77 commented Oct 8, 2018

You can debug step by step; that will lead you to the statement that is causing the problem.
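One simple way to do that kind of step-by-step check (not from this repo) is to log the process's resident memory around each suspect statement, so the one that keeps growing stands out:

```python
import resource

def rss_mb():
    # Peak resident set size of this process; Linux reports it in KB.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.0

before = rss_mb()
# ... run one suspect statement here, e.g. a single sess.run(...) or a single
# extract_batch(...) call ...
after = rss_mb()
print("peak memory grew by %.1f MB" % (after - before))
```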

@HJanefer

HJanefer commented Oct 8, 2018

OK, thank you very much! But I still haven't gotten the code to run. I made those two parameter values even smaller, and during debugging sess.run on the loss just hangs: no error is shown, no result is computed, and the call never returns. I don't know where the problem is.
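Not confirmed to be the cause here, but one frequent reason a TF1 sess.run call blocks forever with a queue-based input pipeline is that the queue runners were never started, so the dequeue op waits on an empty queue. A minimal sketch:

```python
import tensorflow as tf

# Toy queue-based input. Without start_queue_runners, the sess.run below
# would block forever waiting for the queue to be filled.
values = tf.train.input_producer([1.0, 2.0, 3.0], shuffle=False)
loss = values.dequeue() * 2.0

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    print(sess.run(loss))
    coord.request_stop()
    coord.join(threads)
```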

@fuyi02

fuyi02 commented Mar 11, 2019

This is the official version of DeepLab: https://github.com/tensorflow/models/tree/master/research/deeplab
You'd be better off switching to that code.
