
Way to run operations on CPU, thus RAM, instead of GPU/VRAM? #44

Open
mxchinegod opened this issue Jul 18, 2016 · 5 comments

Comments

@mxchinegod

I'm limited by my Amazon instance. I could buy a server with a K80 and 12 GB of VRAM, but I'd also be willing to pay more for slower processing in the cloud if I can get the CPU path working.

I'm going to look at the code after this but in the event that the answer is obvious to the community, I wanted to ask before I go digging.

Thanks so much! Hope everyone's results have been fun.

@FabienLavocat

Just don't enable cuDNN and the script will run on the CPU. But it will take you hours, if not days, to produce a picture.
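A minimal sketch of how the CPU fallback could be selected, assuming this project sits on top of CUDArray (deeppy's array backend) and that CUDArray honors the `CUDARRAY_BACKEND` environment variable; the variable must be set before the library is imported:

```python
import os

# Hypothetical sketch: CUDArray can fall back to a pure NumPy
# implementation. Setting this BEFORE importing cudarray/deeppy
# selects the CPU path instead of the GPU one.
os.environ['CUDARRAY_BACKEND'] = 'numpy'  # 'cuda' would use the GPU

# import deeppy  # imports after this point pick up the NumPy backend
print(os.environ['CUDARRAY_BACKEND'])
```

Everything in the array API then runs through NumPy in host RAM, which is exactly why the run becomes so slow.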

@andersbll
Owner

The CPU implementation of the convolution operation is too slow for any practical use. You would have to implement something faster first.
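To put a rough number on "too slow": a single 3×3 convolution layer over a roughly 1 MP feature map already costs tens of gigaFLOPs, and an optimization run performs hundreds of such forward/backward passes. The layer sizes below are illustrative assumptions, not values taken from the repository:

```python
def conv_flops(h, w, c_in, c_out, k=3):
    """FLOPs for one k x k convolution layer: each output element
    needs k*k*c_in multiply-adds (counted as 2 FLOPs each)."""
    return 2 * h * w * c_out * (k * k * c_in)

# ~1 MP feature map with 64 input and 64 output channels (assumed sizes)
flops = conv_flops(1000, 1000, 64, 64)
print(f"{flops:.2e}")  # ~7.4e10 FLOPs for a single layer pass
```

A naive CPU loop sustains a tiny fraction of a GPU's throughput on this workload, so the gap compounds across layers and iterations.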

@mxchinegod
Author

I believe you that it's slow and I appreciate both very quick responses!

Since our needs may be slightly different, by what magnitude is it impractical? Are we talking days or weeks for 1 MP+ resolution?

Thanks!

@johndpope

@mxchinegod
Author

I've used it, but the results, even with gradient smoothing, are not ideal at all. I've done an enormous amount of tweaking and tried to isolate what I like about it; it just isn't as good. I may come back to it anyway, since it's more fleshed out.
