Send/Sync for Device #888
I am realizing that, for the example I described, you wouldn't actually get any parallelism from the addition of threads, since the copy (assuming it's synchronous) and the inference kernels will be interleaved on the same CUDA stream. Nevertheless, it would be great if something like the following pseudocode were possible.
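(The original snippet did not survive rendering. What follows is a hypothetical sketch of the kind of pipeline being asked for, not the commenter's actual code: it assumes the device and its tensors were `Send`, and the names `batches`, `model`, and `tensor_from_vec` are illustrative only. It is pseudocode and will not compile against dfdx today.)

```rust
// Pseudocode — assumes Cuda and Tensor were Send, which they currently are not.
let dev: Cuda = Default::default();
let (tx, rx) = std::sync::mpsc::channel();

// Thread A: prepare inputs on the device.
let dev_a = dev.clone();
std::thread::spawn(move || {
    for batch in batches {
        let t = dev_a.tensor_from_vec(batch, shape); // host -> device copy
        tx.send(t).unwrap(); // would require Tensor: Send
    }
});

// Thread B: run inference on tensors as they arrive.
for t in rx {
    let out = model.forward(t);
}
```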
Currently it is not possible to move devices or tensors across threads.
dfdx/dfdx-core/src/tensor/cuda/device.rs, lines 22 to 33 at 4476b5e
Despite everything in the device being wrapped in an Arc, the Cuda device is not actually Send or Sync, almost certainly because some raw *mut pointers are nested in there for FFI. As a result it is rather burdensome to implement various functionality. As an example: if you want to implement a pipeline where tensors are prepared in thread A while inference is done in thread B, tensors can't be moved across threads without copying to the host and back (serializing to a Vec), and device methods can't be called from both threads.
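To make the current limitation concrete, here is a minimal std-only sketch of the host round-trip workaround just described: thread A hands batches to thread B through a channel of plain host `Vec<f32>`s, which are `Send`. The dfdx device and tensor types are deliberately left out, since they are exactly the things that cannot cross the thread boundary; in a real pipeline each side would do its own host/device copies.

```rust
use std::sync::mpsc::{self, Receiver};
use std::thread;

/// Spawns a producer thread that "prepares" `n` batches as plain host
/// buffers. This is the stand-in for copying a device tensor back into
/// a Vec before sending, since the tensor itself is not Send.
fn spawn_producer(n: usize, len: usize) -> Receiver<Vec<f32>> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        for i in 0..n {
            // Vec<f32> is Send, so handing it across threads is fine.
            tx.send(vec![i as f32; len]).unwrap();
        }
        // tx is dropped here, which terminates the consumer's iterator.
    });
    rx
}

/// Consumer side: in the real pipeline each batch would be re-uploaded
/// to thread B's own device before inference; here we just count batches.
fn consume(rx: Receiver<Vec<f32>>) -> usize {
    rx.into_iter().filter(|batch| !batch.is_empty()).count()
}
```

The cost of this pattern is one device-to-host and one host-to-device copy per batch, which is the overhead a `Send` device would avoid.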
I am not familiar enough with the underlying implementation to know whether the device can implement Sync, but it would be great if, at a minimum, the device could be sent across threads. Right now I could certainly just create a device per thread, but I am not sure how much overhead that entails, or what the implications are of not sharing the caches across instances.