Possible use of torch.multiprocessing #139
Comments
@djsaunde any progress on this? I'll start looking into it, because I'm working with pretty small networks and GPUs won't give you much of an advantage there. This seems to be the way to speed up in that case.

@Huizerd nope, just an idea we had some time ago. I'm not sure that it will speed things up, but it might be worth a shot. Let me know if you need any help.

Check out this branch for a start on the multiprocessing work (I'm pretty sure it fails as-is). It'll need to be fast-forwarded to the current state of the master branch.
Consider the simulation loop in the `Network.run()` function. Where I've marked a `->`, there might be an opportunity to use `torch.multiprocessing`. Since we do updates at time `t` based on network state at time `t - 1`, all `Nodes` / `Connection` updates can be performed by separate processes (threads?) at once. Letting `k` = no. of layers and `m` = no. of connections, given enough CPU / GPU resources, the loops marked with `->` would have time complexity `O(1)` instead of `O(k)` and `O(m)` in the number of layers and connections, respectively.

I think it'd be good to keep two (?) `multiprocessing.Pool` objects around, one for `Nodes` objects and another for `Connection` objects, and rewrite the per-layer and per-connection update statements as pool calls. Here, `nodes_pool` would be defined as an attribute in the `Network` constructor. This last bit probably won't work straight away; we'd need to figure out the right syntax (if it exists). The same idea can also be applied in the `Network`'s `reset()` and `get_inputs()` functions.