Simultaneous model building and running #280
We can do this using a GPU-based FIFO queue. There was an issue leading to GPU starvation in TensorFlow v1, which they fixed using a QueueRunner and a coordinator (due to their graph architecture). I believe a FIFO queue will achieve the goal we are trying to reach.
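A minimal sketch of that producer/consumer idea using Python's standard `queue` module; `build_model` and `run_on_gpu` are hypothetical stand-ins for gprMax's CPU-side build and GPU-side solve steps, not actual gprMax API:

```python
import queue
import threading

# Hypothetical stand-ins for the CPU-side build and GPU-side solve steps.
def build_model(spec):
    ...

def run_on_gpu(model):
    ...

def pipeline(specs, maxsize=2):
    """Build the next models on the CPU while the GPU solves earlier ones."""
    q = queue.Queue(maxsize=maxsize)  # bounded FIFO keeps memory use in check
    SENTINEL = object()

    def producer():
        for spec in specs:
            q.put(build_model(spec))  # blocks if the GPU falls behind
        q.put(SENTINEL)  # signal that no more models are coming

    threading.Thread(target=producer, daemon=True).start()

    results = []
    while True:
        model = q.get()
        if model is SENTINEL:
            break
        results.append(run_on_gpu(model))
    return results
```

The bounded queue size means the CPU producer stalls rather than building arbitrarily far ahead, which keeps the GPU fed without exhausting host memory.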
In some cases it would be even more beneficial to completely decouple these two steps. On our system, we can spread across many more cores/nodes on the CPU side through different HPC queues and then only use the (expensive) GPU nodes for the last step of the compute.
I should have added some details. In our system, for the CPU queues, I can span 64 nodes, each with 128 cores (cost factor = 0.0023/core). On the GPU side, I am limited to 8 A100s (cost factor = 155/GPU). In my ideal case, we would rip through the CPU tasks on the "cheap" nodes and only hit the GPU nodes for the GPU task.
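A rough sketch of such a decoupled, two-stage workflow under stated assumptions: `build_model`, `run_on_gpu`, the pickle serialization, and the `built_models` directory are all hypothetical placeholders, not gprMax API. Each stage could be submitted as a separate HPC job (CPU queue for stage 1, GPU queue for stage 2), communicating through shared storage:

```python
import pickle
from pathlib import Path

# Hypothetical stand-ins for the CPU-side build and GPU-side solve steps.
def build_model(spec):
    ...

def run_on_gpu(model):
    ...

def build_stage(specs, outdir="built_models"):
    """Stage 1 (CPU nodes): build models and serialize them to shared storage."""
    Path(outdir).mkdir(exist_ok=True)
    for i, spec in enumerate(specs):
        with open(Path(outdir) / f"model_{i:04d}.pkl", "wb") as f:
            pickle.dump(build_model(spec), f)

def solve_stage(indir="built_models"):
    """Stage 2 (GPU nodes, a separate HPC job): load each built model and solve."""
    results = []
    for path in sorted(Path(indir).glob("model_*.pkl")):
        with open(path, "rb") as f:
            results.append(run_on_gpu(pickle.load(f)))
    return results
```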
```python
import gprMax

models = [...]

results = gprMax.runModelsAsync(models)
for result in results:
    ...
```
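`runModelsAsync` is not part of gprMax's current API; one rough sketch of how such a function might look, using only the standard library (`run_on_gpu` and `max_workers` are assumed names):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for the GPU solve step.
def run_on_gpu(model):
    ...

def run_models_async(models, max_workers=1):
    """Submit each model's solve to a worker pool and yield results
    in submission order as they complete."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(run_on_gpu, m) for m in models]
        for future in futures:
            yield future.result()
```

Yielding results lazily would also let post-processing of one model overlap with the solve of the next.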
Hello everyone, I wanted to contribute to the gprMax project. Can someone assist me further with it?
In the case where you have multiple models to simulate and are using the GPU-based solver, the building of the next model (on the CPU) could happen simultaneously with the running of the current model (on the GPU). This could improve overall simulation times.