
Simultaneous model building and running #280

Open
craig-warren opened this issue Feb 23, 2021 · 5 comments
Comments

@craig-warren
Member

In the case where you have multiple models to simulate and are using the GPU-based solver:

  • is it possible to run the build phase (which always happens on CPU) of the next model in the series whilst the previous model is still executing on GPU?

This could improve overall simulation times.

@mzmmoazam

mzmmoazam commented Apr 16, 2021

We can do this using a GPU-based FIFO queue. TensorFlow v1 had a similar issue that led to GPU starvation; it was fixed with a QueueRunner and a Coordinator (a consequence of its graph architecture). I believe a FIFO queue will achieve the goal we are aiming for.
Using this approach we could see a performance improvement of at least 35%.
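The FIFO idea above can be sketched with Python's standard library: a CPU thread builds models and feeds a bounded FIFO queue, while the main thread drains the queue and runs the GPU solves, so the build of model i+1 overlaps the solve of model i. Note that `build_model` and `run_on_gpu` are hypothetical placeholders here, not gprMax functions; the real entry points would come from gprMax's build and solver code.

```python
import queue
import threading

# Hypothetical placeholders for gprMax's CPU-side build phase and
# GPU-side solve phase; these are NOT part of the gprMax API.
def build_model(i):
    return f"model-{i}"          # stands in for the (CPU-bound) build

def run_on_gpu(model):
    return f"result for {model}" # stands in for the GPU solve

def run_pipelined(n_models, depth=2):
    """Build model i+1 on the CPU while model i runs on the GPU.

    The bounded FIFO queue (maxsize=depth) holds at most `depth` built
    models, so the CPU producer cannot run arbitrarily far ahead of the
    GPU consumer and exhaust host memory.
    """
    built = queue.Queue(maxsize=depth)
    results = []

    def producer():
        for i in range(n_models):
            built.put(build_model(i))  # blocks while the queue is full
        built.put(None)                # sentinel: no more models

    threading.Thread(target=producer, daemon=True).start()

    while True:
        model = built.get()            # blocks until a built model is ready
        if model is None:
            break
        results.append(run_on_gpu(model))
    return results
```

Because the build phase is CPU-bound and the solve phase waits on the GPU, a plain thread (rather than a process) is enough to get the overlap, and results come back in submission order.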

@rsettlage

In some cases, it would be even more beneficial to completely decouple these two steps. On our system, we can span more cores/nodes on the CPU side through different HPC queues, and then use the (expensive) GPU nodes only for the last step of the compute.

@rsettlage

I should have added some details. On our system, the CPU queues let me span 64 nodes, each with 128 cores (cost factor = 0.0023/core). On the GPU side, I am limited to 8 A100s (cost factor = 155/GPU). Ideally, we would rip through the CPU tasks on the "cheap" nodes and only hit the GPU nodes for the GPU task.
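The fully decoupled workflow described above could be split into two separate batch jobs: a CPU-queue stage that builds every model and serialises it to shared storage, and a GPU-queue stage that later loads the pre-built models and only does the solves. This is a minimal sketch under that assumption; `build_model` and `run_on_gpu` are hypothetical stand-ins, and a real implementation would serialise whatever gprMax's build phase actually produces.

```python
import pickle
from pathlib import Path

# Hypothetical stand-ins for gprMax's build and solve phases;
# NOT part of the gprMax API.
def build_model(i):
    return {"id": i, "grid": [0] * 4}

def run_on_gpu(model):
    return f"solved model {model['id']}"

def cpu_stage(indices, outdir):
    """CPU-queue job: build each model and write it to shared storage."""
    for i in indices:
        path = Path(outdir) / f"model_{i:04d}.pkl"
        with open(path, "wb") as f:
            pickle.dump(build_model(i), f)

def gpu_stage(outdir):
    """GPU-queue job (run later): load each pre-built model and solve it."""
    results = []
    for path in sorted(Path(outdir).glob("model_*.pkl")):
        with open(path, "rb") as f:
            results.append(run_on_gpu(pickle.load(f)))
    return results
```

Because the two stages only communicate through files, they can be submitted to entirely different HPC queues with a job dependency between them, matching the cheap-CPU/expensive-GPU split described above.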

@Harshsaini001

# Sketch of a hypothetical async API (gprMax.Model and
# gprMax.runModelsAsync do not exist in the current gprMax API):
import gprMax

models = [
    gprMax.Model(data1),
    gprMax.Model(data2),
    gprMax.Model(data3),
]

# Build and run the models asynchronously, collecting results as they finish
results = gprMax.runModelsAsync(models)

for result in results:
    print(result.predictions)
    print(result.error)

@Jay-sanjay

Hello everyone, I want to contribute to the gprMax project. Can someone assist me further with it?
