Use C bindings in MPI example #81
base: master
Conversation
Thanks for this submission! We're currently integrating a preexisting, independent patch that does the migration, so we will try your fixes on top of this once it's merged upstream.
Did your tests cover build and run of the
I followed the examples documentation, and ran with `mpiexec -n # ./exMPI03 [run.mac]`.
I have tested exMPI01 and exMPI04 and they both worked fine. I have also tested a simulation of mine, with a minor edit to the parser. This way I could pass the macro directly to the main simulation rather than to the MPI interface. It would still load correctly, but I can also define certain command-line parameters that are quite useful in batch mode, like scoring, thread number per node, detector construction parameters, and so on.
I tried
I am sorry, I mixed this up with the parallel discussions ongoing on GitLab which @bmorgan mentioned above. I was testing the MR on GitLab and not yours!
Sorry, my bad. I tested exMPI01 and exMPI02; 03 and 04 need a partial rewrite in the runMerger to use MPI_INT instead of MPI::INT. After that, they do run. About the examples, what is the expected output? EDIT: I realise what I did. The manager recognises a file as a macro only if its name starts with "macro". I should edit it so that it accepts any .mac file.
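For reference, a minimal sketch of the kind of change described above, with the old C++-binding call shown in a comment; the function and variable names here are illustrative, not the actual code in the run merger:

```cpp
#include <mpi.h>

// Hypothetical illustration: summing an event counter across ranks with the
// C bindings (MPI_INT, MPI_SUM, MPI_COMM_WORLD) instead of the removed C++
// bindings (MPI::INT, MPI::SUM, MPI::COMM_WORLD).
int MergeEventCount(int localEvents)
{
  int totalEvents = 0;
  // Old C++ bindings:
  //   MPI::COMM_WORLD.Reduce(&localEvents, &totalEvents, 1, MPI::INT, MPI::SUM, 0);
  MPI_Reduce(&localEvents, &totalEvents, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
  return totalEvents;  // meaningful on rank 0 only
}
```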
After this last commit, the macros get parsed even when they are outside the macro folder. I have attached the terminal output: run_exMPI01.txt
Thanks for the updates; these fix the hanging, and the non-MPI and MPI runs work, at least with 2 instances. There are a few compiler warnings in G4mpi related to shadowing and unused variables with GCC 11 and 13. Could you fix these please?
Co-authored-by: Ben Morgan <[email protected]>
I have tested the `provided_` return condition from MPI_Init_thread and it seems necessary. Please let me know what you think.
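For context, a minimal sketch of the kind of check on the `provided` value under discussion; the requested threading level and the error handling are assumptions, not necessarily what G4mpi does:

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv)
{
  int provided = 0;
  // Request a threading level; the level G4mpi actually requests may differ.
  MPI_Init_thread(&argc, &argv, MPI_THREAD_SERIALIZED, &provided);

  // The check in question: bail out if the MPI library cannot grant the
  // requested level, instead of silently continuing.
  if (provided < MPI_THREAD_SERIALIZED) {
    std::fprintf(stderr, "Insufficient MPI threading support (provided=%d)\n", provided);
    MPI_Abort(MPI_COMM_WORLD, 1);
  }

  MPI_Finalize();
  return 0;
}
```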
…e tested against 10.2.0
Co-authored-by: Ben Morgan <[email protected]>
Great! I'll get started on getting this tested upstream and reconciled with the other private patch.
This pull request rewrites the MPI interface to use the C bindings instead of the C++ bindings. The C++ bindings have been deprecated since OpenMPI version 3, require explicit flags to compile, and have been removed completely from version 5 onwards.
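As an illustration (not code taken from the patch itself), the migration amounts to replacing C++-binding calls with their C-binding equivalents:

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv)
{
  // C++ bindings (deprecated, removed from OpenMPI 5):
  //   MPI::Init(argc, argv);
  //   int rank = MPI::COMM_WORLD.Get_rank();
  //   MPI::Finalize();

  // Equivalent C bindings, available on any modern MPI implementation:
  MPI_Init(&argc, &argv);
  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  std::printf("Hello from rank %d\n", rank);
  MPI_Finalize();
  return 0;
}
```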
This simple rewrite allows for MPI usage on any modern system. It has been tested on Geant4 11.2.1 and Geant4 11.3.0 using both OpenMPI 4.1.4 and 5.0.2. Tests have been done on a laptop and on a SLURM cluster.
The manager's parser has also been slightly rewritten so that command-line arguments can be passed through to the program. This is especially useful for setting the number of threads per node.
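A minimal sketch of how such an argument could be consumed in an application's main(); the `-t` flag, the loop, and the overall structure are illustrative assumptions rather than the patch's actual parser:

```cpp
#include <cstdlib>
#include <cstring>

#include "G4MPImanager.hh"
#include "G4RunManagerFactory.hh"

int main(int argc, char** argv)
{
  // The MPI manager is constructed from argc/argv, as in the existing examples.
  auto* g4MPI = new G4MPImanager(argc, argv);

  // Hypothetical "-t N" option: number of worker threads per node.
  int nThreads = 1;
  for (int i = 1; i < argc - 1; ++i) {
    if (std::strcmp(argv[i], "-t") == 0) { nThreads = std::atoi(argv[i + 1]); }
  }

  auto* runManager = G4RunManagerFactory::CreateRunManager(G4RunManagerType::Default);
  runManager->SetNumberOfThreads(nThreads);

  // ... detector construction, physics list, and BeamOn via the MPI session ...

  delete runManager;
  delete g4MPI;
  return 0;
}
```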
The default behaviour would be to create one ("virtual") MPI node per thread, allowing the use of any number of threads in each node.