Request for Feedback on Parallel I/O Performance with PnetCDF-Python on HPC Cluster #63
Hello,
I was able to run the test program. Thanks. Since I do not have access to a Panasas system, it would help if you could test a few changes.
Hi, I tried that, and it didn't change the time taken.
I have a few suggestions.
I tested the first two points.
Thanks. These timing results show MPI-IO is not taking effect as it is supposed to. I wonder if you can run coll_perf.c with more MPI processes and on more compute nodes.
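(For anyone following this thread: coll_perf.c is a collective I/O benchmark from ROMIO's test suite. As a much-simplified stand-in, a sketch like the one below, assuming mpi4py's MPI-IO bindings, can at least measure aggregate collective write bandwidth as the process count grows; the file name and per-process transfer size are placeholders.)

```python
# Hedged sketch: time a collective MPI-IO write and report aggregate
# bandwidth. A much-simplified analogue of coll_perf.c, not a replacement.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()

nbytes = 16 * 1024 * 1024                     # 16 MiB per process (assumed)
buf = np.full(nbytes // 8, rank, dtype=np.float64)

fh = MPI.File.Open(comm, "coll_perf_test.dat",
                   MPI.MODE_CREATE | MPI.MODE_WRONLY)
comm.Barrier()
t0 = MPI.Wtime()
fh.Write_at_all(rank * nbytes, buf)           # collective write, one block per rank
t1 = MPI.Wtime()
fh.Close()

# Report the slowest rank's time, since collective I/O finishes together
tmax = comm.reduce(t1 - t0, op=MPI.MAX, root=0)
if rank == 0:
    gib = nbytes * nprocs / 1024**3
    print(f"{nprocs} procs: {gib:.2f} GiB in {tmax:.3f} s, "
          f"{gib / tmax:.2f} GiB/s aggregate")
```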
FYI, there are a few MPI-IO hints for Panasas.
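(In case it is useful to others landing here: MPI-IO hints can be passed from pnetcdf-python through an mpi4py MPI.Info object at file creation. A minimal sketch, assuming the ROMIO PanFS hint names, which should be verified against your MPICH/ROMIO build:)

```python
from mpi4py import MPI
import pnetcdf

info = MPI.Info.Create()
# PanFS-specific ROMIO hint (name assumed; check your ROMIO version)
info.Set("panfs_concurrent_write", "1")
# Generic collective-buffering hint, often worth toggling as well
info.Set("romio_cb_write", "enable")

# pnetcdf-python forwards the Info object to MPI-IO at file creation
f = pnetcdf.File("output.nc", mode="w", comm=MPI.COMM_WORLD, info=info)
```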
Thanks, I am in contact with the system admins. I will let you know if there are any updates.
write_out.py.txt
Please let us know what file system you are using (is it a parallel file system?). If you can edit your short program to add a main function so we can reproduce the performance numbers, that would be helpful.
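(A minimal sketch of what such a self-contained driver might look like, assuming the pnetcdf-python API (File, def_dim, def_var, put_var_all); the array sizes, NC_DOUBLE constant, and output path are placeholders, not taken from write_out.py:)

```python
# Hedged sketch: each rank writes its own row block of a 2D array
# collectively, then reports the slowest rank's write time.
import numpy as np
from mpi4py import MPI
import pnetcdf


def main():
    comm = MPI.COMM_WORLD
    rank, nprocs = comm.Get_rank(), comm.Get_size()

    ny, nx = 1024, 1024                      # local block per process (assumed)
    buf = np.full((ny, nx), rank, dtype=np.float64)

    f = pnetcdf.File("testfile.nc", mode="w", comm=comm, info=None)
    dim_y = f.def_dim("Y", ny * nprocs)      # rows partitioned across ranks
    dim_x = f.def_dim("X", nx)
    var = f.def_var("var", pnetcdf.NC_DOUBLE, (dim_y, dim_x))
    f.enddef()

    comm.Barrier()
    t0 = MPI.Wtime()
    # Collective write: rank i owns rows [i*ny, (i+1)*ny)
    var.put_var_all(buf, start=[rank * ny, 0], count=[ny, nx])
    t1 = MPI.Wtime()
    f.close()

    tmax = comm.reduce(t1 - t0, op=MPI.MAX, root=0)
    if rank == 0:
        gib = buf.nbytes * nprocs / 1024**3
        print(f"wrote {gib:.2f} GiB in {tmax:.3f} s ({gib / tmax:.2f} GiB/s)")


if __name__ == "__main__":
    main()
```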