
Commit c38609e

Update README.md (#802)
1 parent f785c7b commit c38609e

File tree

1 file changed: +4, -3 lines changed

README.md

Lines changed: 4 additions & 3 deletions
@@ -157,9 +157,10 @@ They are organized below.
 
 * GPU compatible on NVIDIA ([P/V/A/H]100, GH200, etc.) and AMD (MI[1/2/3]00+) GPU and APU hardware
 * Ideal weak scaling to 100% of the largest GPU and superchip supercomputers
-  * \>10K NVIDIA GPUs on [OLCF Summit](https://www.olcf.ornl.gov/summit/) (NV V100-based)
-  * \>66K AMD GPUs (MI250X) on the first exascale computer, [OLCF Frontier](https://www.olcf.ornl.gov/frontier/) (AMD MI250X-based)
-  * \>3K AMD APUs on [LLNL Tuolumne](https://hpc.llnl.gov/hardware/compute-platforms/tuolumne) (El Capitan family, AMD MI300A-based)
+  * \>36K AMD APUs (MI300A) on [LLNL El Capitan](https://hpc.llnl.gov/hardware/compute-platforms/el-capitan)
+  * \>3K AMD APUs (MI300A) on [LLNL Tuolumne](https://hpc.llnl.gov/hardware/compute-platforms/tuolumne)
+  * \>33K AMD GPUs (MI250X) on the first exascale computer, [OLCF Frontier](https://www.olcf.ornl.gov/frontier/)
+  * \>10K NVIDIA GPUs (V100) on [OLCF Summit](https://www.olcf.ornl.gov/summit/)
 * Near compute roofline behavior
 * RDMA (remote direct memory access; GPU-GPU direct communication) via GPU-aware MPI on NVIDIA (CUDA-aware MPI) and AMD GPU systems
 * Optional single-precision computation and storage
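The GPU-aware MPI item in the hunk above refers to passing device-resident buffers directly to MPI calls, so ranks exchange data GPU-to-GPU (RDMA) without staging through host memory. The following is a minimal sketch of that pattern in C with CUDA, assuming a CUDA-aware MPI build and at least two ranks; it is illustrative only and not MFC source code.

```c
/* Sketch: CUDA-aware MPI point-to-point exchange of a device buffer.
 * Assumes the MPI library was built with GPU (CUDA) support, so device
 * pointers can be handed straight to MPI_Send/MPI_Recv. Not MFC code. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;                            /* doubles to exchange */
    double *d_buf = NULL;
    cudaMalloc((void **)&d_buf, n * sizeof(double));  /* device allocation */

    if (rank == 0) {
        cudaMemset(d_buf, 0, n * sizeof(double));
        /* Device pointer passed directly: no cudaMemcpy to a host buffer. */
        MPI_Send(d_buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```

With a GPU-aware MPI implementation the transfer can go GPU-to-GPU over RDMA; without one, passing a device pointer typically fails or forces host staging, which is why the README calls the feature out explicitly.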
