diff --git a/01_year-in-review/about-alcf.md b/01_year-in-review/about-alcf.md
new file mode 100644
index 0000000..7ba12d5
--- /dev/null
+++ b/01_year-in-review/about-alcf.md
@@ -0,0 +1,21 @@
+---
+layout: page
+
+theme: white
+permalink: year-in-review/about-alcf
+
+title: About ALCF
+hero-img-source: "TCSBuilding.jpg"
+hero-img-caption: "The ALCF is a national scientific user facility located at Argonne National Laboratory."
+
+aside: about-numbers.md
+---
+
+The Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy (DOE) Office of Science user facility at Argonne National Laboratory, enables breakthroughs in science and engineering by providing supercomputing and AI resources to the research community.
+
+ALCF computing resources—available to researchers from academia, industry, and government agencies—support large-scale computing projects aimed at solving some of the world’s most complex and challenging scientific problems. Through awards of computing time and support services, the ALCF enables researchers to accelerate the pace of discovery and innovation across a broad range of disciplines.
+
+As a key player in the nation's efforts to provide the most advanced computing resources for science, the ALCF is helping to chart new directions in scientific computing through a convergence of simulation, data science, and AI methods and capabilities.
+
+Supported by the DOE’s Advanced Scientific Computing Research (ASCR) program, the ALCF and its partner organization, the Oak Ridge Leadership Computing Facility, operate leadership-class supercomputing resources that are orders of magnitude more powerful than the systems typically used for open scientific research.
+
diff --git a/01_year-in-review/directors-letter.md b/01_year-in-review/directors-letter.md
new file mode 100644
index 0000000..e355e49
--- /dev/null
+++ b/01_year-in-review/directors-letter.md
@@ -0,0 +1,28 @@
+---
+layout: page
+
+theme: white
+permalink: year-in-review/directors-letter
+
+title: Director’s Letter
+hero-img-source: "papka.png"
+hero-img-caption: "Michael E. Papka, ALCF Director"
+---
+
+The process of planning for and installing a supercomputer takes years. It includes a critical period of stabilizing the system through validation, verification, and scale-up activities, which can vary for each machine. However, unlike ALCF’s previous or current production machines, Aurora’s long ramp-up journey has also included several configuration changes and COVID-related supply chain issues.
+
+Aurora is a highly advanced system designed for a wide range of AI and scientific computing applications, including training a one-trillion-parameter large language model for scientific research. Aurora’s architecture has more endpoints in its interconnect technology than any other system, and with over 60,000 GPUs, it contains more GPUs than any other supercomputer in the world.
+
+In 2023, the ALCF made significant progress toward realizing Aurora’s full capabilities. In June, the installation of Aurora’s 10,624th and final blade was completed. Shortly after, Argonne submitted the results of benchmarking runs on roughly half of Aurora to the TOP500. These results were included in the November announcement of the world’s fastest supercomputers, in which Aurora secured the second position. Once the full system goes online, its theoretical peak performance is expected to be approximately two exaflops.
+
+Some application teams participating in the DOE’s Exascale Computing Project and the ALCF’s Aurora Early Science Program have begun using Aurora to scale and optimize their applications for the system’s initial science campaigns. Soon to follow will be all the early science teams and an additional 24 INCITE research teams in 2024.
+
+This new exascale machine brings with it other big changes. Theta, one of the ALCF’s production systems, was retired on December 31, 2023. ThetaGPU will be decoupled and reconfigured to become a new system named Sophia, which will be used for AI development and as a production resource for visualization and analysis. Meanwhile, the ALCF AI Testbed will continue to make more of its systems available to the research community as production resources.
+
+For more than three decades, researchers at Argonne have been developing tools and methods that connect powerful computing resources with large-scale experiments, such as the Advanced Photon Source and the DIII-D National Fusion Facility. Their work is shaping the future of inter-facility workflows by automating them and identifying ways to make these workflows reusable and adaptable for different experiments. Argonne’s Nexus effort, in which ALCF plays a key role, offers the framework for a unified platform to manage high-throughput workflows across the HPC landscape.
+
+In the following pages, you will learn more about how Nexus supports the DOE’s goal of building a broadscale Integrated Research Infrastructure (IRI) that leverages supercomputing facilities for experiment-time data analysis. The IRI will accelerate the next generation of data-intensive research by combining scientific facilities, supercomputing resources, and new data technologies like AI, machine learning, and edge computing.
+
+In 2023, we continued our commitment to education and workforce development by organizing a number of informative learning experiences and training events. As part of this effort, ALCF staff members led a pilot program called “Introduction to High-Performance Computing Bootcamp” in collaboration with other DOE labs. This was an immersive program designed for students in STEM to work on energy justice projects using computational and data science tools learned throughout the week. In a separate effort, the ALCF worked on developing the curriculum for its “Intro to AI-Driven Science on Supercomputers” training course, with the aim of adapting the content to introduce undergraduates and graduates to the basics of large language models for future course offerings.
+
+To conclude, I express my sincere gratitude to the exceptional staff, vendor partners, and program office, who have all contributed to making the ALCF one of the leading scientific supercomputing facilities in the world. Each year, we take the time to share our numerous achievements with you in our Annual Report, and with many more exciting changes on the horizon, I truly appreciate this opportunity to do so.
diff --git a/01_year-in-review/year-in-review.md b/01_year-in-review/year-in-review.md
new file mode 100644
index 0000000..ee00104
--- /dev/null
+++ b/01_year-in-review/year-in-review.md
@@ -0,0 +1,76 @@
+---
+layout: page
+
+title: ALCF Leadership
+
+theme: white
+permalink: year-in-review/year-in-review
+---
+
+
+
+{% include media-img.html
+ source= "Allcock_1600x900.jpg"
+ caption= "Bill Allcock"
+%}
+
+# Bill Allcock, ALCF Director of Operations
+
+One of the most significant changes of the year was the retirement of Theta, Cooley, and the theta-fs0 storage system. They were great systems that helped our users accomplish a lot of science. From the operations perspective, there is a silver lining in that retiring them reduces the number of systems we run and makes our operational environment more uniform, but it is still sad to see them go.
+
+We made some significant improvements to our systems over the course of the year.
+- The ALCF AI Testbed’s Graphcore and Groq systems were made available for use, and all four publicly available testbed systems (Cerebras, SambaNova, Groq, and Graphcore) received significant upgrades.
+- Polaris network hardware was upgraded from Slingshot 10 to Slingshot 11, doubling the max theoretical bandwidth. We are working on system software upgrades that will include the Slingshot software, programming environment, and NVIDIA drivers.
+- The HPSS disk cache was increased from 1 PB to 9 PB, significantly improving the probability of a “cache hit” and enabling faster data retrieval.
+
+Operationally, we continue to expand our support for DOE's Integrated Research Infrastructure. Much of our initial work was with Argonne’s Advanced Photon Source, and while we continue to work with them, we are also collaborating with other facilities. From the operations side, we are working to make it faster and easier to create new on-demand endpoints. This includes making the endpoints more robust and easier for scientists to manage.
+
+Last, but certainly not least, the Operations team has been decisively engaged in the Aurora bring-up. We have done extensive work to assist in the stabilization efforts. We continue to develop software and processes to manage the gargantuan volume of logs and telemetry that the system produces. We have provided support for scheduling. Our system admins have developed extensive prologue and epilogue hooks to detect and, where possible, automatically remediate known issues on the system while the vendors work on a permanent resolution. We have also assisted in supporting the user community: because of non-disclosure agreement (NDA) requirements, we set up a special Slack instance to facilitate discussion, and we have assisted in conducting training.
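+
+As a rough illustration of what such hooks do, the sketch below runs a couple of node-health checks before or after a job and signals failure so the scheduler layer can act on it. The specific checks, tool invocations, and thresholds are hypothetical examples, not the actual hooks deployed on Aurora.
+
+```python
+# Minimal sketch of a node-health check in the spirit of a prologue/epilogue
+# hook. The checks and thresholds are hypothetical, not the Aurora tooling.
+import shutil
+import subprocess
+
+
+def gpu_count_ok(expected: int = 6) -> bool:
+    """Return True if the node reports the expected number of GPUs."""
+    try:
+        out = subprocess.run(
+            ["xpu-smi", "discovery"],  # example health-check invocation
+            capture_output=True, text=True, timeout=30, check=True,
+        ).stdout
+    except (OSError, subprocess.SubprocessError):
+        return False
+    return out.count("Device Name") >= expected
+
+
+def scratch_space_ok(path: str = "/tmp", min_free_gb: float = 10.0) -> bool:
+    """Return True if local scratch has enough free space for the next job."""
+    free_gb = shutil.disk_usage(path).free / 1e9
+    return free_gb >= min_free_gb
+
+
+if __name__ == "__main__":
+    healthy = gpu_count_ok() and scratch_space_ok()
+    # A real prologue/epilogue hook would mark the node offline in the
+    # scheduler on failure; here we simply exit nonzero so a caller can act.
+    raise SystemExit(0 if healthy else 1)
+```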
+
+We continue to collaborate with Altair Engineering and the OpenPBS community. We found some scale-related bugs that were making administration on Aurora slow and difficult; Altair provided patches very quickly and integrated those fixes into the production releases. We also continued our work on porting PBS to the AI Testbed systems, although their unique hardware architectures and constraints have been challenging. Later in the year, we were forced to table the AI system work and focus on Aurora.
+
+
+{% include media-img.html
+ source= "Kumaran_1600x900.jpg"
+ caption= "Kalyan Kumaran"
+%}
+
+# Kalyan Kumaran, ALCF Director of Technology
+
+Over the past year, we made considerable progress in deploying Aurora, enhancing our AI for Science capabilities, and advancing the development of DOE’s Integrated Research Infrastructure (IRI). On the Aurora front, our team was instrumental in enabling a partial system run that earned the #2 spot on the Top500 List in November. It was also great to see Aurora’s DAOS storage system place #1 on the IO500 production list. We helped get several early science applications up and running on Aurora – some of which have scaled to 2,000 nodes with very promising performance numbers compared to other GPU-powered systems. Our team also made some notable advances with scientific visualizations, demonstrating interactive visualization capabilities using blood flow simulation data generated with the HARVEY code on Aurora hardware and producing animations from HACC cosmology simulations that ran at scale on the system.
+
+We continued to work closely with Intel to improve and scale oneAPI software, bringing many pieces into production. On Aurora, the AI for Science models driving the deployment of AI frameworks (TensorFlow, PyTorch) have achieved average single-GPU performance more than 2x faster than on an NVIDIA A100, a result driven by close collaboration between Argonne staff and Intel engineers. Other efforts included using the Argonne-developed chipStar HIP implementation for Intel GPUs to get HIP applications running on Aurora. To help support Aurora users and the broader exascale computing community in the future, we played a role in launching the DAOS Foundation, which is working to advance the use of DAOS for next-generation HPC and AI/ML workloads, and the Unified Acceleration (UXL) Foundation, which was formed to drive an open-standard accelerator software ecosystem. ALCF team members also continued to contribute to the development of standards for various programming languages and frameworks, including C++, OpenCL, SYCL, and OpenMP.
+
+In the AI for science realm, we enhanced the capabilities of the ALCF AI Testbed with two new system deployments (Groq, Graphcore) and two system upgrades (Cerebras, SambaNova). With a total of four different accelerators available for open science research, we partnered with the vendors to host a series of ALCF training workshops, as well as an SC23 tutorial, that introduced each system’s hardware and software and helped researchers get started. The team published a paper at SC23 on performance portability across the three major GPU vendors’ architectures, demonstrating that all three are well suited to AI for science workloads; the Intel GPU on Aurora demonstrated the best performance at the time of the study. Our staff also contributed to the development of MLCommons’ new storage performance benchmark for AI/ML workloads and submitted results using our Polaris supercomputer and Eagle file system, which demonstrated efficient I/O operations for state-of-the-art AI applications at scale. In addition, we deployed a large language model service on Sunspot and demonstrated its capabilities at Intel’s SC23 booth.
+
+Finally, our ongoing efforts to develop IRI tools and capabilities got a boost with Polaris and the launch of Argonne’s Nexus — a coordinated effort that builds on our decades of research to integrate HPC resources with experiments. We currently have workflows from the Advanced Photon Source and the DIII-D National Fusion Facility running on Polaris, as well as workflows prototyped for DOE’s Earth System Grid Federation and Fermilab’s flagship Short Baseline Neutrino Program. Our team also delivered talks to share our IRI research at the Monterey Data Conference, the Smoky Mountains Computational Sciences and Engineering Conference, Confab23, and the DOE booth at SC23. With momentum building for continued advances in our IRI activities, the Aurora deployment, and AI for science, we have a lot to look forward to in 2024.
+
+{% include media-img.html
+ source= "Ramprakash_1600x900.jpg"
+ caption= "Jini Ramprakash"
+%}
+
+# Jini Ramprakash, ALCF Deputy Director
+
+It was a busy year for the ALCF as we continued to make strides in deploying new systems, tools, and capabilities to support HPC- and AI-driven scientific research, while also broadening our outreach efforts to engage with new communities. In the outreach space, we partnered with colleagues at the Exascale Computing Project, NERSC, OLCF, and the Sustainable Horizons Institute to host DOE’s first “Intro to HPC Bootcamp.” With an emphasis on energy justice and workforce development, the event welcomed around 60 college students (many with little to no background in scientific computing) to use HPC for hands-on projects focused on making positive social impacts. It was very gratifying to see how engaged the students were in this immersive, week-long event. The bootcamp is a great addition to our extensive outreach efforts aimed at cultivating the next-generation computing workforce.
+
+Our ongoing efforts to develop an Integrated Research Infrastructure (IRI) also made considerable progress this year. As a member of DOE’s IRI Task Force and IRI Blueprint Activity over the past few years, I’ve had the opportunity to collaborate with colleagues across the national labs to formulate a long-term strategy for integrating computing facilities like the ALCF with data-intensive experimental and observational facilities. In 2023, we released the IRI Architecture Blueprint Activity Report, which lays out a framework for moving ahead with coordinated implementation efforts across DOE. At the same time, the ALCF continued to develop and demonstrate tools and methods to integrate our supercomputers with experimental facilities, such as Argonne’s Advanced Photon Source and the DIII-D National Fusion Facility. This year, Argonne launched the “Nexus” effort, which brings together all of the lab’s new and ongoing research activities and partnerships in this domain, ensuring they align with DOE’s broader IRI vision.
+
+We also made progress toward launching the Argonne Enterprise Registration System, a new lab-wide registration platform aimed at standardizing data collection and processing for various categories of non-employees, including facility users. In 2023, we defined system requirements and issued a request for proposals for building the platform. Ultimately, the new system will help eliminate redundant data entry, simplify registration processes for both users and staff, and enhance our reporting capabilities.
+
+As a final note on 2023, we kicked off the ALCF-4 project to plan for our next-generation supercomputer, with DOE approving the CD-0 (Critical Decision-0) mission need for the project in April. We also established the leadership team (with myself as the project director and Kevin Harms as technical director) and began conversations with vendors to discuss their technology roadmaps. We look forward to ramping up the ALCF-4 project in 2024.
+
+{% include media-img.html
+ source= "Riley_1600x900.jpg"
+ caption= "Katherine Riley"
+%}
+
+# Katherine Riley, ALCF Director of Science
+
+Year after year, our user community breaks new ground in using HPC and AI for science. From improving climate modeling capabilities to speeding up the discovery of new materials and advancing our understanding of complex cosmological phenomena, the research generated by ALCF users never ceases to amaze me.
+
+In 2023, we supported 18 INCITE projects and 33 ALCC projects (across two ALCC allocation cycles), as well as numerous Director’s Discretionary projects. Many of these projects were among the last to use Theta, which was retired at the end of the year. Over its 6+ year run as our production supercomputer, Theta delivered 202 million node-hours to 636 projects. The system also played a key role in bolstering our facility’s AI and data science capabilities. Theta was a remarkably productive and reliable machine that will be missed by ALCF users and staff alike.
+
+Research projects supported by ALCF computing resources produced 240 publications in 2023. You can read about several of these efforts in the science highlights section of this report, including a University of Illinois Chicago team that identified the exact reaction coordinates for a key protein mechanism for the first time; a team from the University of Dayton Research Institute and Air Force Research Laboratory that shed light on the complex thermal environments encountered by hypersonic vehicles; and an Argonne team that investigated the impact of disruptions in cancer screening caused by the COVID-19 pandemic.
+
+It was also a very exciting year for Aurora as early science teams began using the exascale system for the first time. After years of diligent work to prepare codes for Aurora’s unique architecture, the teams were able to begin scaling and optimizing their applications on the machine. Their early performance results have been very promising, giving us a glimpse of what will be possible when teams start using the full supercomputer for their research campaigns next year.
diff --git a/02_features/ai-training.md b/02_features/ai-training.md
new file mode 100644
index 0000000..6c81c61
--- /dev/null
+++ b/02_features/ai-training.md
@@ -0,0 +1,52 @@
+---
+layout: page
+theme: dark
+permalink: features/ai-training
+
+title: Theta Retires After Years of Enabling Scientific Breakthroughs
+hero-mp4-source: "theta.mp4"
+hero-webm-source: "theta.webm"
+hero-img-source: Theta.jpg
+hero-img-caption: "The ALCF's Theta supercomputer supported more than 600 research projects before being retired at the end of 2023."
+intro: "Theta’s retirement marks the end of a productive run of enabling groundbreaking research across diverse fields, including materials discovery, supernova simulations, and AI for science."
+---
+
+
+After more than six years of enabling breakthroughs in scientific computing, the ALCF’s Theta supercomputer was retired at the end of 2023. [Launched in July 2017](https://www.alcf.anl.gov/news/argonnes-theta-supercomputer-goes-online), the machine delivered 202 million compute hours to more than 600 projects, propelling advances in areas ranging from battery research to fusion energy science.
+
+Theta was a pivotal system for science at Argonne and beyond. Not only did Theta deliver on the ALCF’s mission to enable large-scale computational science campaigns, but it was also the supercomputer that continued the ALCF’s transformation into a user facility that supports machine learning and data science methods alongside more traditional modeling and simulation projects.
+
+Theta’s run as an Argonne supercomputer coincided with the emergence of AI as a critical tool for science. The system provided researchers with a platform that could handle a mix of simulation, AI, and data analysis tasks, catalyzing groundbreaking studies across diverse scientific domains.
+
+Around the same time that Theta made its debut in 2017, the facility launched the ALCF Data Science Program (ADSP) to support HPC projects that were employing machine learning and other AI methods to tackle big data challenges. This initiative gave the facility’s data science and learning capabilities a boost while also building up a new community of users.
+
+Theta is succeeded by Polaris and the Aurora exascale system as the lab’s primary supercomputers for open scientific research. Theta’s Intel architecture and its expansion to include NVIDIA GPUs have played a key role in helping the facility and its user community transition to Polaris’s hybrid architecture and Aurora’s cutting-edge Intel exascale hardware. Theta’s MCDRAM mode, for example, helped pave the way to Aurora’s high-bandwidth memory capabilities.
+
+{% include media-img.html
+ source= "ThetaGPU.jpg"
+ caption= "In 2020, the ALCF augmented Theta with the installation of NVIDIA GPUs to support COVID-19 research."
+ credit= "Argonne National Laboratory"
+%}
+
+Funded by the Coronavirus Aid, Relief and Economic Security (CARES) Act in 2020, [the system’s GPU hardware expansion](https://www.alcf.anl.gov/news/argonne-augments-theta-supercomputer-gpus-accelerate-coronavirus-research), known as ThetaGPU, was initially dedicated to COVID-19 research. The GPU-powered component was later made available to all research projects. After Theta’s retirement, the ThetaGPU hardware was repurposed to create a new machine called Sophia for specialized tasks, including a major focus on supporting AI for science.
+
+# Supercomputer, Super Science
+
+Beyond its powerful hardware, the system’s legacy will be the research breakthroughs it enabled over the years. From detailed molecular simulations to massive cosmological models, Theta supported hundreds of computationally intensive research projects that are only possible at a supercomputing facility like the ALCF.
+
+Theta allowed researchers to perform some of the world’s largest simulations of [engines](https://www.alcf.anl.gov/news/argonne-conducts-largest-ever-simulation-flow-inside-internal-combustion-engine) and [supernovae](https://www.alcf.anl.gov/news/largest-collection-3d-supernova-simulations-leads-new-insights-explosion-dynamics). The system powered efforts to [model the spread of COVID-19](https://www.alcf.anl.gov/news/argonne-epidemiological-supercomputing-model-showcases-innovation) and [assess the energy use of the nation’s buildings](https://www.alcf.anl.gov/news/argonne-supercomputing-resources-power-energy-savings-analysis). It enabled AI-driven research to accelerate the search for [new catalysts](https://www.alcf.anl.gov/news/machine-learning-model-speeds-assessing-catalysts-decarbonization-technology-months) and [promising drug candidates](https://www.alcf.anl.gov/news/researchers-leverage-argonne-s-theta-supercomputer-identify-covid-19-targets-and-therapeutics). Theta also gave industry R&D a boost, [helping TAE Technologies](https://www.alcf.anl.gov/news/argonne-and-tae-technologies-heating-plasma-energy-research) inform the design of its fusion energy devices, [advancing 3M’s efforts](https://www.alcf.anl.gov/news/new-machine-learning-simulations-reduce-energy-need-mask-fabrics-other-materials) to improve the energy efficiency of a manufacturing process, and [generating data to aid ComEd](https://www.alcf.anl.gov/news/comed-report-shows-how-science-and-supercomputers-help-utilities-adapt-climate-change) in preparing for the potential impacts of climate change. The list of impactful science projects goes on and on.
+
+One of the pioneering machine learning projects was led by Jacqueline Cole of the University of Cambridge. With support from the ADSP, her team used Theta to speed up the process of [identifying new materials for improved solar cells](https://www.alcf.anl.gov/news/scientists-use-machine-learning-identify-high-performing-solar-materials). It began with an effort to sort through hundreds of thousands of scientific journals to collect data on a wide variety of chemical compounds. The team created an automated workflow that combined simulation, data mining, and machine learning techniques to zero in on the most promising candidates from a pool of nearly 10,000 compounds. This allowed the researchers to pinpoint five high-performing materials for laboratory testing.
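+
+The screening stage of a workflow like this can be illustrated with a short sketch: fit a regression model on compounds whose properties are already known, then rank a larger candidate pool by the predicted property and keep the top few for laboratory follow-up. The sketch below uses synthetic data and a generic scikit-learn model; it is an illustration of the approach, not the Cambridge team's actual pipeline.
+
+```python
+# Generic sketch of ML-based candidate screening: rank a pool of compounds by
+# a predicted property and keep the top few. Random numbers stand in for the
+# descriptors and targets a real workflow would mine from the literature and
+# simulations; this is not the actual ADSP pipeline.
+import numpy as np
+from sklearn.ensemble import RandomForestRegressor
+
+rng = np.random.default_rng(0)
+
+# Descriptors and a target property for compounds with known values.
+X_known = rng.normal(size=(500, 16))
+y_known = X_known[:, 0] - 0.5 * X_known[:, 3] + rng.normal(scale=0.1, size=500)
+
+# A much larger pool of unscreened candidate compounds with the same features.
+X_pool = rng.normal(size=(10_000, 16))
+
+model = RandomForestRegressor(n_estimators=200, random_state=0)
+model.fit(X_known, y_known)
+
+# Rank the pool by predicted property and keep the top five for follow-up.
+scores = model.predict(X_pool)
+top_five = np.argsort(scores)[::-1][:5]
+print("Candidates selected for laboratory testing:", top_five.tolist())
+```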
+
+
+
+{% include media-video.html
+ embed-code= ''
+ caption= "Researchers from Princeton University are using Polaris for large-scale 3D simulations aimed at advancing our understanding of supernova explosions."
+ credit= "ALCF Visualization and Data Analytics Team; Princeton University"
+%}
+
+
+Simulating supernova explosions is another area of research that benefitted from Theta’s computational muscle. As part of a multi-year project, Adam Burrows of Princeton University used the supercomputer to advance the state of the art in performing [supernova simulations in 3D](https://www.alcf.anl.gov/news/simulating-supernova-explosions-3d). The team’s work on Theta has included carrying out one of the largest collections of 3D supernova simulations and the longest duration full-physics 3D supernova calculation ever performed. With Theta now retired, the Princeton team continues their work to carry out longer and more detailed 3D supernova simulations on Polaris and Aurora.
+
+While Theta retired from its full-time role at the end of 2023, the system will support one last research campaign in 2024 before it is officially powered down. As part of a collaboration between the DOE-supported LSST Dark Energy Science Collaboration and NASA-supported researchers, a multi-institutional team will use Theta to produce 3 million simulated images for the surveys to be conducted by the Nancy Grace Roman Space Telescope and the Vera C. Rubin Observatory. The team will generate a set of overlapping Roman-Rubin time domain surveys at the individual pixel level. These detailed images will enable the exploration of highly impactful joint science opportunities between the two surveys, especially for dark energy studies.
diff --git a/02_features/alcf-ai-testbed.md b/02_features/alcf-ai-testbed.md
new file mode 100644
index 0000000..38083c5
--- /dev/null
+++ b/02_features/alcf-ai-testbed.md
@@ -0,0 +1,56 @@
+---
+layout: page
+
+theme: dark
+permalink: features/alcf-ai-testbed
+
+title: ALCF Continues to Expand AI Testbed Systems Deployed for Open Science
+hero-img-source: ALCFAITestbed-2023.jpg
+hero-img-caption: "The ALCF AI Testbed's Cerebras, Graphcore, Groq, and SambaNova systems are available to researchers across the world."
+intro: "The ALCF’s testbed of AI accelerators is enabling the research community to advance the use of AI for data-intensive science."
+---
+
+In 2023, the ALCF AI Testbed expanded its offerings to the research community, with the addition of new Graphcore and Groq systems as well as upgraded Cerebras and SambaNova machines.
+
+The testbed is a growing collection of some of the world’s most advanced AI accelerators available for open science. Designed to enable researchers to explore next-generation machine learning applications and workloads to advance AI for science, the systems are also helping the facility to gain a better understanding of how novel AI technologies can be integrated with traditional supercomputing systems powered by CPUs and GPUs.
+
+The testbed’s newest additions give the ALCF user community access to new leading-edge platforms for data-intensive research projects.
+
+- The new Graphcore Bow Pod64 is well suited for both common and specialized machine learning applications, which will help facilitate the use of new AI techniques and model types. The Bow Pod64 relies on Intelligence Processing Units (IPUs), specialized accelerators designed to handle the computational demands of AI-driven tasks. They are equipped with highly efficient memory architectures that include high-bandwidth memory and on-chip memory, and they can more easily support the specialized software frameworks and libraries necessary for AI workloads.
+- The new GroqRack system brings inference-focused capabilities that will aid in using trained machine learning models to make predictions or discover patterns in complex data. Based on the Tensor Streaming Processor (TSP) architecture, the GroqChip processor includes advanced vector and matrix mathematical acceleration units and provides predictable, repeatable performance.
+- The upgrade to a Cerebras Wafer-Scale Cluster WSE-2 expands the ALCF’s existing Cerebras CS-2 system to include two CS-2 engines, enabling near-perfect linear scaling of large language models (LLMs). This capability helps make extreme-scale AI substantially more manageable.
+- The upgrade to a second-generation SambaNova DataScale SN30 system enables a wider range of AI-for-science applications, making massive AI models and datasets more tractable to users. In this system, each accelerator is allocated a terabyte of memory, which is ideal for applications involving LLMs as well as high-resolution imaging data from experimental facilities.
+
+Together, the ALCF AI Testbed systems provide advanced data analysis capabilities that also support DOE's efforts to develop an Integrated Research Infrastructure that seamlessly connects advanced computing resources with data-intensive experiments, such as light sources and fusion experiments, to accelerate the pace of discovery.
+
+{% include media-video2x.html
+ embed-code1= ''
+ caption1= "Venkat Vishwanath Explains the Different Use Cases for AI Inference Workloads"
+ embed-code2= ''
+ caption2= "Arvind Ramanathan Talks About the Use of AI in Science"
+%}
+
+Scientists are leveraging the ALCF AI Testbed systems for a wide range of data-driven research campaigns. The following summaries provide a glimpse of some of the efforts that are benefitting from the AI accelerators’ advanced capabilities.
+
+# Experimental Data Analysis
+Argonne researchers are leveraging multiple ALCF AI Testbed systems to accelerate and scale deep learning models that aid the analysis of X-ray data obtained at Argonne’s Advanced Photon Source (APS). The team is using the ALCF AI Testbed to train models that are too large to run on a single GPU, with the goal of generating improved 3D images from X-ray data.
+
+They are also exploring the use of the ALCF’s AI platforms for fast-inference applications. Their work has yielded some promising initial results, with various models (PtychoNN, BraggNN, and AutoPhaseNN) showing speedups over traditional supercomputers. ALCF and vendor software teams are collaborating with the APS team to achieve further advances.
+
+# Neural Networks
+Graph neural networks (GNNs) are powerful machine learning tools that can process and learn from data represented as graphs. GNNs are being used for research in several areas, including molecular design, financial data, and social networks. ALCF researchers are working to compare the performance of GNN models across multiple ALCF AI Testbed accelerators. With a focus on inference, the team is examining which GNN-specific operators or kernels become computational bottlenecks that affect overall runtime as the number of parameters or the batch size increases.
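+
+A simplified version of that kind of measurement is sketched below: time the two kernels at the heart of many GNN layers, sparse neighborhood aggregation and a dense feature transform, as the graph size grows, and see where the runtime concentrates. The sketch uses plain PyTorch on synthetic random graphs and is meant only to illustrate the methodology, not to reproduce the team's benchmarks.
+
+```python
+# Illustrative timing of two core GNN kernels on synthetic random graphs:
+# sparse neighborhood aggregation (SpMM) and a dense feature transform.
+# This sketches the methodology only; it is not the ALCF benchmark suite.
+import time
+
+import torch
+
+
+def make_graph(num_nodes: int, avg_degree: int = 16) -> torch.Tensor:
+    """Build a random sparse adjacency matrix in COO format."""
+    num_edges = num_nodes * avg_degree
+    src = torch.randint(0, num_nodes, (num_edges,))
+    dst = torch.randint(0, num_nodes, (num_edges,))
+    values = torch.ones(num_edges)
+    indices = torch.stack([src, dst])
+    return torch.sparse_coo_tensor(indices, values, (num_nodes, num_nodes)).coalesce()
+
+
+feat_dim, hidden_dim = 128, 128
+weight = torch.randn(feat_dim, hidden_dim)
+
+for num_nodes in (10_000, 50_000, 100_000):
+    adj = make_graph(num_nodes)
+    feats = torch.randn(num_nodes, feat_dim)
+
+    t0 = time.perf_counter()
+    aggregated = torch.sparse.mm(adj, feats)  # sparse neighborhood aggregation
+    t1 = time.perf_counter()
+    _ = aggregated @ weight                   # dense feature transform
+    t2 = time.perf_counter()
+
+    print(f"{num_nodes:>7} nodes: aggregation {t1 - t0:.4f}s, transform {t2 - t1:.4f}s")
+```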
+
+# COVID-19 Research
+An Argonne-led team relied on the ALCF AI Testbed when using LLMs to discover SARS-CoV-2 variants. Their workflow leveraged AI accelerators alongside GPU-accelerated systems, including the ALCF’s Polaris supercomputer. One of the critical problems the team had to overcome was how to manage extensive genomic sequences, whose size can overwhelm many computing systems when building foundation models. The learning-optimized architecture of the ALCF AI Testbed systems was key to accelerating the training process. The team’s research earned the 2022 Gordon Bell Special Prize for COVID-19 Research.
+
+
+# Battery Materials
+Argonne scientists are leveraging the ALCF AI Testbed to aid in the development of an application that combines two types of computations for research into potential battery materials: (1) running physics simulations of molecules undergoing redox reactions and (2) training a machine learning model that predicts the resulting energies. The application uses the machine learning model to predict the outcomes of the redox simulations, helping to identify molecules with the desired capacity for energy storage. The ALCF AI Testbed has shortened the latency of cycling between executing a new calculation that yields additional training data and using the updated model to select the next calculation.
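+
+The cycle described above is essentially an active learning loop. The sketch below illustrates the idea with a toy stand-in for the physics simulation and a generic surrogate model; the real application couples quantum chemistry calculations with its own model and workflow tooling.
+
+```python
+# Sketch of the active learning cycle described above: a surrogate model
+# predicts redox-related energies, the most promising candidate is "simulated,"
+# and the new result is folded back into the training set. The simulation here
+# is a toy stand-in, not the project's actual physics code or model.
+import numpy as np
+from sklearn.ensemble import GradientBoostingRegressor
+
+rng = np.random.default_rng(1)
+
+
+def run_simulation(descriptor: np.ndarray) -> float:
+    """Toy stand-in for an expensive redox-energy calculation."""
+    return float(descriptor[0] ** 2 - descriptor[1] + rng.normal(scale=0.05))
+
+
+candidates = rng.normal(size=(2_000, 8))   # descriptors for unexplored molecules
+X_train = rng.normal(size=(32, 8))         # initial training set
+y_train = np.array([run_simulation(x) for x in X_train])
+
+for step in range(5):
+    model = GradientBoostingRegressor().fit(X_train, y_train)
+    predictions = model.predict(candidates)
+    best = int(np.argmax(predictions))      # most promising molecule per the model
+
+    y_new = run_simulation(candidates[best])          # expensive HPC step
+    X_train = np.vstack([X_train, candidates[best]])  # fold the result back in
+    y_train = np.append(y_train, y_new)
+    candidates = np.delete(candidates, best, axis=0)
+    print(f"step {step}: predicted {predictions[best]:.3f}, simulated {y_new:.3f}")
+```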
+
+
+
+{% include media-video.html
+ embed-code= ''
+ caption= "The ALCF’s Murali Emani leads a hands-on workshop covering the use and functionality of the ALCF AI Testbed."
+%}
+
diff --git a/02_features/aurora.md b/02_features/aurora.md
new file mode 100644
index 0000000..6e7ec8f
--- /dev/null
+++ b/02_features/aurora.md
@@ -0,0 +1,45 @@
+---
+layout: page
+
+theme: dark
+permalink: features/aurora
+
+title: Bringing Aurora Online
+hero-mp4-source: "aurora.mp4"
+hero-webm-source: "aurora.webm"
+hero-img-source: ALCF-Aurora1.jpg
+hero-img-caption: "The Aurora team uses a specialized machine to install the supercomputer's blades."
+intro: "The ALCF made significant progress in deploying its exascale supercomputer in 2023, completing the hardware installation, registering early performance numbers, and supporting early science teams’ initial runs on the system."
+
+aside: alcf-4.md
+---
+
+In June 2023, the [installation of Aurora’s 10,624th and final blade](https://www.alcf.anl.gov/news/argonne-installs-final-components-aurora-supercomputer) marked a major milestone in the efforts to deploy the ALCF’s exascale supercomputer. With the full machine in place and powered on, the Aurora team was able to begin the process of stress-testing, stabilizing, and optimizing the massive system to prepare for acceptance and full deployment in 2024.
+
+Built in partnership with Hewlett Packard Enterprise (HPE), [Aurora](https://www.alcf.anl.gov/aurora) is one of the fastest supercomputers in the world, with a theoretical peak performance of more than two exaflops of computing power. It is also one of the world’s largest supercomputers, occupying 10,000 square feet and weighing 600 tons. The system is powered by 21,248 Intel Xeon CPU Max Series processors and 63,744 Intel Data Center GPU Max Series processors. Notably, Aurora features more GPUs and more network endpoints in its interconnect technology than any system to date. To pave the way for a machine of this scale, Argonne first had to complete some substantial facility upgrades, including adding new data center space, mechanical rooms, and equipment that significantly increased the building’s power and cooling capacity.
+
+
+
+{% include media-video.html embed-code= '' caption= "ALCF's Christine Simpson and Victor Mateevitsi provide a fun behind-the-scenes look at Aurora." credit= "Argonne National Laboratory" %}
+
+As is the case with all DOE leadership supercomputers, Aurora is a first-of-its-kind system equipped with leading-edge technologies that are being deployed at an unprecedented scale. This presents unique challenges in launching leadership-class systems as various hardware and software issues only emerge when approaching full-scale operations. The Aurora team, which includes staff from Argonne, Intel, and HPE, continues work to stabilize the supercomputer, which includes efforts such as optimizing the flow of data between network endpoints.
+
+# Early Performance Numbers
+
+In November, Aurora demonstrated strong [early performance numbers](https://www.alcf.anl.gov/news/argonne-shares-strong-early-performance-numbers-aurora-supercomputer) while still in the stabilization period, underscoring its immense potential for scientific computing.
+
+At the SC23 conference, the supercomputer made its debut on the semi-annual TOP500 List with a partial system run. Using approximately half of the system’s nodes, Aurora achieved 585.34 petaflops, earning the #2 overall spot. In addition, Aurora’s storage system, DAOS, earned the top spot on the IO500 Production List, a semi-annual ranking of HPC storage performance.
+
+{% include media-img.html
+ source= "SC23-Aurora-talk.jpg"
+ caption= "ALCF's Kevin Harms discusses Aurora during a tech talk at Intel's booth at the SC23 conference."
+ credit= "Argonne National Laboratory"
+%}
+
+# Early Science Access
+
+In another significant milestone for the supercomputer, early science teams began using Aurora for the first time in 2023. Several teams from the ALCF’s Aurora Early Science Program (ESP) and DOE’s Exascale Computing Project (ECP) were able to transition their work from the [Sunspot test and development system](https://www.alcf.anl.gov/news/argonne-s-new-sunspot-testbed-provides-ramp-aurora-exascale-supercomputer) to Aurora to start scaling and optimizing their applications for the supercomputer’s initial science campaigns. Their work has included performing scientifically meaningful calculations across a wide range of research areas.
+
+Once the early science period begins, the ECP and ESP teams will use the machine to carry out innovative research campaigns involving simulation, artificial intelligence, and data-intensive workloads in areas ranging from fusion energy science and cosmology to cancer research and aircraft design. In addition to pursuing groundbreaking research, these early users help to further stress test the supercomputer and identify potential bugs that need to be resolved ahead of its deployment.
+
+In 2024, an additional 24 research teams will begin using Aurora to ready their codes for the system via allocation awards from DOE’s [INCITE program](https://www.alcf.anl.gov/news/incite-program-awards-supercomputing-time-75-high-impact-projects).
diff --git a/02_features/nexus-iri.md b/02_features/nexus-iri.md
new file mode 100644
index 0000000..9be09b3
--- /dev/null
+++ b/02_features/nexus-iri.md
@@ -0,0 +1,104 @@
+---
+layout: page
+
+theme: dark
+permalink: features/nexus-iri
+
+title: Integrating Supercomputers with Experiments
+hero-img-source: polaris+aps_1600x900.jpg
+hero-img-caption: "The co-location of the ALCF and APS at Argonne provides an ideal environment for developing and demonstrating capabilities for a broader Integrated Research Infrastructure."
+intro: "With Argonne’s Nexus effort, the ALCF continues to build off its long history of developing tools and capabilities to accelerate data-intensive science via an Integrated Research Infrastructure."
+---
+
+
+When the massive upgrade at Argonne’s Advanced Photon Source (APS) is completed in 2024, experiments at the powerful X-ray light source are expected to generate 100–200 petabytes of scientific data per year. That’s a substantial increase over the approximately 5 petabytes that were being produced annually at the APS before the upgrade. When factoring in DOE’s four other light sources, the facilities are projected to collectively generate an exabyte of data per year in the coming decade.
+
+The growing deluge of scientific data is not unique to light sources. Telescopes, particle accelerators, fusion research facilities, remote sensors, and other scientific instruments also produce large amounts of data. And as their capabilities improve over time, the data generation rates will only continue to grow. The scientific community’s ability to process, analyze, store, and share these massive datasets is critical to gaining insights that will spark new discoveries.
+
+To help scientists manage the ever-increasing amount of scientific data, [Argonne’s Nexus effort](https://www.anl.gov/nexus-connect) is playing a key role in supporting DOE’s vision to build an Integrated Research Infrastructure (IRI). The development of an IRI would accelerate data-intensive science by creating an environment that seamlessly melds large-scale research facilities with DOE’s world-class supercomputing, artificial intelligence (AI), and data resources.
+
+{% include media-img.html
+ source= "Nexus-Infographic.jpg"
+ caption= "Argonne's Nexus effort is working to advance data-intensive science via an Integrated Research Infrastructure that connects experimental facilities, supercomputing resources and data technologies."
+ credit= "Argonne National Laboratory"
+%}
+
+For over three decades, Argonne has been working to develop tools and methods to integrate its powerful computing resources with experiments. The ALCF’s IRI efforts include a number of successful collaborations that demonstrate the efficacy of combining its supercomputers with experiments for near real-time data analysis. [Merging ALCF supercomputers with the APS](https://www.alcf.anl.gov/news/bright-lights-big-data-how-argonne-bringing-supercomputing-and-x-rays-together-scientific) has been a significant focus of the lab’s IRI-related research, but the work has also involved collaborations with facilities ranging from [DIII-D National Fusion Facility](https://www.alcf.anl.gov/news/close-computation-far-away-demand-analysis-fuels-frontier-science) in California to [CERN’s Large Hadron Collider (LHC)](https://www.alcf.anl.gov/news/argonne-team-brings-leadership-computing-cern-s-large-hadron-collider) in Switzerland.
+
+These collaborations have led to the creation of new capabilities for on-demand computing and managing complex workflows, giving the lab valuable experience to support the DOE IRI initiative. Argonne also operates several resources and services that are key to realizing the IRI vision.
+
+- The ALCF's [Polaris](https://www.alcf.anl.gov/polaris) and [Aurora](https://www.alcf.anl.gov/aurora) systems are powerful supercomputers with advanced capabilities for simulation, AI, and data analysis.
+- The [ALCF AI Testbed](https://www.alcf.anl.gov/alcf-ai-testbed) provides researchers with access to novel AI accelerators for data-intensive tasks and AI workloads, including training, inference, large language models, and computer vision models.
+- The [ALCF Community Data Co-Op (ACDC)](https://www.acdc.alcf.anl.gov) provides large-scale data storage capabilities, offering a portal that makes it easy to share data with external collaborators across the globe.
+- [Globus](https://www.globus.org), a research automation platform created by researchers at Argonne and the University of Chicago, is a not-for-profit service used to manage high-speed data transfers, computing workflows, data collection, and other tasks for experiments.
+
+
+# Streamlining Science
+
+The IRI will not only enable experiments to analyze vast amounts of data, but it will also allow them to process large datasets quickly for rapid results. This is crucial as experiment-time analysis often plays a key role in shaping subsequent experiments.
+
+{% include media-img.html
+ source= "SC23-Nexus-talk.jpg"
+ caption= "Rachana Ananthakrishnan, Globus executive director (left), and Tom Uram, ALCF IRI lead (right), give a talk on Nexus at the DOE booth at the SC23 Conference."
+ credit= "Argonne National Laboratory"
+%}
+
+For the Argonne-DIII-D collaboration, researchers demonstrated how the close integration of ALCF supercomputers could benefit a fast-paced experimental setup. Their work centered on a fusion experiment that used a series of plasma pulses, or shots, to study the behavior of plasmas under controlled conditions. The shots were occurring every 20 minutes, but the data analysis required more than 20 minutes using their local computing resources, so the results were not available in time to inform the ensuing shot. DIII-D teamed up with the ALCF to explore how they could leverage supercomputers to speed up the analysis process.
+
+To help DIII-D researchers obtain results on a between-pulse timescale, the ALCF team automated and shifted the analysis step to ALCF systems, which computed the analysis of every single pulse and returned the results to the research team in a fraction of the time required by the computing resources locally available at DIII-D. Not only did the DIII-D team get the results in time to calibrate the next shot, they also got 16x higher resolution analyses that helped improve the accuracy of their experimental configuration.
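+
+Conceptually, the automated between-shot pipeline resembles the loop sketched below: wait for new shot data, hand the analysis off to a remote HPC resource, and return results well within the window before the next pulse. The helper functions are placeholders for the actual data transfer, job submission, and analysis steps, which in production are handled by services such as Globus and the facility scheduler.
+
+```python
+# Conceptual sketch of a between-shot analysis loop. The helpers below are
+# placeholders; the real workflow uses production data-movement and remote
+# execution services rather than these stubs.
+import time
+
+SHOT_INTERVAL_S = 20 * 60      # a new plasma pulse arrives roughly every 20 minutes
+ANALYSIS_BUDGET_S = 15 * 60    # results must come back before the next pulse
+
+
+def wait_for_new_shot() -> str:
+    """Placeholder: block until the instrument writes a new shot dataset."""
+    time.sleep(1)
+    return "shot-000123"
+
+
+def submit_remote_analysis(shot_id: str) -> str:
+    """Placeholder: ship data to the HPC facility and queue an analysis job."""
+    return f"analysis-job-for-{shot_id}"
+
+
+def fetch_results(job_id: str, timeout_s: float) -> dict:
+    """Placeholder: poll for completion and pull the results back."""
+    return {"job": job_id, "status": "complete"}
+
+
+shot = wait_for_new_shot()
+started = time.monotonic()
+job = submit_remote_analysis(shot)
+results = fetch_results(job, timeout_s=ANALYSIS_BUDGET_S)
+elapsed = time.monotonic() - started
+print(f"{shot}: analysis returned in {elapsed:.1f}s with status {results['status']}")
+# A production loop would repeat this for every pulse (about every SHOT_INTERVAL_S seconds).
+```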
+
+Many APS experiments, including battery research, the exploration of materials failure, and drug development, also need data analyzed in near real-time so scientists can modify their experiments as they are running. By getting immediate analysis results, researchers can use the insights to steer an experiment and zoom in on a particular area to see critical processes, such as the molecular changes that occur during a battery’s charge and discharge cycles, as they are happening.
+
+
+
+{% include media-video.html
+ embed-code= ''
+ caption= "Argonne's Nicholas Schwarz discusses how the integration of Aurora and the upgraded APS will transform science."
+ credit= "Argonne National Laboratory"
+%}
+
+A fully realized IRI would also impact the people conducting the research. Scientists must often devote considerable time and effort to managing data when running an experiment. This includes tasks like storing, transferring, validating, and sharing data before it can be used to gain new insights. The IRI seeks to automate many of these tedious data management tasks so researchers can focus more on the science. This would help streamline the scientific process by freeing up scientists to form hypotheses while experiments are being carried out.
+
+# Supercomputing on Demand
+
+Getting instant access to DOE supercomputers for data analysis requires a shift in how the computing facilities operate. Each facility has established policies and processes for gaining access to machines, setting up user accounts, managing data, and other tasks. If a researcher is set up at one computing facility but needs to use supercomputers at other facilities, they have to go through a similar set of steps again for each site.
+
+Once a project is set up, researchers submit their jobs to a queue, where they wait their turn to run on the supercomputer. While the traditional queuing system helps optimize supercomputer usage at the facilities, it does not support the rapid turnaround times needed for the IRI.
+
+To make things easy for the end users, the IRI will require implementing a uniform way for experimental teams to gain quick access to the DOE supercomputing resources.
+
+To that end, Argonne has developed and demonstrated methods for overcoming both the user account and job scheduling challenges. The co-location of the APS and the ALCF on the Argonne campus has offered an ideal environment for testing and demonstrating such capabilities. When the ALCF launched the Polaris supercomputer in 2022, four of the system’s racks were dedicated to advancing the integration efforts with experimental facilities.
+
+{% include media-img.html
+ source= "ALCF-Polaris.jpg"
+ caption= "The ALCF’s Polaris supercomputer is supporting research to advance the development of an Integrated Research Infrastructure."
+ credit= "Argonne National Laboratory"
+%}
+
+In the case of user accounts, the existing process can get unwieldy for experiments involving several team members who need to use the computing facilities for data processing. Because many experiments have a team of people collecting data and running analysis jobs, it is important to devise a method that supports the experiment independent of who is operating the instruments on a particular day. In response to this challenge, the Argonne team has piloted the idea of employing “service accounts” that provide secure access to a particular experiment instead of requiring each team member to have an active account.
+
+To address the job scheduling issue, the Argonne team has set aside a portion of Polaris nodes to run with “on-demand” and “preemptable” queues. This approach allows time-sensitive jobs to run on the dedicated nodes immediately.
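+
+In practice, a time-sensitive analysis job reaches those dedicated nodes by targeting the corresponding queue at submission time. The snippet below sketches what that looks like with a PBS-style qsub call wrapped in Python; the queue, project, and script names are placeholder values rather than a prescription of the ALCF's actual configuration.
+
+```python
+# Sketch of submitting a time-sensitive job to an on-demand/preemptable queue
+# with a PBS-style scheduler. Queue, project, and script names are placeholders,
+# not the exact ALCF configuration.
+import subprocess
+
+
+def submit_on_demand(script: str, queue: str = "preemptable",
+                     project: str = "MyProject", nodes: int = 2) -> str:
+    """Submit `script` to an on-demand/preemptable queue and return the job ID."""
+    cmd = [
+        "qsub",
+        "-q", queue,              # target the on-demand or preemptable queue
+        "-A", project,            # project allocation to charge
+        "-l", f"select={nodes}",  # number of nodes requested
+        script,
+    ]
+    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
+    return result.stdout.strip()  # qsub prints the new job's ID on success
+
+
+if __name__ == "__main__":
+    job_id = submit_on_demand("analyze_beamline_data.sh")
+    print("Submitted time-sensitive job:", job_id)
+```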
+
+Using data collected during an APS experiment, the team was able to complete their first fully automated end-to-end test of the service accounts and preemptable queues on Polaris with no humans in the loop. While work continues to enable these capabilities at more and more beamlines, this effort points to a future where the integration of the upgraded APS and the ALCF's Aurora exascale supercomputer will transform science at Argonne and beyond.
+
+# Bringing It All Together
+
+While Argonne and its fellow national labs have been working on projects to demonstrate the promise of an integrated research paradigm for the past several years, DOE’s Advanced Scientific Computing Research (ASCR) program made it a more formal initiative in 2020 with the creation of the IRI Task Force. Composed of members from several national labs, including Argonne’s Corey Adams, Jini Ramprakash, Nicholas Schwarz, and Tom Uram, the task force identified the opportunities, risks, and challenges posed by such an integration.
+
+{% include media-img.html
+ source= "IRI-blueprint-report.png"
+ caption= "DOE's IRI Architecture Blueprint Activity Report provides the conceptual foundations to move forward with coordinated DOE implementation efforts."
+%}
+
+ASCR recently launched the IRI Blueprint Activity to create a framework for implementing the IRI. In 2023, the blueprint team, which included Ramprakash and Schwarz, released the [IRI Architecture Blueprint Activity Report](https://www.osti.gov/biblio/1984466), which describes a path forward from the labs’ individual partnerships and demonstrations to a broader long-term strategy that will work across the DOE ecosystem. Over the past year, the blueprint activities have started to formalize with the introduction of IRI testbed resources and environments. Now in place at each of the DOE computing facilities, the testbeds facilitate research to explore and refine IRI ideas in collaboration with teams from DOE experimental facilities.
+
+{% include media-download.html
+ thumbsource= "IRI-blueprint-report.png"
+ filesource= "https://www.osti.gov/biblio/1984466"
+ title= "IRI Architecture Blueprint Activity Report"
+ credit= "Department of Energy"
+ filetype= "PDF"
+ filesize= "1.2kb"
+%}
+
+With the launch of Argonne’s Nexus effort, the lab will continue to leverage its expertise and resources to help DOE and the larger scientific community enable and scale this new paradigm across a diverse range of research areas, scientific instruments, and user facilities.
diff --git a/02_features/performancehighlights.md b/02_features/performancehighlights.md
new file mode 100644
index 0000000..63cdba0
--- /dev/null
+++ b/02_features/performancehighlights.md
@@ -0,0 +1,43 @@
+---
+layout: page
+
+title: Aurora Performance Highlights
+intro: "Now that Aurora is fully assembled, ECP and ESP team members are beginning to transition their work to the supercomputer to ready their applications for full system runs. Here are some early performance results on Aurora."
+teaser-img-source: openmc.PNG
+
+theme: dark
+permalink: features/aurora-performance-highlights
+---
+
+
+
+
+
+
+Argonne’s Leadership Computing Facility Division operates the Argonne Leadership Computing Facility (ALCF) as part of the U.S. Department of Energy’s effort to provide leadership-class computing resources to the scientific community. The ALCF is supported by the DOE Office of Science, Advanced Scientific Computing Research (ASCR) program.
+ +Argonne is a U.S. Department of Energy Laboratory managed by UChicago Argonne, LLC, under contract DE-AC02-06CH11357. The Laboratory’s main facility is outside of Chicago, at 9700 South Cass Avenue, Lemont, Illinois 60439. For information about Argonne and its pioneering science and technology programs, visit www.anl.gov.
+ + +As a leader in the HPC community, the ALCF is actively involved in efforts to broaden the impact of supercomputers and AI for science. The facility also leads and contributes to several activities designed to inspire the next generation of researchers in HPC and the computing sciences.
+
+It is the workforce of the ALCF, and other centers like it, that propels significant achievements in the field of HPC. Computer scientists and other specialists do the work to modify the open-source compilers and libraries used by the scientific community, help scientists get their codes to work on the machines, and develop services that allow researchers to effectively use the machines and move large datasets around.
+ +Having an inclusive and diverse workforce is key to enhancing creativity and productivity in the HPC domain. For the ALCF, this involves collaborating with colleagues from the HPC community to develop strategies for attracting people from underrepresented groups, organizing student camps to inspire young people to pursue STEM careers, and participating in outreach programs that promote diversity in the HPC field.
+ +Argonne was highly involved in the DOE Exascale Computing Project’s Broadening Participation Initiative. This collaborative, multi-lab effort worked to establish a sustainable plan to recruit and retain a diverse HPC workforce by creating a supportive and inclusive culture within the computing sciences at DOE national laboratories. The initiative involved three complementary thrusts, or focus areas, that bolster such efforts in the HPC community.
+ +ALCF led the Intro to HPC thrust, which focused on developing training materials to educate newcomers to HPC. Many undergraduate institutions lack comprehensive training in HPC. As a solution, the Intro to HPC group collaborated with DOE lab communities to identify important HPC topics and create educational materials that effectively convey them. With these materials, the group hosted the pilot Intro to HPC Bootcamp, a weeklong training program that allows undergraduate and graduate students to work on energy justice projects while learning about the fundamentals of HPC such as parallel computing, job scheduling, and data analysis techniques. For a recap of the 2023 program, read the article on our website.
+ +The ALCF team also contributed to the two other Broadening Participation thrusts: the HPC Workforce Development and Retention Action Group, and Sustainable Research Pathways for HPC. The action group shares best practices and develops recommendations and strategies for improving the workforce pipeline through webinars and other outreach materials. Sustainable Research Pathways is an internship and mentoring program that pairs students with ECP teams at different institutions, including ALCF, to work on a variety of projects across application development, software technologies, and computing facilities.
+ + + +The ALCF is also working to strengthen the workforce pipeline by leading and contributing to various educational and outreach programs for students, including summer camps and internship opportunities. These efforts help to introduce students to Argonne scientists and the mission-driven, high-impact research conducted at the lab and beyond, giving them a glimpse of what a career in STEM looks like.
+
+The facility helps put on several Argonne computing camps for middle school and high school students each year, including the CodeGirls@Argonne Camp. Aimed at teaching the basics of coding, the camp allows sixth- and seventh-grade students to try out creative and computational thinking through hands-on activities with Argonne mentors. ALCF staff members also regularly contribute to Argonne’s annual Introduce a Girl to Engineering Day and Science Careers in Search of Women events. These events connect Argonne scientists with young women to introduce them to research at the lab as well as potential STEM career paths.
+ +Another effort is the ACT-SO (Afro-Academic, Cultural, Technological & Scientific Olympics) High School Research Program, through which ALCF staff members mentor local high school students participating in the regional DuPage County ACT-SO competition. With a mission to support the development of a diverse, talented workforce, the program pairs students with Argonne mentors for research projects that use the lab’s facilities and resources.
+ +At the college level, the ALCF is reaching the next generation of AI practitioners through its “Intro to AI-driven Science on Supercomputers” training series. Aimed at undergraduate and graduate students, the series teaches attendees the fundamentals of using AI and supercomputers for scientific research.
+ +The ALCF maintains a presence at many computing conferences and events to communicate career opportunities and recruit new team members. These include the annual Grace Hopper Celebration, an event that brings the research and career interests of women in computing to the forefront. ALCF staff also regularly attend the Richard Tapia Celebration of Diversity in Computing Conference, an annual event that brings together undergraduate and graduate students, faculty, researchers, and professionals in computing from all backgrounds and ethnicities to strengthen diversity in computing.
+ +To provide a local resource and network for women interested in HPC, the ALCF collaborated with the University of Illinois Chicago to found a Women in High Performance Computing (WHPC) chapter, called Chicago WHPC. The chapter aims to increase the participation of women in the HPC field around Chicago, provide resources for women in HPC careers, and mentor students considering professional career paths in computing.
+ +Together, these activities and initiatives are helping the ALCF to establish a diverse, inclusive work environment and talent pipeline that will fuel future innovations in HPC and the computing sciences.
+ + +Every summer, the ALCF opens its doors to a new class of student researchers who work alongside staff mentors to tackle research projects that address issues at the forefront of scientific computing. In 2023, the facility hosted more than 40 students ranging from high school seniors to Ph.D. candidates. Studying topics such as exploring scientific data in virtual reality, advancing X-ray imaging of brain tissue, and improving application memory performance, the interns had the opportunity to gain hands-on experience with some of the most advanced computing technologies in the world. For a recap of the 2023 program, read the article on our website.
+ +The annual CodeGirls@Argonne Camp hosts sixth- and seventh-grade girls each summer for a five-day event dedicated to teaching them the fundamentals of coding. Taught by Argonne computing researchers and staff from the lab’s Learning Center, the in-person camp gives students an opportunity to try out creative and computational thinking through activities that include programming robots. The camp also allows participants to meet women scientists who use code to solve problems and to tour the ALCF’s machine room and visualization lab. For a recap of the 2023 event, read the article on our website.
+ +In July, Argonne hosted its annual Coding for Science Camp for 30 high school freshmen and sophomores who were new to coding. The week-long camp, a joint initiative of Argonne’s Educational Programs Office and the ALCF, promotes problem solving and teamwork skills through hands-on coding activities, such as coding with Python and programming a robot, and interactions with Argonne staff members working in HPC and visualization. For a recap of the 2023 event, read the article on our website.
+ +As part of the national Computer Science Education Week (CSEdWeek) and the Hour of Code in December, ALCF staff members provided virtual talks and demos to Chicago area schools to spark interest in computer science. Working with students in classes from elementary to high school, the volunteers led a variety of activities designed to teach the basics of coding. CSEdWeek was established by Congress in 2009 to raise awareness about the need to elevate computer science education at all levels.
+ +ALCF staff members regularly serve as mentors and volunteers for Argonne’s Introduce a Girl to Engineering Day (IGED) program. The annual event gives eighth-grade students a unique opportunity to discover engineering careers alongside Argonne’s world-class scientists and engineers. Participants hear motivational presentations by Argonne engineers, tour the lab’s cutting-edge research facilities, connect with mentors, engage in hands-on engineering experiments, and compete in a team challenge.
+ +In October, nearly 100 undergraduate students from Chicagoland universities attended an all-day Intel oneAPI workshop hosted at Loyola University. Students had the opportunity to network with peers from other schools, professors, and experts from the ALCF. One highlight from the event was a graduate student panel, during which students had the opportunity to ask panelists about their work and paths to HPC. For a recap of the 2023 event, read the article on our website.
+ +ALCF staff members continued to contribute to Argonne’s annual Science Careers in Search of Women (SCSW) conference. The event hosts female high school students for a day of inspiring lectures, facility tours, career booth exhibits, and mentoring. SCSW provides participants with the unique experience to explore their desired profession or area of interest through interaction with Argonne’s women scientists and engineers.
+ + + +ALCF researchers regularly contribute to some of the world’s leading computing conferences and events to share their latest advances in areas ranging from computational science and AI to HPC software and exascale technologies. In 2023, Argonne staff participated in a wide range of events including SC23, ISC High Performance, Grace Hopper Celebration, SIAM Conference on Computational Science and Engineering, Richard Tapia Celebration of Diversity in Computing Conference, IEEE International Parallel & Distributed Processing Symposium, International Conference on Parallel Processing, International Symposium on Cluster, Cloud and Grid Computing, International Workshop on OpenCL and SYCL, Platform for Advanced Scientific Computing Conference, HPC User Forum, Energy High-Performance Computing Conference, Lustre User Group Conference, Intel eXtreme Performance Users Group Conference, Conference on Machine Learning and Systems, and more.
+ +DOE’s Exascale Computing Project (ECP) is a multi-lab initiative to accelerate the delivery of a capable exascale computing ecosystem. Launched in 2016, the ECP’s mission is to pave the way for deploying the nation’s first exascale systems by building an ecosystem encompassing applications, system software, hardware technologies, architectures, and workforce development. Researchers from the ALCF and across Argonne—one of the six ECP core labs—are helping the project achieve its ambitious goals. The laboratory has a strong presence on the ECP leadership team, with several researchers engaged in ECP projects and working groups focused on application development, software development, and hardware technology. In the workforce development space, the ALCF is highly involved in the ECP’s Broadening Participation Initiative and leads its Intro to HPC thrust. Additionally, the ECP provided funds for the annual Argonne Training Program on Extreme-Scale Computing (ATPESC), organized and managed by ALCF staff.
+ +ALCF staff members remain actively involved in several HPC standards and community groups that help drive improvements in the usability and efficiency of scientific computing tools, technologies, and applications. Staff activities include contributions to the Better Scientific Software, C++ Standards Committee, Cray User Group, DAOS Foundation, Energy Efficient High-Performance Computing, HPC User Forum, HPSF High-Performance Software Foundation, Intel eXtreme Performance Users Group, Khronos OpenCL and SYCL Working Groups, LDMS User Group, UXL Foundation, MLCommons (HPC, Science, and Storage Working Groups), NITRD Middleware and Grid Infrastructure Team, OCHAMI, Open Fabrics Alliance, OpenMP Architecture Review Board, OpenMP Language Committee, OSTI ORCiD Consortium Membership, Open Scalable File Systems (OpenSFS) Board, and SPEC High-Performance Group.
+ +The ALCF continued its collaboration with NERSC and OLCF to operate and maintain a website dedicated to enabling performance portability across the DOE Office of Science HPC facilities. The website serves as a documentation hub and guide for applications teams targeting systems at multiple computing facilities. The DOE computing facilities staff also collaborate on various projects and training events to maximize the portability of scientific applications on diverse supercomputer architectures.
+ +The ALCF works closely with many companies in the HPC and AI industries to develop and deploy cutting-edge hardware and software for the research community. This includes collaborating with Intel and HPE to deliver the Aurora exascale system, working with HPE to deploy the Polaris testbed supercomputer, and partnering with NVIDIA on system enhancements and training related to ThetaGPU. Such partnerships are critical to ensuring the facility’s supercomputing resources meet the requirements of the scientific computing community. In addition, the ALCF is working with several AI start-up companies, including Cerebras, Graphcore, Groq, and SambaNova, to deploy a diverse set of AI accelerators as part of the ALCF AI Testbed. The testbed, which opened up to the broader research community in 2022, is playing a key role in determining how AI accelerators can be applied to scientific research, while also allowing vendors to prepare their software and hardware for scientific AI workloads.
+ + +The ALCF’s Industry Partnerships Program is designed to expand the facility’s community of industry users by engaging with companies of all sizes, from startups to Fortune 500 corporations, that could benefit from ALCF computing systems and expertise.
+ +With state-of-the-art simulation, data analysis, and machine learning capabilities, ALCF supercomputing and AI resources enable companies to tackle R&D challenges that are too computationally demanding for traditional computing clusters. Access to ALCF systems allows industry researchers to create higher-fidelity models, achieve more accurate predictions, and quickly analyze massive amounts of data. The results enable companies to accelerate critical breakthroughs, quantify uncertainties, and reduce the need to build costly prototypes.
+ +The ALCF has strengthened its industry outreach efforts through collaborations with other Argonne user facilities and divisions, including the Science and Technology Partnership Outreach (STPO) division. This approach has offered a more comprehensive understanding of the laboratory’s resources, resulting in increased engagement with a number of companies.
+ +Furthermore, the ALCF continued to play an active role in guiding the DOE Exascale Computing Project’s Industry and Agency Council, an advisory group comprising senior executives from leading U.S. companies and government agencies. This collaboration focused on deploying exascale computing to enhance products and services, fostering innovation and advancement in various industries.
+ +Ultimately, ALCF-industry collaborations help to stimulate technological and engineering advances while bolstering the nation’s innovation infrastructure.
+ +Here are some examples of how ALCF resources are helping companies to advance their R&D efforts.
+ +As part of DOE’s HPC for Energy Innovation (HPC4EI) program, researchers from 3M are working with Argonne to leverage ALCF supercomputers and AI to improve the energy efficiency of a manufacturing process used to produce melt-blown nonwoven materials. This extremely energy-intensive process is widely used by 3M to produce filters, fabrics, and insulation materials, as well as the N95 masks used for protection during the COVID-19 pandemic. By using ALCF computing resources to pair computational fluid dynamics simulations with machine learning techniques, the Argonne-3M collaboration is working to reduce energy consumption by 20 percent, which would save the industry nearly 50 gigawatt-hours per year, without compromising material quality. To learn more about this collaboration, read this article from HPE.
+ +The ComEd energy company is partnering with Argonne to understand and prepare for the impacts of climate change. The team is using ALCF resources to dynamically downscale global climate models, providing projections and analysis for more localized areas. Their work is providing an understanding of how climate change may affect ComEd’s distribution grid and highlights the need for strategies that adapt to future climate conditions.
+ +Dow Chemical is working with Argonne on an HPC4EI project aimed at optimizing the efficiency of gas-liquid turbulent jet mixers used in chemical manufacturing. The team is using machine learning (ML) techniques in conjunction with computational fluid dynamics simulations to speed up and improve design optimization for its advanced mixing equipment. Ultimately, the team’s work will lead to an efficient framework combining ML and HPC for optimizing process equipment, and will provide a demonstration case for ML approaches to enable wider adoption across the chemical industry.
+ + + +With support from DOE’s HPC4EI program, researchers from Raytheon Technologies Research Center are working with Argonne to develop reduced-order deep learning surrogate models to capture the impact of manufacturing uncertainties on the performance of film cooling schemes used for thermal management of aviation gas turbines. Reliable film cooling drives durability and thermal efficiency in gas turbine engines, but is highly sensitive to variations in the shape of cooling holes caused by surface roughness induced by the manufacturing process. To this end, the team leveraged ALCF supercomputers and Argonne’s highly scalable nekRS solver to perform morphology-resolved computational fluid dynamics simulations of gas turbine film cooling schemes incorporating surface roughness effects. High-fidelity datasets from these simulations will be combined with data from coarse-grained simulations to develop multi-fidelity deep learning surrogate models that predict the impact of surface roughness on film cooling effectiveness. The team’s framework aims to help the company improve the fuel efficiency and durability of aircraft engines while reducing design times and costs. To learn more about this collaboration, read this article from HPE.
+ +Researchers from Solar Turbines Inc. are partnering with Argonne on an HPC4EI project aimed at modeling cost-effective carbon capture technologies for industrial gas turbines used for power generation, marine propulsion, and oil and gas production. With the goal of reducing CO2 emissions, the team is using high-fidelity large eddy simulation-based modeling to optimize the performance of a novel carbon capture system on Solar Turbines’ industrial gas turbines. Through this project, the company aims to shave months or even years off the product testing and development process, helping to accelerate the time-to-adoption of this promising new technology.
+ +As part of the ECP’s Industry and Agency Council, TAE Technologies continues to explore how DOE supercomputers can help accelerate its experimental research program, which is aimed at developing a commercially viable fusion-based electricity generator. Using ECP software tools, the TAE team successfully demonstrated compute capability for two of its HPC codes, showing strong scaling results on the ALCF’s Theta supercomputer. With past INCITE awards, TAE researchers performed simulations on ALCF computing resources to better understand the microscale and macroscale kinetic plasma physics in an advanced field-reversed configuration plasma device, providing insights to help inform the design of a future prototype reactor.
+ + +Starting in July 2023, the ALCF hosted a series of training workshops that introduced researchers to the novel AI accelerators deployed in the ALCF AI Testbed. The four workshops walked participants through the architecture and software of the SambaNova DataScale SN30 system, the Cerebras CS-2 system, the Graphcore Bow Pod system, and the GroqRack system.
+ +Held in October at Argonne, the ALCF Hands-on HPC Workshop is designed to help attendees boost application performance on ALCF systems. The three-day workshop provided hands-on time on Polaris and the ALCF AI Testbed, focusing on porting applications to heterogeneous architectures (CPU + GPU), improving code performance, and exploring AI/ML application development on ALCF systems. For a recap of the 2023 event, read the article on our website.
+ +In April and May, the ALCF partnered with NVIDIA to host its third GPU Hackathon, a hybrid event designed to help developers accelerate their codes on ALCF resources and prepare for the INCITE call for proposals. The multi-day hackathon gave attendees access to the ALCF’s Polaris system. A total of 12 teams participated this year, exploring a vast array of topics including weather research and forecasting models, colon cancer research, and methods to reconstruct large biomolecular structures. For a recap of the 2023 event, read the article on our website.
+ + + +The annual Argonne Training Program on Extreme-Scale Computing (ATPESC) marked its 11th year in 2023. The two-week event offers training on key skills, approaches, and tools needed to design, implement, and execute computational science and engineering applications on high-end computing systems, including exascale supercomputers. Organized by ALCF staff and funded by the ECP, ATPESC has a core curriculum that covers computer architectures; programming methodologies; data-intensive computing and I/O; numerical algorithms and mathematical software; performance and debugging tools; software productivity; data analysis and visualization; and machine learning and data science. More than 70 graduate students, postdocs, and career professionals in computational science and engineering attended this year’s program. ATPESC has now hosted 768 participants since it began in 2013.
+ +The Intel Center of Excellence (COE), in collaboration with ALCF’s Early Science Program, held multiday events where select ESP and ECP project teams worked on developing, porting, and profiling their codes on Sunspot with help from Intel and Argonne experts. The events were geared toward developers and emphasized using the Intel software development kit to get applications running on testbed hardware. Teams were also given the opportunity to consult with ALCF staff and provide feedback. ALCF staff also held dedicated office hours on a range of topics from programming models to profiling tools.
+ +The ALCF, in collaboration with Intel Software, continued hosting its Aurora Learning Paths series, with three separate series running in 2023. The series covered migrating from CUDA to SYCL, accelerating Python loops with the Intel AI Analytics Toolkit, and GPU optimization using SYCL.
+ +In 2023, the ALCF, OLCF, NERSC, and ECP continued their collaboration with the Interoperable Design of Extreme-Scale Application Software (IDEAS) project to deliver a series of webinars—Best Practices for HPC Software Developers—to help users of HPC systems carry out their software development more productively. Webinar topics included writing clean scientific software, infrastructure for high-fidelity testing in HPC facilities, simplifying scientific Python package installation, and taking HACC into the exascale era.
+ + + +In April, the ALCF, OLCF, and NERSC hosted a training event on workflows and workflow tools across the DOE. Through a half-day Zoom event, attendees learned how to choose the right workflow tools and got answers to their questions about running workflows on supercomputers. The event included hands-on examples of GNU Parallel, Parsl, FireWorks, and Balsam, all of which can be used at ALCF, NERSC, and OLCF.
+ +The ALCF Getting Started Bootcamp introduced attendees to the Polaris computing environment. Aimed at participants who have experience using clusters or supercomputers but are new to ALCF systems, the bootcamp covered the PBS job scheduler, using preinstalled environments, proper compiler and profiler use, Python environments, and running Jupyter notebooks. The webinar showed attendees where these tools are located and how to use them effectively.
+ +In spring, the INCITE program, ALCF, and the Oak Ridge Leadership Computing Facility (OLCF) jointly hosted two webinars on effective strategies for writing an INCITE proposal.
+ +The ALCF continued to host monthly webinars consisting of two tracks: ALCF Developer Sessions and the Aurora Early Adopters Series. ALCF Developer Sessions are aimed at training researchers and increasing the dialogue between HPC users and the developers of leadership-class systems and software. Speakers in the series included developers from NVIDIA and Argonne, covering topics such as getting started on Aurora, computing with ALCF JupyterHub, and preparing XGC and HACC to run on Aurora. The Aurora Early Adopters Series is designed to introduce researchers to programming models, exascale technologies, and other tools available for testing and development work. Topics included optimizing SYCL workloads for Aurora, the CUDA to SYCL migration tool, and how to apply key Intel architectural innovations via smart application of NumPy, SciPy, and Pandas techniques to achieve performance gains.
+ + + +ALCF Leadership: Michael E. Papka (Division Director), Bill Allcock (Director of Operations), Susan Coghlan (ALCF-X Project Director), Kalyan Kumaran (Director of Technology), Jini Ramprakash (Deputy Division Director), and Katherine Riley (Director of Science)
+ +Editorial Team: Beth Cerny, Jim Collins, Nils Heinonen, Logan Ludwig, and Laura Wolf
+ +Design and Production: Sandbox Studio, Chicago
+ + +This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor UChicago Argonne, LLC, nor any of their employees or officers, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of document authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof, Argonne National Laboratory, or UChicago Argonne, LLC.
+ + +The ALCF’s unique combination of supercomputing resources and expertise is helping its user community to accelerate the pace of scientific discovery and innovation.
+ +ALCF researchers authored a study that was recognized as the Most Outstanding Paper at the 2023 IWOCL & SYCLcon conference. Led by ALCF’s Thomas Applencourt, the team included ALCF colleagues Kevin Harms, Brice Videau, and Nevin Liber, as well as Bryce Allen of Argonne/University of Chicago, Amanda Dufek of NERSC, and Jefferson le Quellec and Aiden Belton-Schure of Codeplay. The team’s paper, “Standardizing complex numbers in SYCL,” provides an open-source library of complex numbers and associated math functions that can be used in computations carried out with SYCL, a key programming framework for next-generation supercomputers, including the ALCF’s Aurora exascale system.
+ + + +ALCF Director Michael Papka received the Distinguished Performance Award at the Argonne Board of Governors’ awards ceremony in September. Papka was recognized for his numerous exemplary contributions to Argonne, including his efforts to help build world-class scientific computing capabilities for open science.
+ +As part of Argonne National Laboratory’s 2023 Commercialization Excellence Awards, ALCF researchers Venkatram Vishwanath, Michael Papka, William Allcock, and Filippo Simini were recognized with a Delivering Impact Award for their collaboration with SWIFT, a global provider of financial messaging services. The collaborative effort leveraged ALCF’s supercomputing and AI resources to explore SWIFT’s synthetic transactional data streams and identify patterns, including any anomalous activity. The pattern-detection methods developed by the ALCF team provided SWIFT with the capabilities to respond to anomalies more rapidly, thereby strengthening financial infrastructure.
+ +ALCF’s Venkat Vishwanath was part of a multi-institutional team that won a Best Paper Award at the 19th IEEE International Conference on eScience. The team, which included Romain Egele of Argonne, Prasanna Balaprakash of Oak Ridge National Laboratory, and Isabelle Guyon of Google, was recognized for “Asynchronous Decentralized Bayesian Optimization for Large Scale Hyperparameter Optimization.” The researchers used the ALCF’s Polaris supercomputer to aid in the development of a new, overhead-reducing approach to Bayesian optimization — a technique for optimizing the hyperparameters of deep neural networks.
+ + + +An ALCF team won first place and best workflow at the Institute of Electrical and Electronics Engineers’ (IEEE) 2023 SciVis Contest for their development of a multi-platform scientific visualization application for analyzing data from brain plasticity simulations. Led by Tommy Marrinan of the ALCF and the University of St. Thomas, the team included ALCF colleagues Victor Mateevitsi and Michael Papka, and University of St. Thomas students Madeleine Moeller and Alina Kanayinkal. Their paper, “VisAnywhere: Developing Multi-platform Scientific Visualization Applications,” demonstrated how a single codebase can be adapted to develop visualization applications that run on a variety of display technologies, including mobile devices, laptops, high-resolution display walls, and virtual reality headsets.
+ +Riccardo Balin, a postdoctoral researcher at the ALCF, received a Postdoctoral Performance Award at Argonne National Laboratory’s Postdoctoral Research and Career Symposium in November. Balin was recognized for his innovative work in coupling high-fidelity aerodynamic flow simulations with machine learning to improve modeling capabilities for aerospace engineering.
+ + + +An ALCF-led team was recognized with the Best Paper Award at the 2023 In Situ Infrastructures for Enabling Extreme-scale Analysis and Visualization (ISAV) Workshop at the SC23 conference. Led by ALCF’s Victor Mateevitsi, the team included ALCF colleagues Joseph Insley, Michael Papka, Saumil Patel, and Silvio Rizzi; Nicola Ferrier, Paul Fischer, Yu-Hsiang Lan, and Misun Min of Argonne; and Mathis Bode, Jens Henrik Göbbert, and Jonathan Windgassen of the Jülich Supercomputing Centre (JSC) in Germany. Their paper, “Scaling Computational Fluid Dynamics: In Situ Visualization of NekRS using SENSEI,” detailed efforts to equip the NekRS computational fluid dynamics code with in situ analysis and visualization capabilities.
+ +ALCF’s Benoit Côté was part of an Argonne team awarded Best Paper at the annual Workshop on Extreme-Scale Experiment-in-the-Loop Computing (XLOOP) at the SC23 conference. The team, which also included Argonne’s Michael Prince, Doğa Gürsoy, Dina Sheyfer, Ryan Chard, Hannah Paraga, Barbara Frosik, Jon Tischler and Nicholas Schwarz, was recognized for their paper, “Demonstrating Cross-Facility Data Processing at Scale With Laue Microdiffraction.” Using a fully automated pipeline between the ALCF and Argonne’s Advanced Photon Source, the team leveraged ALCF’s Polaris supercomputer to reconstruct data obtained from a Laue microdiffraction experiment, returning reconstructed scans to the APS within 15 minutes.
+ +ALCF Director Michael Papka was named a 2023 Distinguished Member of the Association for Computing Machinery (ACM). Papka was recognized for his contributions in virtual reality, collaborative environments, scientific visualization, as well as research and operations in high-performance computing.
+ + +Kyle Felker’s first experiences with the ALCF came through a formative undergraduate internship in Argonne’s Mathematics and Computer Science division from 2010 to 2013, where he worked with Intrepid, the facility’s IBM Blue Gene/P supercomputer. The experience spurred him to pursue a career in computational science spanning multiple scientific domains, all tied together by their use of high-performance computing.
+ +Kyle returned to the ALCF in 2019 as a postdoctoral appointee. In 2021, he joined the lab’s Computational Science division as a catalyst for INCITE projects involving astrophysical simulations, AI, and/or plasma physics. His current research focus is on preparing machine learning models for fusion energy to be trained at scale on Aurora via the Early Science Program. On a day-to-day basis, Kyle places additional attention on maintaining software and documentation for the ALCF user community.
+ +In 2023, Kyle was a part of two teams that received Impact Argonne awards: for helping organize the AI-driven Science on Supercomputers training series for students, and for supporting the winning team of the SC22 Gordon Bell Special Prize by building and testing scalable machine learning software on the Polaris supercomputer.
+ +In addition, Kyle joined the INCITE Computational Readiness committee in 2023. In this role, he helps organize the technical evaluation processes for proposals seeking compute time on the largest ALCF and OLCF resources. Kyle also began serving as an Argonne lab practicum coordinator for the DOE’s Computational Sciences Graduate Fellowship. For the past two years, he has served on the ALCF Software Committee and the Supercomputing ML & HPC program committee.
+ +As the ALCF education outreach lead, Paige Kinsley works with staff to develop and lead the facility’s training and education activities. In this role, she is also the central point for all education, outreach, and training activities for the ALCF. Paige is committed to engaging underserved populations in STEM and increasing access to resources and knowledge about the power of science and high-performance computing to change the world.
+ +In 2023, she co-led a team of 20 organizers and trainers from national labs, academia, and a non-profit to run the first pilot of the Introduction to HPC Bootcamp. As a team, they developed engaging and inclusive materials for a diverse set of students. Based on student feedback, the bootcamp increased their interest in HPC and national lab careers, and a number of them have applied for DOE internship programs.
+ +Paige also served as a founding member of the Women in HPC Chicago Chapter, in collaboration with Argonne and UIC staff. The Chicago WHPC chapter provides a network and resources for women and underrepresented people in HPC to make connections, develop support networks, and find opportunities in the field.
+ +She also supported the continued development of tours at ALCF, with the goal of making Argonne and the ALCF more accessible to the future workforce. As part of her work to develop an inclusive culture at Argonne, in HPC, and beyond, she developed and led workshops on best practices in communications for Argonne staff.
+ + + +Janet Knowles joined the ALCF Visualization and Data Analytics team as a principal software engineering specialist in 2014 after coming to the facility the previous year to work on special projects. Janet’s work has focused primarily on producing scientific visualizations with an emphasis on expanding the ALCF’s repertoire of high-quality rendering techniques. Her research interests include user interface design and interaction, immersive environments, 3D modeling, computer animation, and scientific visualization.
+ +In previous years, Janet successfully spearheaded an effort to expand the ALCF visualization software toolset to include SideFX Houdini, a 3D procedural animation package. While Houdini is commonly used for visual effects in the movie industry and, increasingly, AAA game development, it is just beginning to be leveraged for scientific visualization. In 2023, Janet continued her exploration of Houdini, taking advantage of its procedural model to support INCITE projects such as IonTransES, which was featured as a DOE scientific highlight. She always enjoys contributing to the lab’s outreach programming, and in 2023 she participated in Science Careers in Search of Women, Hour of Code, and the Argonne Open House.
+ +Janet enjoys running marathons and ultramarathons, skiing, playing electric guitar (occasionally with her colleague Silvio Rizzi on keys), cooking and baking, and playing video games.
+ +Sean Koyama joined the ALCF in 2022 as an HPC systems administration specialist. He is responsible for integrating scientific software stacks into the computing environments on ALCF supercomputers. His daily duties include the installation, testing, and support of crucial applications for scientific research. A significant focus of his work is leveraging the Spack HPC package manager to create automated frameworks for building site-customized programming environments.
+ +In 2023, Sean deployed the scientific software stacks on Aurora and Aurora’s test and development system, Sunspot. He prioritized the installation of critical scientific applications on each supercomputer, enabling early science users to begin collecting data and optimizing applications. He also began a collaborative effort to build up an integrated programming environment on Aurora that is owned and configured by the ALCF. This approach is designed to be portable to other supercomputers such as Polaris, and its implementation will lead to a more uniform experience for users and more readily maintainable software environments.
+ + + +Marieme Ngom joined the ALCF in May 2023 as an assistant computer scientist on the Data Science team. Since joining the ALCF, Marieme’s research has focused on implementing and testing scalable Gaussian process (GP) models on ALCF machines for applications including materials science. In addition, Marieme has been working on assessing and ensuring the diversity, stability, and robustness of Argonne’s ML models.
+ +In parallel, Marieme helped organize the ALCF Hands-on HPC Workshop and gave a tutorial on deep learning at the 2023 Argonne Training Program on Extreme-Scale Computing. Marieme also co-leads the CELS Technical Women group and has co-organized various meetups and activities geared toward CELS technical women.
+ +Filippo Simini joined ALCF in 2020 as a computer scientist in the Data Science team. His work focuses on helping develop, run, and evaluate HPC applications that include machine learning and artificial intelligence components, often combined with traditional science and engineering simulations. He served as a point of contact for Early Science Program (ESP) projects targeting the ALCF’s exascale system, Aurora, and collaborated with the ESP research teams to enable complex workflows combining simulation and learning components. Filippo is also helping to deploy, benchmark, and scale Graph Neural Network (GNN) models on ALCF resources and AI Testbed systems: GNN models are relevant to many scientific applications in which the system of interest can be represented as a (possibly time-varying) graph describing the binary interactions between the system’s components.
+ +Filippo’s interests include generative modeling, privacy-preserving AI, and anomaly detection. These research themes are at the core of an ongoing collaboration between the ALCF, Swift (Society for Worldwide Interbank Financial Telecommunication), and Kove, whose goal is to identify anomalous and fraudulent patterns in Swift’s data streams of financial transactions. The project was selected as a winner in the Delivering Impact category of Argonne’s 2023 Commercialization Excellence Awards. Filippo also served as a team mentor at INCITE Hackathons and co-organized, presented, and provided support at various ALCF training events.
+ + + +ALCF supercomputing resources support large-scale, computationally intensive projects aimed at solving some of the world’s most complex and challenging scientific problems.
+
+| System Name | Purpose | Architecture | Peak Performance | Processors per Node | GPUs per Node | Nodes | Cores | Memory | Interconnect | Racks |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Polaris | Science Campaigns | HPE Apollo 6500 Gen10+ | 25 PF; 44 PF (Tensor Core double precision) | 1 3rd Gen AMD EPYC | 4 NVIDIA A100 Tensor Core | 560 | 17,920 | 280 TB (DDR4); 87.5 TB (HBM) | HPE Slingshot 10 with Dragonfly configuration | 40 |
+| Theta: KNL Nodes | Science Campaigns | Intel-Cray XC40 | 11.7 PF | 1 64-core, 1.3-GHz Intel Xeon Phi 7230 | – | 4,392 | 281,088 | 843 TB (DDR4); 70 TB (HBM) | Aries network with Dragonfly configuration | 24 |
+| Theta: GPU Nodes | Science Campaigns | NVIDIA DGX A100 | 3.9 PF | 2 AMD EPYC 7742 | 8 NVIDIA A100 Tensor Core | 24 | 3,072 | 26 TB (DDR4); 8.32 TB (GPU) | NVIDIA QM8700 InfiniBand | 7 |
+| Cooley | Data Analysis and Visualization | Intel Haswell | 293 TF | 2 6-core, 2.4-GHz Intel E5-2620 | 1 NVIDIA Tesla K80 | 126 | 1,512 | 47 TB (DDR4); 3 TB (GDDR5) | FDR InfiniBand | 6 |
The ALCF AI Testbed provides an infrastructure of next-generation AI-accelerator machines that allows researchers to evaluate the usability and performance of machine learning-based applications running on the systems. The AI Testbed systems include:
+
+| System Name | System Size | Compute Units per Accelerator | Estimated Performance of a Single Accelerator (TFLOPS) | Software Stack Support | Interconnect |
+| --- | --- | --- | --- | --- | --- |
+| Cerebras CS-2 | 2 nodes (each with a Wafer-Scale Engine), including MemoryX and SwarmX | 850,000 cores | >5,780 (FP16) | Cerebras SDK, TensorFlow, PyTorch | Ethernet-based |
+| SambaNova Cardinal SN30 | 64 accelerators (8 nodes with 8 accelerators per node) | 1,280 programmable compute units | >660 (BF16) | SambaFlow, PyTorch | Ethernet-based |
+| GroqRack | 72 accelerators (9 nodes with 8 accelerators per node) | 5,120 vector ALUs | >188 (FP16); >750 (INT8) | GroqWare SDK, ONNX | RealScale |
+| Graphcore Bow Pod-64 | 64 accelerators (4 nodes with 16 accelerators per node) | 1,472 independent processing units | >250 (FP16) | PopART, TensorFlow, PyTorch, ONNX | IPU Link |
+| Habana Gaudi | 16 accelerators (2 nodes with 8 accelerators per node) | 8 TPC + GEMM engine | >150 (FP16) | SynapseAI, TensorFlow, PyTorch | Ethernet-based |
ALCF disk storage systems provide intermediate-term storage for users to access, analyze, and share computational and experimental data. Tape storage is used to archive data from completed projects.
+
+| System Name | File System | Storage System | Usable Capacity | Sustained Data Transfer Rate | Disk Drives |
+| --- | --- | --- | --- | --- | --- |
+| Eagle | Lustre | HPE ClusterStor E1000 | 100 PB | 650 GB/s | 8,480 |
+| Grand | Lustre | HPE ClusterStor E1000 | 100 PB | 650 GB/s | 8,480 |
+| Theta-FSO | Lustre | HPE Sonexion L300 | 9 PB | 240 GB/s | 2,300 |
+| Swift | Lustre | All-NVMe flash storage array | 123 TB | 48 GB/s | 24 |
+| Tape Storage | – | LTO-6 and LTO-8 tape technology | 300 PB | – | – |
InfiniBand enables communication between system I/O nodes and the ALCF’s various storage systems. The Production HPC SAN is built upon NVIDIA Mellanox High Data Rate (HDR) InfiniBand hardware. Two 800-port core switches provide the backbone links between 80 edge switches, yielding 1600 total available host ports, each at 200 Gbps, in a non-blocking fat-tree topology. The full bisection bandwidth of this fabric is 320 Tbps. The HPC SAN is maintained by the NVIDIA Mellanox Unified Fabric Manager (UFM), providing Adaptive Routing to avoid congestion, as well as the NVIDIA Mellanox Self-Healing Interconnect Enhancement for InteLligent Datacenters (SHIELD) resiliency system for link fault detection and recovery.
+ +When external communications are required, Ethernet is the interconnect of choice. Remote user access, systems maintenance and management, and high-performance data transfers are all enabled by the Local Area Network (LAN) and Wide Area Network (WAN) Ethernet infrastructure. This connectivity is built upon a combination of Extreme Networks SLX and MLXe routers and NVIDIA Mellanox Ethernet switches.
+ +ALCF systems connect to other research institutions over multiple 100 Gbps Ethernet circuits that link to many high performance research networks, including local and regional networks like the Metropolitan Research and Education Network (MREN), as well as national and international networks like the Energy Sciences Network (ESnet) and Internet2.
+ +Through Argonne’s Joint Laboratory for System Evaluation (JLSE), the ALCF provides access to leading-edge testbeds for exploratory research aimed at evaluating future extreme-scale computing systems, technologies, and capabilities. JLSE testbeds include:
+ +The ALCF’s HPC systems administrators manage and support all ALCF computing systems, ensuring users have stable, secure, and highly available resources to pursue their scientific goals. This includes the ALCF’s production supercomputers, AI accelerators, supporting system environments, storage systems, and network infrastructure. The team’s software developers create tools to support the ALCF computing environment, including software for user account and project management, job failure analysis, and job scheduling. User support specialists provide technical assistance to ALCF users and manage the workflows for user accounts and projects. In the business intelligence space, staff data architects assimilate and verify ALCF data to ensure accurate reporting of facility information.
+ +Computational scientists with multidisciplinary domain expertise work directly with ALCF users to maximize and accelerate their research efforts. In addition, the ALCF team applies broad expertise in data science, machine learning, data visualization and analysis, and mathematics to help application teams leverage ALCF resources to pursue data-driven discoveries. With a deep knowledge of the ALCF computing environment and experience with a wide range of numerical methods, programming models, and computational approaches, staff scientists and performance engineers help researchers optimize the performance and productivity of simulation, data, and learning applications on ALCF systems.
+ +The ALCF team plays a key role in designing and validating the facility’s next-generation supercomputers. By collaborating with compute vendors and the performance tools community, staff members ensure the requisite programming models, tools, debuggers, and libraries are available on ALCF platforms. The team also helps manage Argonne’s Joint Laboratory for System Evaluation, which houses next-generation testbeds that enable researchers to explore and prepare for emerging computing technologies. ALCF computer scientists, performance engineers, and software engineers develop and optimize new tools and capabilities to facilitate science on the facility’s current and future computing resources. This includes the deployment of scalable machine learning frameworks, in-situ visualization and analysis capabilities, data management services, workflow packages, and container technologies. In addition, the ALCF team is actively involved in programming language standardization efforts and contributes to cross-platform libraries to further enable the portability of HPC applications.
+ +ALCF staff members organize and participate in training events that prepare researchers for efficient use of leadership computing systems. They also participate in a wide variety of educational activities aimed at cultivating a diverse and skilled HPC community and workforce in the future. In addition, staff outreach efforts include facilitating partnerships with industry and academia, and communicating the impactful research enabled by ALCF resources to external audiences.
+ + +The ALCF is providing supercomputing and AI resources and capabilities to enable pioneering research at the intersection of simulation, big data analysis, and machine learning.
+ +The ALCF’s testbed of AI accelerators is enabling the research community to advance the use of AI for data-intensive science.
+ +In 2023, the ALCF AI Testbed expanded its offerings to the research community, with the addition of new Graphcore and Groq systems as well as upgraded Cerebras and SambaNova machines.
+ +The testbed is a growing collection of some of the world’s most advanced AI accelerators available for open science. Designed to enable researchers to explore next-generation machine learning applications and workloads to advance AI for science, the systems are also helping the facility to gain a better understanding of how novel AI technologies can be integrated with traditional supercomputing systems powered by CPUs and GPUs.
+ +The testbed’s newest additions give the ALCF user community access to new leading-edge platforms for data-intensive research projects.
+ +Together, the ALCF AI Testbed systems provide advanced data analysis capabilities that also support DOE’s efforts to develop an Integrated Research Infrastructure that seamlessly connects advanced computing resources with data-intensive experiments, such as light sources and fusion experiments, to accelerate the pace of discovery.
+ + + + + +Scientists are leveraging the ALCF AI Testbed systems for a wide range of data-driven research campaigns. The following summaries provide a glimpse of some of the efforts that are benefitting from the AI accelerators’ advanced capabilities.
+ +Argonne researchers are leveraging multiple ALCF AI Testbed systems to accelerate and scale deep learning models to aid the analysis of X-ray data obtained at Argonne’s Advanced Photon Source (APS). The team is using the ALCF AI Testbed to train models — too large to run on a single GPU — to generate improved 3D images from X-ray data.
+ +They are also exploring the use of the ALCF’s AI platforms for fast-inference applications. Their work has yielded some promising initial results, with various models (PtychoNN, BraggNN, and AutoPhaseNN) showing speedups over traditional supercomputers. ALCF and vendor software teams are collaborating with the APS team to achieve further advances.
+ +Graph neural networks (GNNs) are powerful machine learning tools that can process and learn from data represented as graphs. GNNs are being used for research in several areas, including molecular design, financial data, and social networks. ALCF researchers are working to compare the performance of GNN models across multiple ALCF AI Testbed accelerators. With a focus on inference, the team is examining which GNN-specific operators or kernels become computational bottlenecks that affect overall runtime as parameter counts or batch sizes increase.
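+ +As a rough illustration of this kind of benchmarking (a minimal sketch in plain PyTorch with hypothetical graph sizes, not the team's accelerator-specific code), one can time a single message-passing operator at increasing graph scales to see where the aggregation kernel starts to dominate runtime:
+
+```python
+import time
+import torch
+
+class SimpleGraphConv(torch.nn.Module):
+    """Minimal message-passing layer: aggregate neighbor features, then transform."""
+    def __init__(self, in_dim, out_dim):
+        super().__init__()
+        self.linear = torch.nn.Linear(in_dim, out_dim)
+
+    def forward(self, x, edge_index):
+        src, dst = edge_index            # edges as (source, destination) node indices
+        agg = torch.zeros_like(x)
+        agg.index_add_(0, dst, x[src])   # scatter-add of neighbor features, a common GNN hotspot
+        return self.linear(agg)
+
+def time_inference(num_nodes, num_edges, feat_dim=64, reps=10):
+    x = torch.randn(num_nodes, feat_dim)
+    edge_index = torch.randint(0, num_nodes, (2, num_edges))
+    layer = SimpleGraphConv(feat_dim, feat_dim).eval()
+    with torch.no_grad():
+        layer(x, edge_index)             # warm-up pass
+        start = time.perf_counter()
+        for _ in range(reps):
+            layer(x, edge_index)
+    return (time.perf_counter() - start) / reps
+
+for num_nodes in (10_000, 50_000, 100_000):   # hypothetical graph sizes
+    print(num_nodes, f"{time_inference(num_nodes, 10 * num_nodes):.4f} s per inference")
+```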
+ +An Argonne-led team relied on the ALCF AI Testbed when using LLMs to discover SARS-CoV-2 variants. Their workflow leveraged AI accelerators alongside GPU-accelerated systems including the ALCF’s Polaris supercomputer. One of the critical problems the team had to overcome was how to manage extensive genomic sequences, the size of which can overwhelm many computing systems when establishing foundation models. The learning-optimized architecture of the ALCF AI Testbed systems was key for accelerating the training process. The team’s research resulted in the 2022 Gordon Bell Award Special Prize for COVID-19 Research.
+ +Argonne scientists are leveraging the ALCF AI Testbed to aid in the development of an application that combines two types of computations for research into potential battery materials: (1) running physics-based simulations of molecules undergoing redox reactions to compute their energies, and (2) training a machine learning model that predicts those energies. The application uses the machine learning model to predict the outcomes of the redox simulations, helping to identify molecules with the desired capacity for energy storage. The ALCF AI Testbed has reduced the latency of cycling between running a new calculation that yields additional training data and using the updated model to select the next calculation.
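+ +The pattern behind this coupling is a simple active learning loop; the sketch below is a schematic illustration with placeholder functions and a made-up candidate pool, not the project's actual code:
+
+```python
+import random
+
+def run_redox_simulation(molecule):
+    """Placeholder for an expensive physics-based redox energy calculation."""
+    return (sum(ord(c) for c in molecule) % 100) / 10.0       # stand-in numeric result
+
+def train_surrogate(training_data):
+    """Placeholder for (re)training the ML surrogate on (molecule, energy) pairs."""
+    mean = sum(energy for _, energy in training_data) / len(training_data)
+    return lambda molecule: mean + random.uniform(-1.0, 1.0)  # stand-in predictor
+
+candidates = [f"MOL-{i}" for i in range(1000)]                # hypothetical candidate pool
+training_data = [(m, run_redox_simulation(m)) for m in candidates[:10]]
+
+for step in range(20):
+    model = train_surrogate(training_data)                    # retrain on all data gathered so far
+    already_run = {m for m, _ in training_data}
+    ranked = sorted((m for m in candidates if m not in already_run), key=model, reverse=True)
+    next_molecule = ranked[0]                                  # pick the most promising candidate
+    energy = run_redox_simulation(next_molecule)               # expensive step: yields new training data
+    training_data.append((next_molecule, energy))
+```
+
+ +Because every iteration alternates between training/inference and a fresh calculation, cutting the latency of the learning steps directly shortens the whole campaign.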
+ +Now that Aurora is fully assembled, ECP and ESP team members are beginning to transition their work to the supercomputer to ready their applications for full system runs. Here are some early performance results on Aurora.
+ +The structure of the human brain is enormously complex and not well understood. Its 80 billion neurons, each connected to as many as 10,000 other neurons, support activities from sustaining vital life processes to defining who we are. From high-resolution electron microscopy images of brain tissue, computer vision and machine learning techniques operating at the exascale can reveal the morphology and connectivity of neurons in brain tissue samples, informing future studies of the structure and function of mammalian brains.
+ +Connectomics stresses many boundaries: high-throughput electron microscopy technology operating at nanometer resolution; tens of thousands of images, each with tens of gigapixels; accuracy sufficient to capture minuscule synaptic detail; computer vision methods to align corresponding structures across large images; and deep learning networks that can trace narrow axons and dendrites over large distances. Multiple applications contribute to the 3D reconstruction of neurons; the most demanding of them perform image alignment and segmentation.
+ +Before the 3D shape of neurons can be reconstructed, the 2D profiles of objects must be aligned between neighboring images in an image stack. Image misalignment can occur when tissue samples are cut into thin sections, or during imaging on the electron microscope. The Feabas application (developed by collaborators at Harvard) uses template matching and feature matching for coarse and fine-grained alignment, along with a network-of-springs approach that produces optimal linear and local nonlinear transformations to align the 2D image content between sections.
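+ +As a toy stand-in for the coarse-alignment step (this is not Feabas, just a minimal NumPy phase-correlation example), the dominant translation between two neighboring sections can be estimated as follows:
+
+```python
+import numpy as np
+
+def estimate_shift(section_a, section_b):
+    """Estimate the (row, col) displacement of section_b relative to section_a
+    using FFT-based phase correlation, a simplified form of template matching."""
+    cross_power = np.fft.fft2(section_b) * np.conj(np.fft.fft2(section_a))
+    cross_power /= np.abs(cross_power) + 1e-12                 # keep phase information only
+    correlation = np.fft.ifft2(cross_power).real
+    peak = np.array(np.unravel_index(np.argmax(correlation), correlation.shape))
+    shape = np.array(section_a.shape)
+    peak[peak > shape // 2] -= shape[peak > shape // 2]        # wrap to signed offsets
+    return tuple(int(p) for p in peak)
+
+# Toy example: the second "section" is the first one shifted by (12, -7) pixels.
+rng = np.random.default_rng(0)
+section_a = rng.random((512, 512))
+section_b = np.roll(section_a, shift=(12, -7), axis=(0, 1))
+print(estimate_shift(section_a, section_b))                    # expected: (12, -7)
+```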
+ +Deep learning models for connectomic reconstruction have been trained on Aurora on up to 512 nodes, demonstrating performance increases of up to 40 percent.
+ +Reconstructions have been run with these models on up to 1024 nodes on Aurora, with multiple inference processes per GPU, to produce a segmentation of a teravoxel of data. Projecting from these runs to the full machine, the researchers anticipate being able to segment a petavoxel dataset on Aurora imminently.
+ + + +Connectomics today is leveraging innovations in imaging, supercomputing, and artificial intelligence to improve our understanding of how the brain’s neurons are arranged and connected, and exascale computing on Aurora is making this possible. The techniques being developed are designed to scale from the cubic millimeters of brain tissue studied today to a cubic centimeter (a whole mouse brain) in the future, and on to larger volumes of human brain tissue. As imaging technology advances, computing will need to achieve high performance on post-exascale machines to avoid becoming the bottleneck.
+ +The work done to prepare this project for exascale will also benefit other exascale system users: the electron microscopy algorithms under development, for example, promise broad application to X-ray data, especially with the upcoming upgrade of Argonne’s Advanced Photon Source, a DOE Office of Science user facility.
+ +CosmicTagger is a deep learning-based computer vision model built on high-resolution imaging data and corresponding segmentation labels from a high-energy neutrino physics experiment. Researchers in high-energy particle physics use it to distinguish neutrino interactions from other cosmic particles and background noise. A key benchmark for high-performance computing systems, CosmicTagger runs in both PyTorch and TensorFlow on multiple systems representing a variety of architectures.
+ +The CosmicTagger project deals with the detection of neutrino interactions in a detector overwhelmed by cosmic particles. The goal is to differentiate and classify each pixel to separate cosmic pixels, background pixels, and neutrino pixels in a neutrino dataset. The technique uses multiple 2D projections of the same image, with each event generating three images of raw data. The training model uses a UResNet architecture for multi-plane semantic segmentation and is available in both PyTorch and TensorFlow with single-node and distributed-memory multi-node implementations.
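+ +The distributed training setup described above follows the standard Horovod-plus-PyTorch data-parallel pattern. The skeleton below illustrates that pattern with a placeholder segmentation model and random data; it is not the CosmicTagger code, and the model, dataset, and hyperparameters are stand-ins:
+
+```python
+import torch
+import horovod.torch as hvd
+
+hvd.init()                                                    # one process per GPU or accelerator tile
+device = torch.device(f"cuda:{hvd.local_rank()}" if torch.cuda.is_available() else "cpu")
+
+# Placeholder stand-ins for the UResNet segmentation model and the detector-image dataset.
+model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1).to(device)
+dataset = torch.utils.data.TensorDataset(
+    torch.randn(64, 3, 256, 256), torch.randint(0, 3, (64, 256, 256)))
+
+# Shard the dataset so each rank trains on a distinct subset of samples.
+sampler = torch.utils.data.distributed.DistributedSampler(
+    dataset, num_replicas=hvd.size(), rank=hvd.rank())
+loader = torch.utils.data.DataLoader(dataset, batch_size=4, sampler=sampler)
+
+optimizer = torch.optim.Adam(model.parameters(), lr=1e-3 * hvd.size())   # scale LR with worker count
+optimizer = hvd.DistributedOptimizer(optimizer, named_parameters=model.named_parameters())
+hvd.broadcast_parameters(model.state_dict(), root_rank=0)     # all ranks start from identical weights
+
+loss_fn = torch.nn.CrossEntropyLoss()
+for epoch in range(2):
+    sampler.set_epoch(epoch)                                  # reshuffle shards each epoch
+    for images, labels in loader:
+        optimizer.zero_grad()
+        logits = model(images.to(device))                     # per-pixel class scores
+        loss = loss_fn(logits, labels.to(device))
+        loss.backward()                                       # gradients are averaged across ranks
+        optimizer.step()
+```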
+ +Running on Sunspot, the Aurora test and development system, CosmicTagger achieved a node throughput of 280 samples per second, more than five times the throughput of the other systems compared. Running the code on 512 nodes of Aurora achieved 83 percent per-node scaling efficiency, using PyTorch and the distributed deep learning training framework Horovod.
+ + + +Deep learning has enabled state-of-the-art results in high-energy neutrino physics, with this application achieving substantially improved background particle rejection compared to classical techniques. Deploying CosmicTagger on Aurora will enable training and inference on the highest-resolution data with the most scientifically accurate model.
+ +Additionally, the Short Baseline Neutrino Detector, which originated the CosmicTagger application in collaboration with Argonne, is expected to begin operations in 2024. CosmicTagger will be beneficial in aiding the scientific analysis of what is expected to be the biggest, highest-resolution beam neutrino dataset ever collected.
+ +High-throughput screening of extensive compound datasets to identify advantageous properties—such as the ability to interact with relevant biomolecules (including proteins)—represents a promising direction in drug discovery for the treatment of diseases like cancer as well as for response to epidemics like SARS-CoV-2. However, traditional structural approaches for assessing binding affinity, such as free energy methods or molecular docking, pose significant computational bottlenecks when dealing with quantities of data of this magnitude. To address this, researchers have developed a docking surrogate called the SMILES transformer (ST), which learns molecular features from the SMILES (Simplified Molecular Input Line Entry System) representation of compounds and approximates their binding affinity.
+ +SMILES data are first tokenized using a well-established SMILES-pair tokenizer and then fed into a transformer model to generate vector embeddings for each molecule, effectively capturing the essential information. These extracted embeddings are subsequently fed into a regression model to predict the binding affinity.
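+ +A minimal sketch of this embed-then-regress pipeline is shown below; it uses a naive character-level tokenizer in place of the SMILES-pair tokenizer and an untrained toy model, so it illustrates only the data flow, not the actual surrogate:
+
+```python
+import torch
+
+class SmilesRegressor(torch.nn.Module):
+    """Toy SMILES-to-affinity surrogate: token embedding -> transformer encoder ->
+    mean-pooled molecule embedding -> regression head."""
+    def __init__(self, vocab_size, dim=128, heads=4, layers=2):
+        super().__init__()
+        self.embed = torch.nn.Embedding(vocab_size, dim, padding_idx=0)
+        encoder_layer = torch.nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
+        self.encoder = torch.nn.TransformerEncoder(encoder_layer, num_layers=layers)
+        self.head = torch.nn.Linear(dim, 1)                   # predicted binding affinity
+
+    def forward(self, tokens):
+        pad = tokens.eq(0)                                    # ignore padding positions
+        hidden = self.encoder(self.embed(tokens), src_key_padding_mask=pad)
+        pooled = hidden.masked_fill(pad.unsqueeze(-1), 0.0).sum(1) / (~pad).sum(1, keepdim=True)
+        return self.head(pooled).squeeze(-1)
+
+def tokenize(smiles, length=64):
+    """Character-level stand-in for the SMILES-pair tokenizer used by the team."""
+    ids = [min(ord(c), 255) for c in smiles][:length]
+    return torch.tensor(ids + [0] * (length - len(ids)))
+
+smiles_batch = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"]   # ethanol, benzene, aspirin
+tokens = torch.stack([tokenize(s) for s in smiles_batch])
+model = SmilesRegressor(vocab_size=256)
+print(model(tokens).shape)                                    # torch.Size([3]) predicted affinities
+```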
+ +Leveraging ALCF leadership-computing resources, the researchers devised a workflow to scale model training and inference across multiple supercomputer nodes. To evaluate the performance and accuracy of the workflow, the team conducted experiments using molecular docking binding affinity data on multiple receptors, comparing ST with another state-of-the-art docking surrogate.
+ +Drug-screening inference was scaled to 128 nodes on Aurora, screening approximately 11 billion drug molecules per hour, and then to 256 nodes, screening approximately 22 billion drug molecules per hour. These results indicate that Aurora enabled strong performance improvements over other systems: when scaled to 48 nodes on Polaris, the workflow screened some 3 billion compounds per hour. Assuming linear scaling, researchers could expect about a trillion compounds screened per hour using all of Aurora's compute resources.
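As a quick sanity check on that projection, the snippet below extrapolates the measured 256-node rate under the stated linear-scaling assumption, taking the 10,624 blades cited later in this report as the full node count (an assumption for this back-of-the-envelope estimate).

```python
# Back-of-the-envelope check of the linear-scaling projection quoted above.
measured = {128: 11e9, 256: 22e9}          # molecules screened per hour on Aurora
rate_per_node = measured[256] / 256        # roughly 86 million molecules/node/hour
full_system_nodes = 10_624                 # assumed full Aurora node count
projected = rate_per_node * full_system_nodes
print(f"{projected:.2e} molecules/hour")   # ~9.1e11, i.e. on the order of a trillion
```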
+ +ST showed accuracy comparable to state-of-the-art surrogate models, with r-squared values between 0.70 and 0.90 on multiple test protein receptors, affirming the model's capability to learn molecular information directly from language-based data. One significant advantage of the ST approach is its notably faster tokenization preprocessing compared to alternative preprocessing methods such as generating molecular descriptors. Furthermore, ST predictions emphasize several molecular motifs that have previously been confirmed to interact with residues in their target binding pockets.
+ + + +The team’s approach presents an efficient means of screening compound databases that would otherwise be prohibitively large for molecular properties that could prove useful in targeting cancer and other diseases. Aurora’s capabilities will make it possible to screen 40 to 60 billion candidate compounds for potential synthesis. A key future direction for the workflow involves integrating de novo drug design, enabling the researchers to scale their efforts to explore the limits of synthesizable compounds within chemical space.
+ +GAMESS, or General Atomic and Molecular Electronic Structure System, is a general-purpose electronic structure code for computational chemistry. Through computation of a well-defined, representative heterogeneous catalysis problem comprising mesoporous silica nanoparticles, GAMESS has demonstrated the capability to model physical systems whose chemical interactions involve many thousands of atoms, indicating a new ability to model complex chemical processes.
+ +GAMESS is written in Fortran and uses OpenMP to offload code onto graphics processing units (GPUs). The computations are done using the effective fragment molecular orbital (EFMO) framework in conjunction with the resolution-of-the-identity second-order Møller–Plesset perturbation (RI-MP2) method. The project is developing ab initio fragmentation methods to more efficiently tackle problems in computational chemistry, such as heterogeneous catalysis, with the ultimate goal of enabling quantum chemistry to be applied to extremely large systems of interest in catalysis and energy research. Programming models include linear algebra libraries and CUDA, as well as HIP/DPC++ and OpenMP.
+ +To take full advantage of exascale architectures, it is critical that application software be developed that can exploit multiple layers of parallelism and take advantage of emerging low-power architectures that dramatically lower energy and power costs without significant negative impacts on time-to-solution. To attain exascale performance, GAMESS will be refactored in accordance with modern computer hardware and software, thereby greatly expanding the capabilities of the codeveloped C++ libcchem code.
+ +In 2023, the GAMESS team leveraged the Aurora system to perform simulations of silica nanoparticles surrounded by thousands of water molecules, scaling on up to 512 nodes of the system. Results have demonstrated performance some 2.5 times greater than was achieved using other tested architectures.
+ + + +Full-scale utilization of the Aurora system will enable GAMESS users to carry out demanding tasks like computing the energies and reaction pathways of catalysis processes within a large silica nanoparticle.
+ +HACC (Hardware/Hybrid Accelerated Cosmology Code) is a cosmological N-body and hydrodynamics simulation code designed to run at extreme scales on all HPC systems, especially those operated by DOE national laboratories. HACC computes the complicated emergence of structure in the universe across cosmological history; the core of the code’s functionality consists of gravitational calculations, with the more recent addition of gas dynamics and astrophysical subgrid models. The solvers are integrated with a large set of sophisticated analysis methods encapsulated within HACC’s CosmoTools library.
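HACC’s production solvers combine particle-mesh and short-range methods that are heavily optimized for GPUs; purely to illustrate the gravitational N-body core conceptually, the sketch below integrates a toy direct-summation system with a leapfrog scheme in NumPy. The particle count, units (G = 1), and softening length are illustrative assumptions, not HACC’s algorithms or parameters.

```python
import numpy as np

def gravitational_accel(pos, mass, softening=0.05):
    """Direct-summation gravitational acceleration (O(N^2) toy version;
    production codes like HACC use particle-mesh plus short-range methods)."""
    diff = pos[None, :, :] - pos[:, None, :]             # pairwise separation vectors
    dist2 = (diff ** 2).sum(-1) + softening ** 2
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                        # no self-interaction
    return (diff * (mass[None, :, None] * inv_d3[:, :, None])).sum(axis=1)

rng = np.random.default_rng(0)
n = 256
pos = rng.uniform(-1, 1, (n, 3))
vel = np.zeros((n, 3))
mass = np.full(n, 1.0 / n)
dt = 0.01

acc = gravitational_accel(pos, mass)
for _ in range(100):                                     # leapfrog (kick-drift-kick)
    vel += 0.5 * dt * acc
    pos += dt * vel
    acc = gravitational_accel(pos, mass)
    vel += 0.5 * dt * acc

print("mean particle speed after 100 steps:", np.linalg.norm(vel, axis=1).mean())
```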
+ +HACC is structured to remain mostly consistent across different architectures such that it requires only limited changes when ported to new hardware; the inter-nodal level of code—the level of code that communicates between nodes—is nearly invariant from machine to machine. Consequently, the approach taken to porting HACC effectively reduces the problem to the node level, thereby permitting concentration of effort on optimizing critical code components with a full awareness of the actual hardware.
+ +In bringing HACC to exascale, the developers have aimed to evaluate Aurora’s early hardware and software development kit on a set of more than 60 complex kernels primarily written in CUDA or otherwise under active development, minimize divergence between CUDA and SYCL versions of the codebase, identify configurations and implementation optimizations specific to Intel GPUs, and identify more generally applicable implementation optimizations.
+ +Versions of HACC being developed for exascale systems incorporate basic gas physics (hydrodynamics) to enable more detailed studies of structure formation on the scales of galaxy clusters and individual galaxies. These versions also include sub-grid models that integrate phenomena like star formation and supernova and AGN feedback, which means the addition of more performance-critical code sections that also run on GPUs. A GPU implementation of HACC with hydrodynamics was previously developed for the Titan and Summit systems using OpenCL and CUDA. All the GPU versions of the code have been rewritten to target Aurora.
+ +HACC simulations have been performed on Aurora in runs using as many as 1920 nodes. Visualizations of results generated on Aurora illustrate the large-scale structure of the universe. Single-GPU performance on Aurora exceeds that of compared systems: Figure-of-Merit assessments measuring particle-steps per second used 33 million particles per GPU and saw performance increases ranging from 15 to 50 percent.
+ + + +Modern cosmology provides a unique window to fundamental physics and has led to remarkable discoveries culminating in a highly successful model for the dynamics of the universe. Simulations and predictions enabled by the HACC code deployed at exascale will help deepen our understanding of the structure of the universe and its underlying physics. Furthermore, new generations of cosmological instruments, such as the Vera C. Rubin Observatory, will depend on exascale systems to interpret their measurements; exascale cosmological simulations developed through HACC will enable researchers to simultaneously analyze observational data from state-of-the-art telescopes to test different theories of cosmological evolution.
+ +The original NWChem code—an ab initio computational chemistry software package that includes quantum chemical and molecular dynamics functionality—is nearly a quarter-century old. In updating the application, the NWChemEx developers decided to rewrite it from the ground up, with the ultimate goal of providing the framework for a next-generation molecular modeling package. The new, exascale-ready code is capable of enabling chemistry research on a variety of leading-edge computing systems.
+ +The NWChemEx developers aim to restructure core functionality—including the elimination of longstanding bottlenecks associated with the generally successful NWChem code—concurrent with the production of sophisticated physics models intended to leverage the computing power promised by the exascale era. As one component of this strategy, the developers have adopted the Aurora-supported DPC++ programming model as one of their development platforms.
+ +From a design point of view, the development team gives equal weight to physics models, architecture, and software structure in order to fully harness large-scale HPC systems. To this end, NWChemEx incorporates numerous modern software-engineering techniques for C++, while GPU compatibility and support have been planned since the project’s initial stages, orienting the code to the demands of exascale from its foundations.
+ +To overcome prior communication-related bottlenecks, the developers have localized communication to the greatest possible extent. NWChemEx is structured so that CPUs handle communication protocols as well as other non-intensive components (that is, algorithms dominated by conditional logic), while anything embarrassingly parallel or computationally expensive is processed on GPUs.
+ +For Intel hardware, the developers employ Intel’s DPC++ Compatibility Tool to port existing optimized CUDA code and translate it to DPC++. The Compatibility Tool is sophisticated enough to reliably determine appropriate syntax when translating abstractions from CUDA to SYCL, greatly reducing the developers’ burden. Following translation, the developers fine-tune the DPC++ code to remove any redundancies, inelegancies, or performance issues introduced by automation.
+ + + +NWChemEx simulations were carried out in 2023 for both single-GPU performance evaluations and large-scale demonstration runs involving up to 512 nodes. Canonical coupled cluster singles and doubles (CCSD) methods for molecular description showed faster performance on Aurora than was achieved using previous-generation systems, while the domain-based local pair natural orbital coupled-cluster method with single, double, and perturbative triple excitations (DLPNO-CCSD(T)) gave approximately the same performance as another tested system.
+ +The NWChemEx project, when realized, has the potential to accelerate the development of next-generation batteries, drive the design of new functional materials, and advance the simulation of combustive chemical processes, in addition to addressing a wealth of other pressing challenges at the forefront of molecular modeling, including the development of stress-resistant biomass feedstock and the development of energy-efficient catalytic processes to convert biomass-derived materials into biofuels.
+ +OpenMC is a Monte Carlo neutron and photon transport simulation code originally written for CPU-based high-performance computing (HPC) systems and capable of using both distributed-memory (MPI) and shared-memory (OpenMP) parallelism. It simulates the stochastic motion of neutral particles through a model that, as a representation of a real-world experimental setup, can range in complexity from a simple slab of radiation-shielding material to a full-scale nuclear reactor. Researchers have been working to port the application to graphics processing unit (GPU)-based HPC systems.
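To make the particle-history idea concrete, the sketch below runs a toy one-dimensional Monte Carlo of neutron histories through a shielding slab, sampling exponential flight paths, isotropic scattering, and absorption. It uses made-up cross sections and does not use OpenMC’s actual API or physics; it is only an illustration of the method the code implements at vastly greater fidelity and scale.

```python
import math, random

def slab_transmission(thickness_cm, sigma_total, sigma_abs, histories=100_000):
    """Toy 1D Monte Carlo: follow neutron histories through a shielding slab,
    sampling exponential flight paths, isotropic scattering, and absorption.
    Returns the fraction of histories transmitted through the slab."""
    transmitted = 0
    for _ in range(histories):
        x, mu = 0.0, 1.0                       # start at the left face, moving right
        while True:
            x += mu * -math.log(1.0 - random.random()) / sigma_total  # flight distance
            if x < 0.0:                        # leaked back out the left face
                break
            if x > thickness_cm:               # transmitted through the right face
                transmitted += 1
                break
            if random.random() < sigma_abs / sigma_total:             # absorbed
                break
            mu = random.uniform(-1.0, 1.0)     # isotropic scatter: new direction cosine
    return transmitted / histories

# Hypothetical slab thickness and macroscopic cross sections (cm, 1/cm).
print(slab_transmission(thickness_cm=10.0, sigma_total=0.3, sigma_abs=0.1))
```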
+ +The GPU-oriented version of OpenMC has been completed and is already running on a number of GPU-based supercomputers, including Sunspot—the ALCF’s Aurora testbed and development system—and the ALCF’s NVIDIA-based Polaris. While the team’s goal is focused on honing performance on Aurora, the OpenMP offloading model has resulted in strong performance on every machine on which it was deployed, irrespective of vendor.
+ +Current full-machine projections for OpenMC running on Aurora, based on preliminary simulation runs performed on Sunspot, are in the ballpark of 20 billion particle histories per second—indicating a speedup by some 2500x over what could be achieved at full-machine scale at the time of the ECP’s inception (the goal for which had been a fiftyfold speedup).
+ +The code has been run across multiple types of GPUs, with large performance gains—all over 2x—demonstrated on the Aurora testbed, Sunspot, over other systems. The increases have been consistent across single-GPU, full-node, and multi-node comparisons performed in 2023 on as many as 96 GPUs.
+ + + +The ECP-supported ExaSMR project aims to use OpenMC to model the entire core of a nuclear reactor, generating virtual reactor simulation datasets with high-fidelity, coupled physics models for reactor phenomena that are truly predictive, filling in crucial gaps in experimental and operational reactor data. The extreme performance gains OpenMC has achieved on GPUs are finally bringing within reach a much larger class of problems that historically were deemed too expensive to simulate using Monte Carlo methods.
+ +Developed in tandem with the ECP-supported Whole Device Model Application project—which aims to build a high-fidelity model of magnetically confined fusion plasmas to plan experiments with ITER—XGC is a gyrokinetic particle-in-cell code (with an unstructured 2D grid and structured toroidal grid) used to perform large-scale simulations on DOE supercomputers, and optimized for treating edge plasma.
+ +Specializing in edge physics and realistic geometry, XGC is capable of solving boundary multiscale plasma problems across the magnetic separatrix (that is, the boundary between the magnetically confined and unconfined plasmas) and in contact with the material wall, called the divertor, using first-principles-based kinetic equations.
+ +To prepare for the next generation of high-performance computing, the code is being re-implemented for exascale using a performance-portable approach. Running at exascale will yield unique computational capabilities, some of which carry the potential for transformational impacts on fusion science: exascale expansion will make it possible to study, for instance, a larger and more realistic range of dimensionless plasma parameters than has ever been achieved, along with the energy-angle distribution of plasma particles impinging upon the material wall and the full spectrum of kinetic micro-instabilities that control the quality of energy confinement in a toroidal plasma. Further, exascale will enable physics modeling that incorporates multiple-charge tungsten ion species — impurities discharged from the tokamak vessel walls that impact edge-plasma behavior and fusion performance in the core-plasma through migration across the magnetic separatrix. Toward this end, XGC will support a wide array of additional features and modes, including delta-f and full-f, electrostatic and electromagnetic, axisymmetric, neutral particles with atomic cross-sections, atomic number transitions among different impurity states, and coupling physics in constant development.
+ +Optimization for exascale has required both GPU offloading and algorithmic flexibility. XGC uses the Kokkos programming model as its portability layer, with different backends. Using the C++ version of the code, researchers evaluated system performance for a gyrokinetic particle-in-cell simulation of tokamak plasma designed to predict ITER fusion reactor plasma behavior with tungsten impurity ions sputtered from the divertor. Performance on the Aurora test and development system, Sunspot, yielded scaling performance comparable to that of other GPU-based systems, while single-GPU performance was as much as 46 percent greater than was achieved with other systems.
+ + + +The resulting exascale application will be unique in its computational capabilities. Impacts in fusion science will potentially be transformational. For example, this project will enable a much larger and more realistic range of dimensionless plasma parameters than ever before, with the core and the edge plasma strongly coupled at a fundamental kinetic level based on the gyrokinetic equations; this is to be accomplished by providing the energy-angle distribution of plasmas hitting the material wall, calculating the critically needed Tungsten penetration into the burning core, and assessing the rich spectrum of kinetic micro-instabilities that control the quality of energy confinement in a toroidal plasma (e.g., tokamaks, stellarators).
+ + +The ALCF made significant progress in deploying its exascale supercomputer in 2023, completing the hardware installation, registering early performance numbers, and supporting early science teams’ initial runs on the system.
+ +In June 2023, the installation of Aurora’s 10,624th and final blade marked a major milestone in the efforts to deploy the ALCF’s exascale supercomputer. With the full machine in place and powered on, the Aurora team was able to begin the process of stress-testing, stabilizing, and optimizing the massive system to prepare for acceptance and full deployment in 2024.
+ +Built in partnership with Hewlett Packard Enterprise (HPE), Aurora is one of the fastest supercomputers in the world, with a theoretical peak performance of more than two exaflops of computing power. It is also one of the world’s largest supercomputers, occupying 10,000 square feet and weighing 600 tons. The system is powered by 21,248 Intel Xeon CPU Max Series processors and 63,744 Intel Data Center GPU Max Series processors. Notably, Aurora features more GPUs and more network endpoints in its interconnect technology than any system to date. To pave the way for a machine of this scale, Argonne first had to complete some substantial facility upgrades, including adding new data center space, mechanical rooms, and equipment that significantly increased the building’s power and cooling capacity.
+ +As is the case with all DOE leadership supercomputers, Aurora is a first-of-its-kind system equipped with leading-edge technologies that are being deployed at an unprecedented scale. This presents unique challenges in launching leadership-class systems as various hardware and software issues only emerge when approaching full-scale operations. The Aurora team, which includes staff from Argonne, Intel, and HPE, continues work to stabilize the supercomputer, which includes efforts such as optimizing the flow of data between network endpoints.
+ +In November, Aurora demonstrated strong early performance numbers while still in the stabilization period, underscoring its immense potential for scientific computing.
+ +At the SC23 conference, the supercomputer made its debut on the semi-annual TOP500 List with a partial system run. Using approximately half of the system’s nodes, Aurora achieved 585.34 petaflops, earning the #2 overall spot. In addition, Aurora’s storage system, DAOS, earned the top spot on the IO500 Production List, a semi-annual ranking of HPC storage performance.
+ + + +In another significant milestone for the supercomputer, early science teams began using Aurora for the first time in 2023. Several teams from the ALCF’s Aurora Early Science Program (ESP) and DOE’s Exascale Computing Project (ECP) were able to transition their work from the Sunspot test and development system to Aurora to start scaling and optimizing their applications for the supercomputer’s initial science campaigns. Their work has included performing scientifically meaningful calculations across a wide range of research areas.
+ +Once the early science period begins, the ECP and ESP teams will use the machine to carry out innovative research campaigns involving simulation, artificial intelligence, and data-intensive workloads in areas ranging from fusion energy science and cosmology to cancer research and aircraft design. In addition to pursuing groundbreaking research, these early users help to further stress test the supercomputer and identify potential bugs that need to be resolved ahead of its deployment.
+ +In 2024, an additional 24 research teams will begin using Aurora to ready their codes for the system via allocation awards from DOE’s INCITE program.
+ + +Looking beyond Aurora, the facility also kicked off the ALCF-4 effort to prepare for its next-generation supercomputer. In April 2023, DOE approved Critical Decision-0 (CD-0), which is the first step in procuring a new system.
+ +Led by Jini Ramprakash, ALCF-4 Project Director, and Kevin Harms, ALCF-4 Technical Director, the team is targeting 2028–2029 for the deployment of the facility’s next production supercomputer. The project’s goals include enabling a significant improvement in application performance over Aurora, continuing to support traditional HPC workloads alongside AI and data-intensive computations, and investigating the potential to accelerate the deployment and realization of new technologies.
+ +With Argonne’s Nexus effort, the ALCF continues to build off its long history of developing tools and capabilities to accelerate data-intensive science via an Integrated Research Infrastructure.
+ +When the massive upgrade at Argonne’s Advanced Photon Source (APS) is completed in 2024, experiments at the powerful X-ray light source are expected to generate 100–200 petabytes of scientific data per year. That’s a substantial increase over the approximately 5 petabytes that were being produced annually at the APS before the upgrade. When factoring in DOE’s four other light sources, the facilities are projected to collectively generate an exabyte of data per year in the coming decade.
+ +The growing deluge of scientific data is not unique to light sources. Telescopes, particle accelerators, fusion research facilities, remote sensors, and other scientific instruments also produce large amounts of data. And as their capabilities improve over time, the data generation rates will only continue to grow. The scientific community’s ability to process, analyze, store, and share these massive datasets is critical to gaining insights that will spark new discoveries.
+ +To help scientists manage the ever-increasing amount of scientific data, Argonne’s Nexus effort is playing a key role in supporting DOE’s vision to build an Integrated Research Infrastructure (IRI). The development of an IRI would accelerate data-intensive science by creating an environment that seamlessly melds large-scale research facilities with DOE’s world-class supercomputing, artificial intelligence (AI), and data resources.
+ + + +For over three decades, Argonne has been working to develop tools and methods to integrate its powerful computing resources with experiments. The ALCF’s IRI efforts include a number of successful collaborations that demonstrate the efficacy of combining its supercomputers with experiments for near real-time data analysis. Merging ALCF supercomputers with the APS has been a significant focus of the lab’s IRI-related research, but the work has also involved collaborations with facilities ranging from DIII-D National Fusion Facility in California to CERN’s Large Hadron Collider (LHC) in Switzerland.
+ +These collaborations have led to the creation of new capabilities for on-demand computing and managing complex workflows, giving the lab valuable experience to support the DOE IRI initiative. Argonne also operates several resources and services that are key to realizing the IRI vision.
+ +The IRI will not only enable experiments to analyze vast amounts of data, but it will also allow them to process large datasets quickly for rapid results. This is crucial as experiment-time analysis often plays a key role in shaping subsequent experiments.
+ + + +For the Argonne-DIII-D collaboration, researchers demonstrated how the close integration of ALCF supercomputers could benefit a fast-paced experimental setup. Their work centered on a fusion experiment that used a series of plasma pulses, or shots, to study the behavior of plasmas under controlled conditions. The shots were occurring every 20 minutes, but the data analysis required more than 20 minutes using their local computing resources, so the results were not available in time to inform the ensuing shot. DIII-D teamed up with the ALCF to explore how they could leverage supercomputers to speed up the analysis process.
+ +To help DIII-D researchers obtain results on a between-pulse timescale, the ALCF team automated and shifted the analysis step to ALCF systems, which computed the analysis of every single pulse and returned the results to the research team in a fraction of the time required by the computing resources locally available at DIII-D. Not only did the DIII-D team get the results in time to calibrate the next shot, they also got 16x higher resolution analyses that helped improve the accuracy of their experimental configuration.
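Schematically, the between-pulse pattern amounts to watching for each new shot’s data, dispatching the analysis to remote on-demand compute, and returning results before the next pulse. The sketch below is only a toy illustration of that loop: the directory, file naming, and analyze_shot command are hypothetical, and the real workflow relies on ALCF services and schedulers rather than a simple polling script.

```python
import pathlib
import subprocess
import time

DATA_DIR = pathlib.Path("/experiment/shots")   # hypothetical location of new shot files
processed = set()

def dispatch_analysis(shot_file):
    """Hypothetical stand-in for submitting an analysis job to an on-demand
    queue on an ALCF system and returning results between pulses."""
    subprocess.run(["analyze_shot", str(shot_file)], check=True)  # hypothetical command

while True:
    if DATA_DIR.exists():
        for shot_file in sorted(DATA_DIR.glob("shot_*.h5")):
            if shot_file not in processed:
                dispatch_analysis(shot_file)   # must finish well inside the ~20-minute window
                processed.add(shot_file)
    time.sleep(5)                              # poll for the next pulse's data
```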
+ +Many APS experiments, including battery research, the exploration of materials failure, and drug development, also need data analyzed in near real-time so scientists can modify their experiments as they are running. By getting immediate analysis results, researchers can use the insights to steer an experiment and zoom in on a particular area to see critical processes, such as the molecular changes that occur during a battery’s charge and discharge cycles, as they are happening.
+ +A fully realized IRI would also impact the people conducting the research. Scientists must often devote considerable time and effort to managing data when running an experiment. This includes tasks like storing, transferring, validating, and sharing data before it can be used to gain new insights. The IRI seeks to automate many of these tedious data management tasks so researchers can focus more on the science. This would help streamline the scientific process by freeing up scientists to form hypotheses while experiments are being carried out.
+ +Getting instant access to DOE supercomputers for data analysis requires a shift in how the computing facilities operate. Each facility has established policies and processes for gaining access to machines, setting up user accounts, managing data and other tasks. If a researcher is set up at one computing facility but needs to use supercomputers at the other facilities, they would have to go through a similar set of steps again for each site.
+ +Once a project is set up, researchers submit their jobs to a queue, where they wait their turn to run on the supercomputer. While the traditional queuing system helps optimize supercomputer usage at the facilities, it does not support the rapid turnaround times needed for the IRI.
+ +To make things easy for the end users, the IRI will require implementing a uniform way for experimental teams to gain quick access to the DOE supercomputing resources.
+ +To that end, Argonne has developed and demonstrated methods for overcoming both the user account and job scheduling challenges. The co-location of the APS and the ALCF on the Argonne campus has offered an ideal environment for testing and demonstrating such capabilities. When the ALCF launched the Polaris supercomputer in 2022, four of the system’s racks were dedicated to advancing the integration efforts with experimental facilities.
+ + + +In the case of user accounts, the existing process can get unwieldy for experiments involving several team members who need to use the computing facilities for data processing. Because many experiments have a team of people collecting data and running analysis jobs, it is important to devise a method that supports the experiment independent of who is operating the instruments on a particular day. In response to this challenge, the Argonne team has piloted the idea of employing “service accounts” that provide secure access to a particular experiment instead of requiring each team member to have an active account.
+ +To address the job scheduling issue, the Argonne team has set aside a portion of Polaris nodes to run with “on-demand” and “preemptable” queues. This approach allows time-sensitive jobs to run on the dedicated nodes immediately.
+ +Using data collected during an APS experiment, the team was able to complete their first fully automated end-to-end test of the service accounts and preemptable queues on Polaris with no humans in the loop. While work continues to enable these capabilities at more and more beamlines, this effort points to a future where the integration of the upgraded APS and the ALCF’s Aurora exascale supercomputer will transform science at Argonne and beyond.
+ +While Argonne and its fellow national labs have been working on projects to demonstrate the promise of an integrated research paradigm for the past several years, DOE’s Advanced Scientific Computing Research (ASCR) program made it a more formal initiative in 2020 with the creation of the IRI Task Force. Composed of members from several national labs, including Argonne’s Corey Adams, Jini Ramprakash, Nicholas Schwarz, and Tom Uram, the task force identified the opportunities, risks, and challenges posed by such an integration.
+ + + +ASCR recently launched the IRI Blueprint Activity to create a framework for implementing the IRI. In 2023, the blueprint team, which included Ramprakash and Schwarz, released the IRI Architecture Blueprint Activity Report, which describes a path forward from the lab’s individual partnerships and demonstrations to a broader long-term strategy that will work across the DOE ecosystem. Over the past year, the blueprint activities have started to formalize with the introduction of IRI testbed resources and environments. Now in place at each of the DOE computing facilities, the testbeds facilitate research to explore and refine IRI ideas in collaboration with teams from DOE experimental facilities.
+ + + +With the launch of Argonne’s Nexus effort, the lab will continue to leverage its expertise and resources to help DOE and the larger scientific community enable and scale this new paradigm across a diverse range of research areas, scientific instruments, and user facilities.
+ + +Theta’s retirement marks the end of a productive run of enabling groundbreaking research across diverse fields, including materials discovery, supernova simulations, and AI for science.
+ +After more than six years of enabling breakthroughs in scientific computing, the ALCF’s Theta supercomputer was retired at the end of 2023. Launched in July 2017, the machine delivered 202 million compute hours to more than 600 projects, propelling advances in areas ranging from battery research to fusion energy science.
+ +Theta was a pivotal system for science at Argonne and beyond. Not only did Theta deliver on the ALCF’s mission to enable large-scale computational science campaigns, but it was also the supercomputer that continued the ALCF’s transformation into a user facility that supports machine learning and data science methods alongside more traditional modeling and simulation projects.
+ +Theta’s run as an Argonne supercomputer coincided with the emergence of AI as a critical tool for science. The system provided researchers with a platform that could handle a mix of simulation, AI, and data analysis tasks, catalyzing groundbreaking studies across diverse scientific domains.
+ +Around the same time that Theta made its debut in 2017, the facility launched the ALCF Data Science Program (ADSP) to support HPC projects that were employing machine learning and other AI methods to tackle big data challenges. This initiative gave the facility’s data science and learning capabilities a boost while also building up a new community of users.
+ +Theta is succeeded by Polaris and the Aurora exascale system as the lab’s primary supercomputers for open scientific research. Theta’s Intel architecture and its expansion to include NVIDIA GPUs have played a key role in helping the facility and its user community transition to Polaris’s hybrid architecture and Aurora’s cutting-edge Intel exascale hardware. Theta’s MCDRAM mode, for example, helped pave the way to Aurora’s high-bandwidth memory capabilities.
+ + + +Funded by the Coronavirus Aid, Relief and Economic Security (CARES) Act in 2020, the system’s GPU hardware expansion, known as ThetaGPU, was initially dedicated to COVID-19 research. The GPU-powered component was later made available to all research projects. After Theta’s retirement, the ThetaGPU hardware was repurposed to create a new machine called Sophia for specialized tasks, including a major focus on supporting AI for science.
+ +Beyond its powerful hardware, the system’s legacy will be the research breakthroughs it enabled over the years. From detailed molecular simulations to massive cosmological models, Theta supported hundreds of computationally intensive research projects that are only possible at a supercomputing facility like the ALCF.
+ +Theta allowed researchers to perform some of the world’s largest simulations of engines and supernovae. The system powered efforts to model the spread of COVID-19 and assess the energy use of the nation’s buildings. It enabled AI-driven research to accelerate the search for new catalysts and promising drug candidates. Theta also gave industry R&D a boost, helping TAE Technologies inform the design of its fusion energy devices, advancing 3M’s efforts to improve the energy efficiency of a manufacturing process, and generating data to aid ComEd in preparing for the potential impacts of climate change. The list of impactful science projects goes on and on.
+ +One of the pioneering machine learning projects was led by Jacqueline Cole of the University of Cambridge. With support from the ADSP, her team used Theta to speed up the process of identifying new materials for improved solar cells. It began with an effort to sort through hundreds of thousands of scientific journals to collect data on a wide variety of chemical compounds. The team created an automated workflow that combined simulation, data mining, and machine learning techniques to zero in on the most promising candidates from a pool of nearly 10,000 compounds. This allowed the researchers to pinpoint five high-performing materials for laboratory testing.
+ +Simulating supernova explosions is another area of research that benefitted from Theta’s computational muscle. As part of a multi-year project, Adam Burrows of Princeton University used the supercomputer to advance the state of the art in performing supernova simulations in 3D. The team’s work on Theta has included carrying out one of the largest collections of 3D supernova simulations and the longest duration full-physics 3D supernova calculation ever performed. With Theta now retired, the Princeton team continues their work to carry out longer and more detailed 3D supernova simulations on Polaris and Aurora.
+ +While Theta retired from its full-time role at the end of 2023, the system will support one last research campaign in 2024 before it is officially powered down. As part of a collaboration between the DOE-supported LSST Dark Energy Science Collaboration and NASA-supported researchers, a multi-institutional team will use Theta to produce 3 million simulated images for the surveys to be conducted by the Nancy Grace Roman Space Telescope and the Vera C. Rubin Observatory. The team will generate a set of overlapping Roman-Rubin time domain surveys at the individual pixel level. These detailed images will enable the exploration of highly impactful joint science opportunities between the two surveys, especially for dark energy studies.
+ +The ALCF is accelerating scientific discoveries in many disciplines, ranging from physics and materials science to biology and engineering.
+ +As a national user facility dedicated to open science, any researcher in the world with a large-scale computing problem can apply for time on ALCF computing resources.
+ +Researchers gain access to ALCF systems for computational science and engineering projects through competitive, peer-reviewed allocation programs supported by the DOE and Argonne.
+ +The ALCF also hosts competitive, peer-reviewed application programs designed to prepare key scientific applications and innovative computational methods for the architecture and scale of DOE supercomputers.
+ +The Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program aims to accelerate scientific discoveries and technological innovations by awarding ALCF computing time and resources to large-scale, computationally intensive projects that address grand challenges in science and engineering.
+ +The ASCR Leadership Computing Challenge (ALCC) program allocates ALCF computing resources to projects that advance the DOE mission; help to broaden the community of researchers capable of using leadership computing resources; and serve the national interests for scientific discovery, technological innovation, and economic competitiveness.
+ +Director’s Discretionary projects are dedicated to leadership computing preparation, INCITE and ALCC scaling, and efforts to maximize scientific application efficiency and productivity on leadership computing platforms.
+ +As part of the process of bringing a new supercomputer into production, the ALCF conducts its Early Science Program (ESP) to prepare applications for the architecture and scale of a new system. ESP projects represent a typical system workload at the ALCF and cover key scientific areas and numerical methods.
| Domain | INCITE | ALCC |
|---|---|---|
| A. Biological Sciences | 8% | 4% |
| B. Chemistry | 10% | 24% |
| C. Computer Science | — | 1% |
| D. Earth Science | 8% | 7% |
| E. Energy Technologies | — | 20% |
| F. Engineering | 14% | 5% |
| G. Materials Science | 15% | 13% |
| H. Physics | 45% | 26% |
ALCC data are from calendar year 2023.
+ +In 2023, scientists from across the world used ALCF supercomputing and AI resources to accelerate discovery and innovation across a wide range of research areas. The following science highlights detail some of the groundbreaking research campaigns carried out by ALCF users over the past year.
+ Energy Technologies | Simulation
+ +With a push towards decarbonizing the aviation sector, sustainable aviation fuels (SAFs) have gained prominence as a potential replacement for fossil fuels. This project is developing the capabilities to perform fully-resolved simulations of modern gas turbine combustors to enable improved understanding of the multiphysics processes in the context of advancing the development of SAFs.
+ +To assess the viability of various SAFs, researchers must be able to understand and predict the complex flow, spray, and combustion processes taking place in gas turbine combustors, as well as their influence on events, such as lean blowout, high-altitude relight, and cold start, that affect the performance of gas turbines. With recent advances in numerical methods and the availability of HPC resources, computer simulations can provide unprecedented details of the underlying multiphysics processes, but they rely on the complex task of creating a detailed computational model of the gas turbine that is accurate and runs efficiently on modern computers.
+ +The objective of this research is to develop the capabilities to perform fully resolved simulations of modern gas turbine combustors using Nek5000, a high-order spectral element method (SEM) code developed at Argonne and targeted at exascale systems, to enable improved understanding of the multiphysics processes in the context of advancing the development of sustainable aviation fuels. Proper orthogonal decomposition of the turbulent flow field was performed to investigate the dynamics of the large- and small-scale turbulence in the combustor. Finally, simulations with fuel injection were used to determine the effect of fuel spray on the turbulent flow structures.
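Proper orthogonal decomposition of the kind mentioned above is commonly computed from a matrix of flow-field snapshots via a singular value decomposition. The NumPy sketch below shows that generic method on synthetic data; it is not the team’s Nek5000 analysis, and the snapshot dimensions are arbitrary.

```python
import numpy as np

# Synthetic "snapshots": each column is one flow-field snapshot flattened over
# the grid points (random data stands in for simulation output).
rng = np.random.default_rng(1)
n_points, n_snapshots = 5000, 200
snapshots = rng.standard_normal((n_points, n_snapshots))

mean_flow = snapshots.mean(axis=1, keepdims=True)
fluctuations = snapshots - mean_flow                 # POD acts on the fluctuating field

# Thin SVD: columns of U are POD modes; singular values rank their energy content.
U, s, Vt = np.linalg.svd(fluctuations, full_matrices=False)
energy = s**2 / np.sum(s**2)
print("energy fraction captured by first 10 modes:", energy[:10].sum())

# Low-order reconstruction using the r most energetic modes.
r = 10
reconstruction = mean_flow + U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
print("relative reconstruction error with", r, "modes:", err)
```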
+ +In this project, the team performed the first-ever wall-resolved large eddy simulations of the turbulent flow and spray processes in the Army Research Laboratory’s ARC-M1 research combustor. The simulations were validated using particle image velocimetry measurements from a group at the University of Illinois at Urbana-Champaign, and showed good agreement. The simulations demonstrated the presence of large and small recirculation regions generated due to mixing between the different flow streams. The accurate prediction of these recirculation regions is key in predicting the flame anchoring and dynamics for reacting simulations.
+ +These high-fidelity simulations that leverage the DOE supercomputers can help researchers understand the combustion and heat transfer challenges introduced by using low-carbon sustainable aviation fuels. This project will help establish a high-fidelity, scalable, numerical framework that can be used for evaluating the effect of fuel properties on flow and flame dynamics in a practical gas turbine combustor.
+ Engineering | Simulation
+ +Hypersonic flight, the ability to fly at more than five times the speed of sound, has the potential to revolutionize technologies for national security, aviation, and space exploration. However, a fundamental understanding of the aerothermodynamics of hypersonic flight is needed to enable technological advances in this field. A research team from the University of Dayton Research Institute and Air Force Research Laboratory is using ALCF supercomputers to shed light on the complex thermal environment that hypersonic vehicles encounter.
+ +Strong shockwaves formed during hypersonic flight can cause the excitation of internal energy modes and chemical reactivity in the shock-heated gas. The rate processes for these phenomena compete with the local flow time, causing the flow to be in thermal and chemical nonequilibrium. Proper characterization of this state is important for designing the required thermal protection systems for hypersonic vehicles. A key challenge is to ensure that reduced-order models used in computational fluid dynamics codes can capture the strong coupling between the fluid mechanics of the gas flow, gas-phase thermochemistry, and transport properties at high temperatures. Traditionally, these physics have been investigated separately, producing simplified models that tend to reproduce only certain aspects of high-speed, reacting flows.
+ +With this INCITE project, the team is running a custom version of the SPARTA Direct Simulation Monte Carlo (DSMC) code on ALCF computing resources to carry out direct molecular simulations (DMS) of hypersonic experiments. Their goal is to conduct simulations that rely solely on molecular-level interactions modeled using quantum mechanics, providing a fundamental comparison with experiments, and well-characterized solutions that can be used as benchmarks for reduced-order models.
+ +In a new study published in the Journal of Fluid Mechanics, the team detailed a large-scale, fully resolved DSMC computation of a non-equilibrium, reactive flow of pure oxygen over a double cone (a canonical hypersonic test case). The researchers used their highly accurate DMS method to obtain first-principles data to inform the parameters of the thermochemical and transport collision models. Their computations show good agreement with heat flux and pressure measured on the test article during the experiment. The computation also provided molecular-level insights such as the nonequilibrium distribution of energy in the kinetic and vibrational modes in the shock layer. The team’s results show the importance of particle methods in verifying physical assumptions made by reduced-order models.
+ +The team’s research is advancing our understanding of the complex aerothermodynamics of hypersonic flight, providing insights that could help inform the design of safer and more efficient technologies for space travel and defense.
+ Materials Science | Simulation
+ +Proton beam therapy, a promising alternative to conventional X-rays for cancer treatment, relies on understanding the radiation-induced response of DNA. This knowledge not only enhances the treatment by allowing for more precise tumor targeting that minimizes damage to healthy cells, but also holds significance for space missions, where exposure to high-energy protons is a concern for astronauts due to limited data on bodily effects. To help advance our understanding of this complex process, researchers from the University of North Carolina at Chapel Hill are using ALCF supercomputers to study the quantum mechanics involved in the transfer of energy from high-energy protons to DNA.
+ +The lack of molecular-level understanding of the electronic excitation response of DNA to charged-particle radiation, such as high-energy protons, remains a fundamental scientific bottleneck in advancing proton and other ion-beam cancer therapies. Specifically, the relationship between high-energy protons and various types of DNA damage remains a significant knowledge gap. The ultrafast nature of the excitations makes experimental investigation difficult. However, employing quantum mechanical methods and non-equilibrium simulations can provide valuable insights into the intricate energy transfer process of high-energy protons damaging DNA.
+ +In a recent study, the University of North Carolina at Chapel Hill team leveraged the ALCF’s Theta supercomputer to carry out first-principles real-time time-dependent density functional theory simulations to unravel the quantum mechanical details of the energy transfer from high-energy protons to DNA in water. The researchers used the Qb@ll version of the Qbox code for the simulations, which included 3,991 atoms and 11,172 electrons, with six different proton kinetic energies sampled for each proton path. Two proton paths were considered: one directly through the center of the DNA and another along a sugar-phosphate side chain. By including explicit water molecules in their simulations, the team was able to get a more accurate picture of the DNA excitations in the initial radiation process over the first few femtoseconds. ALCF staff worked with the researchers to employ optimized libraries and to resolve compiling issues.
+ +The team’s calculations revealed that high-energy protons transfer significantly more energy to the sugar-phosphate side chains than to the nucleobases of DNA, and that greater energy transfer is expected onto the DNA side chains than onto water. The researchers determined that the stopping power magnitude for the side path was more than three times larger at the peak and at least twice as large at all velocities. As a result of the electronic stopping process, highly energetic holes are generated on the DNA side chains as a source of oxidative damage. The stopping power was found to depend largely on the energetics of the holes generated. Results from these detailed simulations help to fill the knowledge gap in understanding the detailed mechanisms behind the extensive DNA strand break lesions observed with a proton beam, and will help inform the development of increasingly sophisticated multiscale medical physics models.
+ +The team’s research into the radiation-induced response of DNA has important implications for human health. Their insights will help researchers develop a more complete understanding of the initial excitation response in proton beam cancer therapy and add to the growing knowledge base for advanced multiscale models in medical physics. The team’s findings will also help determine how exposure to radiation, such as cosmic rays in space, can lead to potential health risks due to DNA damage.
+ Materials Science | Simulation, Learning
+ +Improving our ability to understand and predict the behavior and properties of molecules and materials is crucial to enabling the design and discovery of new materials for batteries, catalysts, semiconductors, and countless other applications. With this INCITE project, a multi-institutional team is advancing the use of quantum Monte Carlo (QMC) methods, coupled with machine learning, to provide accurate and reliable predictions of the fundamental properties of a wide range of molecules and materials.
+ +The predictive accuracy of quantum machine learning (QML) models trained on quantum chemistry data and used for the navigation of the chemical compound space is inherently limited by the predictive accuracy of the approximations used within the underlying quantum theory. To help QML models achieve the coveted threshold of chemical accuracy (~1 kcal/mol average deviation of calculated values from experimental measurements of atomization energies), the INCITE team is leveraging DOE supercomputers to demonstrate the usefulness of recently implemented and numerically efficient QMC methods for generating highly accurate training data.
+ +The team’s primary application is QMCPACK, an open-source code for computing the electronic structure of atoms, molecules, 2D nanomaterials, and solids. As part of a recent study, the researchers used the ALCF’s Theta supercomputer to couple QMCPACK with Δ-QML-based surrogate methods to predict the energetics of large molecules at chemical accuracy and at a fraction of the computational cost of traditional machine learning methods.
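The Δ-QML idea is to train a model on the difference between an inexpensive baseline and an expensive reference (standing in here for DMC), then add the learned correction to the baseline at prediction time. The sketch below illustrates that general pattern with synthetic features and a kernel ridge regressor; the descriptors, baseline method, and model choice are illustrative assumptions, not the team’s actual framework.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(42)

# Synthetic stand-ins: molecular feature vectors, cheap baseline energies
# (e.g., from a low-level method), and expensive reference energies (e.g., DMC).
X = rng.standard_normal((500, 32))
e_baseline = X @ rng.standard_normal(32)
e_reference = e_baseline + 0.1 * np.tanh(X[:, 0] * X[:, 1])   # small systematic correction

# Delta-learning: fit only the (reference - baseline) correction.
model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.05)
train, test = slice(0, 400), slice(400, 500)
model.fit(X[train], (e_reference - e_baseline)[train])

# Prediction = cheap baseline + learned correction.
e_pred = e_baseline[test] + model.predict(X[test])
mae = np.abs(e_pred - e_reference[test]).mean()
print(f"MAE vs reference: {mae:.4f}")
```

Because only the correction is learned, the surrogate needs far fewer expensive reference calculations than a model trained on the reference energies alone, which is the efficiency argument the paragraph above describes.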
+ +In a paper published in the Journal of Chemical Theory and Computation, the team showed that their Δ-QML framework can alleviate the computational burden of QMC such that it offers clear potential to support the formation of high-quality descriptions across the chemical space. Their work involved using Theta to conduct diffusion Monte Carlo (DMC) calculations on over 1,000 small amons containing up to five heavy atoms and covering parts of the QM9 database, which is used routinely for machine learning predictions of various chemical properties. This is the largest dataset ever computed with DMC and the first use of such a dataset for machine learning. The team’s research suggests that the QMC training datasets of amons can predict total energies with near chemical accuracy throughout chemical space, setting the foundation for the study of larger databases.
+ +Using the Δ-QML approach, the team was able to predict the energetics of large molecules at a reduced computational cost while maintaining chemical accuracy. The high efficiency of the Δ-QML framework compared to traditional approaches indicates a path to use the computationally expensive but highly accurate QMC methodology in machine learning. This new method will allow researchers to study larger systems and predict the properties of molecules and materials more accurately, which could lead to significant advances in fields such as materials science, drug discovery, and energy research.
+ Biological Sciences | Simulation
+ +Proteins play an essential role in nearly all biological processes. By investigating proteins and their functions, scientists are providing insights to drive drug development, further our understanding of disease mechanisms, and advance many other areas of biomedical research. With help from ALCF supercomputers, a team from University of Illinois Chicago (UIC) has made an important breakthrough in understanding how proteins function.
+ +The primary goal of protein science is to understand how proteins function, which requires understanding the dynamics responsible for transitions between different functional structures of a protein. If the exact reaction coordinates (the small number of essential coordinates that control functional dynamics) were known, researchers could determine the transition rate for any protein configuration and thoroughly understand its mechanism. Despite intensive efforts, identifying the exact reaction coordinates in complex molecules remains a formidable challenge.
+ +The UIC team employed their generalized work functional (GWF) method to study the flap opening process of HIV-1 protease, a complex protein and major drug target for combatting the HIV virus. GWF is a fundamental mechanical quantity rooted in Newton’s law. Using the transition path sampling method, the researchers leveraged the ALCF’s Theta supercomputer to generate 2,000 reactive trajectories that start from structures of HIV-1 protease with flaps in the semi-open state and end at structures with flaps in the open state. This data served as the input to the GWF method, which was used to pinpoint the exact reaction coordinates and determine the molecular mechanism of the flap opening process.
+ +As detailed in their paper in the Proceedings of the National Academy of Sciences, the team was able to identify the exact reaction coordinates for a major conformational change of a large functional protein for the first time. Their results show that the flap opening of HIV-1 protease has six reaction coordinates, providing the precise definition of collectivity and cooperativity in the functional dynamics of a protein. Success in determining the reaction coordinates enabled acceleration of this important process by a factor of 10³ to 10⁴ compared to regular molecular dynamics simulations. The team’s work demonstrates that the GWF method could potentially be applied to other problems in protein research, such as folding, entropic barriers, and reaction rates.
+ +By successfully identifying the exact reaction coordinates for a complex protein for the first time, the team has made an important breakthrough toward understanding protein functional dynamics. Their work has far-reaching implications for both biomedical research and protein engineering, providing insights that are crucial for designing drugs, fighting drug resistance, and developing artificial enzymes that can complete desired functions.
+ Materials Science | Simulation, Learning
+ +This project aims to boost scalable manufacturing of quantum materials and ultrafast control of their emergent properties on demand using AI-guided exascale quantum dynamics simulations. Neural-network quantum molecular dynamics (NNQMD) simulations based on machine learning are revolutionizing atomistic simulations of materials by providing quantum-mechanical accuracy at speeds orders of magnitude faster than is possible with traditional methods, but face challenges in scaling properly on massively parallel systems.
+ +Despite its remarkable computational scalability, massively parallel NNQMD simulations face a major unsolved issue known as fidelity scaling. In such cases, small prediction errors can propagate and lead to unphysical atomic forces that degrade the accuracy of atomic trajectory over time. These force outliers can even cause the simulation to terminate unexpectedly. As simulations become spatially larger and temporally longer, the number of unphysical force predictions is expected to scale proportionally, which could severely limit NNQMD fidelity on new exascale supercomputing platforms, especially for the most exciting far-from-equilibrium applications.
+ +To solve the fidelity-scaling issue, the researchers implemented the Allegro–Legato model in their NNQMD code, RXMD-NN, which was deployed on the ALCF’s Polaris supercomputer. The model was trained using sharpness-aware minimization to regularize its sharpness along with its training loss and thereby enhance its robustness.
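Sharpness-aware minimization perturbs the weights along the gradient direction before computing the gradient used for the actual update, favoring flat minima. The PyTorch sketch below shows that generic two-step update on a toy regression model; it is not the Allegro–Legato or RXMD-NN implementation, and the model, data, and neighborhood size rho are placeholders.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 64), nn.Tanh(), nn.Linear(64, 1))
base_opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
rho = 0.05                                   # neighborhood size for the sharpness term

def sam_step(x, y):
    # 1) gradient at the current weights
    loss = loss_fn(model(x), y)
    loss.backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12

    # 2) ascend to the (approximate) worst-case nearby weights
    with torch.no_grad():
        eps = [rho * g / norm for g in grads]
        for p, e in zip(model.parameters(), eps):
            p.add_(e)

    # 3) gradient at the perturbed weights drives the actual update
    base_opt.zero_grad()
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.sub_(e)                        # restore the original weights
    base_opt.step()
    base_opt.zero_grad()
    return loss.item()

x, y = torch.randn(32, 16), torch.randn(32, 1)
for _ in range(100):
    sam_step(x, y)
```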
+ +As shown in an ISC High Performance 2023 paper, the implemented Allegro–Legato model increases time-to-failure while maintaining the same inference speed and nearly equal accuracy. Specifically, time-to-failure in Allegro–Legato is less dependent on problem size, thus allowing larger-scale and longer-duration NNQMD simulations without failure. Additionally, the researchers demonstrated that the fidelity scalability of the NNQMD model correlates more strongly with the model’s sharpness than with its number of parameters.
+ +This work, directly validated by x-ray free-electron laser, ultrafast electron diffraction, and neutron experiments at DOE facilities, will enable future production of high-quality custom quantum material architectures for broad and critical applications supporting continued U.S. leadership in technology development, including sustainable ammonia production, thereby addressing DOE basic research needs for transformative manufacturing and quantum materials. The Allegro–Legato model exhibits excellent computational scalability and GPU acceleration in carrying out NNQMD simulations, with strong promise for emerging exascale systems.
+ Biological Sciences | Simulation
+ +The COVID-19 pandemic has had far-reaching health repercussions worldwide. One notable impact has been a sharp decline in cancer screening rates, including for colorectal cancer (CRC), which remains the second-leading cause of cancer deaths in the United States. To investigate the effects of these screening disruptions, a multi-institutional team of researchers leveraged ALCF supercomputers to run CRC models to estimate their impact on long-term cancer outcomes.
+ +Despite cancer screening reopening efforts, CRC screening has not yet returned to pre-pandemic levels. The pandemic continues to affect CRC screening and diagnosis through staff shortages that reduce capacity at gastroenterology clinics and patient hesitancy to seek care. The pandemic may also further exacerbate existing disparities related to screening. The burden of unemployment and loss of access to healthcare varies across different racial and ethnic groups, which could contribute to widening disparities in cancer outcomes.
+ +With help from ALCF computing resources, a team of researchers from Argonne National Laboratory, RAND Corporation, Erasmus Medical Center, Fred Hutchinson Cancer Center, and Memorial Sloan Kettering Cancer Center used two independently developed microsimulation CRC models — CRC-SPIN and MISCAN-Colon — to estimate the effects of pandemic-induced disruptions in colonoscopy screening for eight pre-pandemic, average-CRC-risk population cohorts. The team leveraged the ALCF’s Theta supercomputer to calibrate the CRC-SPIN model using the Incremental Mixture Approximate Bayesian Computation (IMABC) method. Each Theta node could run 64 concurrent CRC-SPIN models, with jobs consisting of large, space-filling parameter samples and longer iterative parameter space sampling. The researchers evaluated three channels through which screening was disrupted: delays in screening, regimen switching, and screening discontinuation. The impact of these disruptions on long-term CRC outcomes was measured by the number of life-years lost due to CRC screening disruptions compared to a scenario without any disruptions.
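The calibration workflow can be pictured as many independent model runs fanned out in parallel, with parameter draws kept only when the simulated outputs land close to calibration targets. The sketch below shows a simplified rejection-style version of that pattern using a Python process pool; the model function, prior, targets, and tolerance are invented placeholders, and the real study used the CRC-SPIN model with the full IMABC algorithm rather than this plain rejection step.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

rng = np.random.default_rng(0)
TARGET = np.array([0.045, 0.012])   # made-up calibration targets (assumption)
TOL = 0.005                         # made-up acceptance tolerance (assumption)

def run_model(theta):
    """Placeholder for one natural-history simulation run;
    returns summary outputs (e.g., incidence-like quantities)."""
    risk, progression = theta
    return np.array([0.04 + 0.1 * risk, 0.01 + 0.05 * progression])

def within_tolerance(summary):
    return bool(np.all(np.abs(summary - TARGET) < TOL))

if __name__ == "__main__":
    # Space-filling draw from a simple prior; the real workflow ran 64 concurrent
    # model instances per Theta node over much larger, iteratively refined samples.
    thetas = rng.uniform(0.0, 0.1, size=(10_000, 2))
    with ProcessPoolExecutor(max_workers=64) as pool:
        summaries = list(pool.map(run_model, thetas, chunksize=256))
    kept = [t for t, s in zip(thetas, summaries) if within_tolerance(s)]
    print(f"accepted {len(kept)} of {len(thetas)} parameter draws")
```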
+ +The team examined a total of 25 scenarios based on different population cohorts (e.g., 50-, 60-, and 70-year-olds who did or did not adhere to screening) that, following the pandemic, experienced no disruptions, some delays, or discontinued screening. While short-term delays in screening of 3–18 months were predicted to result in minor decreases in life expectancy, discontinuing screening resulted in much more significant decreases. The team’s findings demonstrate that unequal recovery of screening following the pandemic can further widen disparities. The worst-case scenario considered was that of 50-year-olds who postponed screening until the age of 65, when they became Medicare eligible; other disruption scenarios for this group were predicted to have minor effects.
+ +The team’s research highlights the potential harm caused by disruptions in cancer screening due to the COVID-19 pandemic. By analyzing different age groups and screening statuses, their study underscores how discontinuing screening could reduce life expectancy, emphasizing the importance of ensuring equitable recovery to screening to prevent further disparities.
+ Biological Sciences | Simulation
+ +Cardiovascular disease, including heart attack and stroke, is the leading cause of death in the United States. In this project, simulations of blood flow with deformable red blood cells were performed for the first time in a patient-specific retina vascular network, examining the impact of blockages on flow rate and cell transport dynamics.
+ +Modeling capillary flow accurately is challenging due to the complex structure, with its many vessel branches and loops, and moving cell suspensions whose size is comparable to vessel diameters. Large three-dimensional (3D) vascular networks such as this one are typically represented by simplified one-dimensional (1D) models at a much lower computational cost; however, these reduced-order models may not accurately describe the flow dynamics.
+ +Flow dynamics in a patient-specific retina capillary network were simulated by coupling a lattice Boltzmann method (LBM)-based fluid solver with particle-based cell membrane models using the immersed boundary method (IBM). The geometry of the retina network was obtained from the National Institutes of Health 3D print database. The red and white blood cells were modeled as thin membranes using a particle-based method implemented in LAMMPS. Collaborating with the ALCF Visualization and Data Analytics team, the team used Cooley to develop scientific visualizations of their blood flow simulations.
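In an immersed boundary coupling, the fluid velocity on the lattice is interpolated to the membrane particles through a smoothed delta kernel, and the membrane forces are spread back onto the grid with the same kernel. The NumPy sketch below shows that interpolate/spread pair in one dimension using a standard 4-point kernel; the grid, spacing, and arrays are illustrative and not taken from the team’s LBM/LAMMPS implementation.

```python
import numpy as np

def delta_kernel(r):
    """Peskin-style 4-point smoothed delta function (1D, in grid units)."""
    r = np.abs(r)
    phi = np.zeros_like(r)
    m1 = r < 1.0
    m2 = (r >= 1.0) & (r < 2.0)
    phi[m1] = (3 - 2 * r[m1] + np.sqrt(1 + 4 * r[m1] - 4 * r[m1] ** 2)) / 8
    phi[m2] = (5 - 2 * r[m2] - np.sqrt(-7 + 12 * r[m2] - 4 * r[m2] ** 2)) / 8
    return phi

def interpolate(u_grid, x_particles, h=1.0):
    """Grid velocity -> particle velocity (IBM interpolation)."""
    xg = np.arange(len(u_grid)) * h
    w = delta_kernel((x_particles[:, None] - xg[None, :]) / h)
    return (w * u_grid[None, :]).sum(axis=1)

def spread(f_particles, x_particles, n_grid, h=1.0):
    """Particle force -> grid force density (IBM spreading)."""
    xg = np.arange(n_grid) * h
    w = delta_kernel((x_particles[:, None] - xg[None, :]) / h)
    return (w * f_particles[:, None]).sum(axis=0) / h

# Example usage on a toy periodic velocity field and three particles.
u_grid = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False))
xp = np.array([10.3, 20.7, 31.1])
u_p = interpolate(u_grid, xp)
f_grid = spread(np.ones_like(xp), xp, n_grid=64)
```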
+ +From the 3D simulations, it was found that cells in blood act as moderators of flow. The flow of blood was redistributed from high-flow-rate regions near the inlet to distant vessels with lower flow rates. Cell splitting behavior at bifurcations was found to be complex, depending on many factors such as flow rates, pressure differences, and geometric parameters of the daughter branches. From 1D simulations, the steady-state flow rate through the network was obtained 1) without any blockages and 2) for blockages in various vessels to assess the severity (i.e., change in flow velocity) and impact in different parts of the network. Several potential improvements to the 3D model were noted, as well as the need for efficient post-analysis and visualization tools to enable in situ visualization and analysis given the large volume of data generated.
+ +Inclusion of larger white blood cells was found to significantly increase the transit time of red blood cells through vessels. The simulation of flow under partial vessel blockage (e.g., stenosis) with cells showed that cells could oscillate and be trapped in an adjacent vessel due to the fluctuating flow. The best performing 1D reduced order model still resulted in large errors in both the number of red blood cells and flow rate for short vessels, and such models may be more suitable for networks with larger vessels.
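A 1D reduced-order model of this kind treats each vessel segment as a hydraulic resistance and solves a linear system for the nodal pressures, after which per-vessel flow rates (with or without a blockage, modeled as a raised resistance) follow directly. The sketch below sets up that nodal solve for a tiny made-up network in NumPy; the topology and resistance values are illustrative, not the patient-specific retina geometry.

```python
import numpy as np

# Minimal 1D network sketch: vessels as hydraulic resistances between nodes.
edges = [(0, 1), (1, 2), (1, 3), (2, 4), (3, 4)]   # (node_i, node_j), invented topology
R = np.array([1.0, 2.0, 2.0, 1.5, 1.5])            # hydraulic resistances (arbitrary units)

n_nodes = 5
G = np.zeros((n_nodes, n_nodes))                    # conductance (graph Laplacian) matrix
for (i, j), r in zip(edges, R):
    g = 1.0 / r
    G[i, i] += g; G[j, j] += g
    G[i, j] -= g; G[j, i] -= g

# Boundary conditions: fixed pressure at the inlet (node 0) and outlet (node 4).
fixed = {0: 10.0, 4: 0.0}
free = [n for n in range(n_nodes) if n not in fixed]

A = G[np.ix_(free, free)]
b = -G[np.ix_(free, list(fixed))] @ np.array(list(fixed.values()))
p = np.zeros(n_nodes)
p[free] = np.linalg.solve(A, b)
p[list(fixed)] = list(fixed.values())

# Per-vessel flow rates; "blocking" a vessel corresponds to raising its resistance.
Q = [(p[i] - p[j]) / r for (i, j), r in zip(edges, R)]
print(np.round(p, 3), np.round(Q, 3))
```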
+ Physics | Simulation
+ +Accretion flows around supermassive black holes at the centers of galaxies emit electromagnetic radiation that is critical to understanding these active galactic nuclei, which influence galactic evolution. Interpreting observed radiation, however, requires detailed modeling of the complex multiscale plasma processes in accretion flows. Using petascale 3D particle-in-cell (PIC) simulations, this project investigates electron versus ion energization, nonthermal particle acceleration, and self-consistent synchrotron radiation for plasma processes likely ubiquitous in black-hole accretion, including plasma turbulence driven by the magnetorotational instability (MRI) or other forces, and collisionless magnetic reconnection.
+ +The team has identified three key links in the chain of plasma processes that lead from gravitational attraction of matter around a black hole to accretion and radiation. The development of the MRI leads to outward angular momentum transport that allows accretion; it also generates turbulence and current sheets leading to magnetic reconnection, both of which result in particle energization and hence radiation.
+ +To perform simulations, the researchers deployed the Zeltron application on ALCF supercomputers. Zeltron is an explicit finite-difference time-domain, radiative electromagnetic PIC code that models relativistic, radiating, and rotating astrophysical plasmas from first principles. Zeltron can include the radiation reaction force (due to synchrotron and inverse Compton emission) in the particles’ equations of motion, and can simulate shearing-box boundary conditions appropriate for studying the MRI in black hole accretion disks.
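A PIC code advances particles and fields in a repeating cycle: deposit particle charge or current onto the grid, solve for the fields, then gather the fields back to the particles and push them (with an extra radiation-reaction drag term when synchrotron losses matter, as in Zeltron). The fragment below is a schematic one-dimensional electrostatic version of that cycle in NumPy, intended only to show the structure of the loop rather than Zeltron’s relativistic, radiative, electromagnetic algorithm.

```python
import numpy as np

def pic_step(x, v, q_over_m, n_grid, L, dt):
    """One schematic 1D electrostatic PIC cycle:
    deposit charge -> solve Poisson for E -> gather -> push (leapfrog)."""
    h = L / n_grid
    # 1) Deposit: nearest-grid-point charge assignment plus a neutralizing background.
    idx = (x / h).astype(int) % n_grid
    rho = np.bincount(idx, minlength=n_grid).astype(float)
    rho -= rho.mean()
    # 2) Field solve in Fourier space: phi_k = rho_k / k^2, so E_k = -i rho_k / k.
    k = 2 * np.pi * np.fft.fftfreq(n_grid, d=h)
    k_safe = np.where(k == 0, 1.0, k)
    E_k = np.where(k == 0, 0.0, -1j * np.fft.fft(rho) / k_safe)
    E = np.real(np.fft.ifft(E_k))
    # 3) Gather the field at each particle's cell and push (leapfrog update).
    v = v + q_over_m * E[idx] * dt
    x = (x + v * dt) % L
    return x, v

# Usage: initialize positions/velocities and call pic_step in a time loop, e.g.
# x, v = pic_step(x, v, q_over_m=-1.0, n_grid=128, L=1.0, dt=0.05)
```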
+ +As detailed in a paper published in The Astrophysical Journal, the researchers explored nonlinear development of MRI turbulence in a pair plasma, employing fully kinetic PIC simulations in two and three dimensions carried out on Theta. This included studying the axisymmetric MRI with 2D simulations, explaining how and why the 2D geometry produces results that differ substantially from 3D MHD expectations; and then performing the largest such 3D simulations carried out to date, for which the team employed a novel shearing-box approach, demonstrating that 3D PIC models can reproduce mesoscale MRI dynamics in sufficiently large runs. Using the fully kinetic simulations, the team was able to describe the nonthermal particle acceleration and angular-momentum transport driven by the collisionless MRI.
+ +The work takes a critical step toward understanding the behavior of black holes in the universe. The simulations of plasma processes and energy conversion mechanisms in black hole accretion flows will be used to inform global magnetohydrodynamics computational and theoretical modeling, thus accounting for kinetic processes to predict radiation output and enable comparison to observations. Moreover, these simulations have the potential to significantly advance computational plasma physics.
+ Energy Technologies | Simulation
+ +Westinghouse Electric Company is working with an international team to develop its next-generation high-capacity nuclear power plant based on lead-cooled fast reactor technology. Using ALCF supercomputers, researchers from Argonne National Laboratory are collaborating with the company to provide insights into the reactor’s flow physics and heat transfer mechanisms.
+ +Lead-cooled fast reactors are a type of nuclear reactor design that offer many advantages, including the ability to operate at higher thermal efficiencies than existing commercial light water reactors. Developing these advanced reactors poses challenges due to the unique characteristics of heavy liquid metal (HLM) coolants, such as a low Prandtl number (Pr) compared to water. Existing turbulence models are inadequate for accurately predicting heat transfer in HLM flows, making the selection of an appropriate turbulent Prandtl number (Prt) critical. Accurate modeling and simulation of heat transfer and mixing in the HLM coolant is needed to help prepare the technology for licensing.
+ +For this effort, the team performed large eddy simulations (LES) using the open-source Nek5000 code on the ALCF’s Theta system to study nuclear fuel rod bundles with HLM flows. LES do not require a Prt to model turbulence-driven heat transfer, and thus can be used as benchmarks for selecting a Prt in a less computationally expensive Reynolds-averaged Navier–Stokes (RANS) model, which requires this parameter.
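In a RANS closure the turbulent Prandtl number converts the modeled eddy viscosity into an eddy thermal diffusivity, alpha_t = nu_t / Prt, so the assumed Prt directly scales the modeled turbulent heat flux and hence quantities such as the Nusselt number. The snippet below simply illustrates that sensitivity with arbitrary example values; it is not drawn from the team’s Nek5000 or RANS setups.

```python
# Illustrative only: how the assumed turbulent Prandtl number changes the
# eddy thermal diffusivity a RANS closure would use for turbulent heat flux.
nu_t = 1.0e-5                      # example eddy viscosity, m^2/s (arbitrary)
for pr_t in (0.85, 1.5, 3.0):      # candidate turbulent Prandtl numbers
    alpha_t = nu_t / pr_t          # eddy thermal diffusivity, m^2/s
    print(f"Pr_t = {pr_t:4.2f} -> alpha_t = {alpha_t:.2e} m^2/s")
```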
+ +In a paper published in Nuclear Engineering and Design, the researchers showed that the selection of an appropriate Prt significantly impacts the accuracy of simulations for advanced nuclear reactors. By analyzing a prototypical lead-cooled fast reactor assembly with different Prt values, the team found that an inappropriate Prt can introduce errors in the Nusselt number (a measure of heat transfer) of up to 44 percent. They also compared detailed temperature distributions obtained by computationally expensive LES and less expensive RANS simulations to better understand the deviation introduced by the turbulence model. The analysis shows that the RANS model with Prt = 1.5 gives the best agreement with LES on the prediction of local temperature distribution and global Nusselt number.
+ +The team’s research is helping to enhance the understanding and modeling of heavy liquid metal flow behavior and heat transfer mechanisms for next-generation nuclear reactors. In addition, their study provides valuable high-fidelity reference data that can be used by the nuclear reactor research community to validate and calibrate less computationally expensive models.
+ +PI: Jie Liang, University of Illinois at Chicago
+HOURS: ALCF: 1,625,000 Node-Hours
PI: Philippe Sautet, University of California Los Angeles
+HOURS: ALCF: 2,200,000 Node-Hours
PI: Rao Kotamarthi, Argonne National Laboratory
+HOURS: ALCF: 1,600,000 Node-Hours
PI: Myoungkyu Lee, University of Alabama
+HOURS: ALCF: 700,000 Node-Hours
PI: Maninder Grover, University of Dayton Research Institute
+HOURS: ALCF: 1,650,000 Node-Hours
PI: Lian Duan, The Ohio State University
+HOURS: ALCF: 500,000 Node-Hours
PI: Aiichiro Nakano, University of Southern California
+HOURS: ALCF: 200,000 Node-Hours
PI: Choongseok Chang, Princeton Plasma Physics Laboratory
+HOURS: ALCF: 300,000 Node-Hours, OLCF: 2,500,000 Node-Hours
PI: Paul Kent, Oak Ridge National Laboratory
+HOURS: ALCF: 100,000 Node-Hours, OLCF: 1,000,000 Node-Hours
PI: Andre Schleife, University of Illinois at Urbana-Champaign
+HOURS: ALCF: 1,000,000 Node-Hours
PI: Chris Wolverton, Northwestern University
+HOURS: ALCF: 1,800,000 Node-Hours
PI: Gaute Hagen, Oak Ridge National Laboratory
+HOURS: ALCF: 2,500,000 Node-Hours, OLCF: 1,590,000 Node-Hours
PI: Peter Coveney, University College London
+HOURS: ALCF: 75,000 Node-Hours, OLCF: 520,000 Node-Hours
PI: Dmitri Uzdensky, University of Colorado
+HOURS: ALCF: 2,600,000 Node-Hours
PI: Michael Zingale, Stony Brook University
+HOURS: ALCF: 100,000 Node-Hours, OLCF: 700,000 Node-Hours
PI: James Stone, Institute for Advanced Study
+HOURS: ALCF: 110,000 Node-Hours, OLCF: 1,000,000 Node-Hours
PI: Paulo Alves, University of California Los Angeles
+HOURS: ALCF: 200,000 Node-Hours
PI: Jonathan Ozik, Argonne National Laboratory
+HOURS: ALCF: 160,000 Node-Hours, NERSC: 100,000 Node-Hours
PI: Ravi Madduri, Argonne National Laboratory
+HOURS: ALCF: 210,000 Node-Hours
PI: Wei Jiang, Argonne National Laboratory
+HOURS: ALCF: 710,000 Node-Hours
PI: Eugene DePrince, Florida State University
+HOURS: ALCF: 700,000 Node-Hours
PI: Dillon Shaver, Argonne National Laboratory
+HOURS: ALCF: 500,000 Node-Hours
PI: Ivan Oleynik , University of South Florida
+HOURS: ALCF: 500,000 Node-Hours, OLCF: 1,500,000 Node-Hours
PI: Yiqi Yu, Argonne National Laboratory
+HOURS: ALCF: 510,000 Node-Hours
PI: Igor Bolotnov, North Carolina State University
+HOURS: ALCF: 200,000 Node-Hours, NERSC: 300,000 Node-Hours
PI: Feliciano Giustino , The University of Texas at Austin
+HOURS: ALCF: 100,000 Node-Hours
PI: Giulia Galli , University of Chicago
+HOURS: ALCF: 600,000 Node-Hours, NERSC: 400,000 Node-Hours
PI: Aidan Thompson, Sandia National Laboratories
+HOURS: ALCF: 850,000 Node-Hours, OLCF: 500,000 Node-Hours, NERSC: 250,000 Node-Hours
PI: Christopher Kelly, Brookhaven National Laboratory
+HOURS: ALCF: 135,000 Node-Hours
PI: Frederico Fiuza, SLAC National Accelerator Laboratory
+HOURS: ALCF: 300,000 Node-Hours, NERSC: 150,000 Node-Hours
PI: Thomas Blum, University of Connecticut
+HOURS: ALCF: 5,000 Node-Hours, OLCF: 3,283,000 Node-Hours
PI: Dirk Hufnagel , Fermi National Accelerator Laboratory
+HOURS: ALCF: 70,000 Node-Hours
PI: Jonathan Ozik, Argonne National Laboratory
+HOURS: ALCF: 283,000 Node-Hours
PI: George Karniadakis, Brown University
+HOURS: ALCF: 50,000 Node-Hours, OLCF: 40,000 Node-Hours, NERSC: 60,000 Node-Hours
PI: Wei Jiang, Argonne National Laboratory
+HOURS: ALCF: 500,000 Node-Hours
PI: Paul Ullrich, University of California
+HOURS: ALCF: 900,000 Node-Hours, NERSC: 300,000 Node-Hours
PI: Dillon Shaver, Argonne National Laboratory
+HOURS: ALCF: 400,000 Node-Hours, OLCF: 400,000 Node-Hours, NERSC: 100,000 Node-Hours
PI: Yiqi Yu, Argonne National Laboratory
+HOURS: ALCF: 500,000 Node-Hours
PI: Sara Pryor, Cornell University
+HOURS: ALCF: 142,000 Node-Hours
PI: Ivan Oleynik, University of South Florida
+HOURS: ALCF: 150,000 Node-Hours
PI: Emilian Popov, Oak Ridge National Laboratory
+HOURS: ALCF: 224,000 Node-Hours
PI: Feliciano Giustino, University of Texas
+HOURS: ALCF: 883,000 Node-Hours
PI: Zarija Lukić, Lawrence Berkeley National Laboratory
+HOURS: ALCF: 100,000 Node-Hours, OLCF: 50,000 Node-Hours, NERSC: 50,000 Node-Hours
PI: Frederico Fiuza, SLAC National Accelerator Laboratory
+HOURS: ALCF: 860,000 Node-Hours
PI: Steven Gottlieb, Indiana University
+HOURS: ALCF: 100,000 Node-Hours, OLCF: 1,000,000 Node-Hours, NERSC: 100,000 Node-Hours
PI: Jaeyoung Park, TAE Technologies, Inc.
+HOURS: ALCF: 400,000 Node-Hours
PI: Noemi Rocco, Fermi National Accelerator Laboratory
+HOURS: ALCF: 730,000 Node-Hours
+ +PI: Robert Edwards, Jefferson Laboratory
+HOURS: ALCF: 300,000 Node-Hours
+ +PI: William Tang, Princeton Plasma Physics Laboratory
+ +PI: Salman Habib, Argonne National Laboratory
+ +PI: Kenneth Jansen, University of Colorado Boulder
+ +PI: Nicola Ferrier, Argonne National Laboratory
+ +PI: David Bross, Argonne National Laboratory
+ +PI: Anouar Benali, Argonne National Laboratory
+ +PI: Katrin Heitmann, Argonne National Laboratory
+ +PI: Amanda Randles, Duke University
+ +PI: Kenneth Jansen, University of Colorado Boulder
+ +PI: C.S. Chang, Princeton Plasma Physics Laboratory
+ +PI: William Detmold, Massachusetts Institute of Technology
+ +PI: Noa Marom, Carnegie Mellon University
+ +PI: Theresa Windus, Iowa State University and Ames Laboratory
+ +PI: Walter Hopkins, Argonne National Laboratory
+ +PI: Rick Stevens, Argonne National Laboratory
+ +The following list provides a sampling of the many Director’s Discretionary projects at the ALCF.
+ +PI: Ravi Madduri, Argonne National Laboratory
+ +PI: Jie Liang, University of Illinois at Chicago
+ +PI: Arvind Ramanathan, Argonne National Laboratory
+ +PI: Phay Ho, Argonne National Laboratory
+ +PI: Rafael Vescovi, Argonne National Laboratory
+ +PI: Nicholas Schwarz, Argonne National Laboratory
+ +PI: Jacqueline Cole, University of Cambridge
+ +PI: Murali Emani, Argonne National Laboratory
+ +PI: Joshua New, Oak Ridge National Laboratory
+ +PI: Sicong Wu, Northwestern University
+ +PI: Yiqi Yu, Argonne National Laboratory
+ +PI: Igor A. Bolotnov, North Carolina State University
+ +PI: Parisa Mirbod, University of Illinois at Chicago
+ +PI: John J. Low, Argonne National Laboratory
+ +PI: Trevor David Rhone, Rensselaer Polytechnic Institute
+ +PI: Adrian Giuseppe Del Maestro, University of Tennessee
+ +PI: Alessandro Lovato, Argonne National Laboratory
+ +PI: Eliu Huerta, Argonne National Laboratory
+ + +Andrusenko, I., C. L. Hall, E. Mugnaioli, J. Potticary, S. R. Hall, W. Schmidt, S. Gao, K. Zhao, N. Marom, and M. Gemmi. “True Molecular Conformation and Structure Determination by Three-Dimensional Electron Diffraction of PAH By-Products Potentially Useful for Electronic Applications,” IUCrJ (January 2023), International Union of Crystallography. doi: 10.1107/s205225252201154x
+ +Ashley, W. S., A. M. Haberlie, and V. A. Gensini. “The Future of Supercells in the United States,” Bulletin of the American Meteorological Society (January 2023), American Meteorological Society. doi: 10.1175/BAMS-D-22-0027.1
+ +Babu, A. V., T. Bicer, S. Kandel, T. Zhou, D. J. Ching, S. Henke, S. Veseli, R. Chard, A. Miceli, and M. J. Cherukara. “AI-Assisted Automated Workflow for Real-Time X-ray Ptychography Data Analysis via Federated Resources,” Electronic Imaging (January 2023), Society for Imaging Science and Technology. doi: 10.2352/ei.2023.35.11.hpci-232
+ +Bagusetty, A., A. Panyala, G. Brown, and J. Jirk. “Towards Cross-Platform Portability of Coupled-Cluster Methods with Perturbative Triples using SYCL,” 2022 IEEE/ACM International Workshop on Performance, Portability and Productivity in HPC (P3HPC) (January 2023), Dallas, TX, IEEE. doi: 10.1109/p3hpc56579.2022.00013
+ +Berquist, W., D. Lykov, M. Liu, and Y. Alexeev. “Stochastic Approach for Simulating Quantum Noise Using Tensor Networks,” 2022 IEEE/ACM Third International Workshop on Quantum Computing Software (QCS) (January 2023), Dallas, TX, IEEE. doi: 10.1109/QCS56647.2022.00018
+ +Bieniek, M. K., A. D. Wade, A. P. Bhati, S. Wan, and P. V. Coveney. “TIES 2.0: A Dual-Topology Open Source Relative Binding Free Energy Builder with Web Portal,” Journal of Chemical Information and Modeling (January 2023), ACS Publications. doi: 10.1021/acs.jcim.2c01596
+ +Ceccarelli, L., A. Gnech, L. E. Marcucci, M. Piarulli, and M. Viviani. “Muon Capture on Deuteron Using Local Chiral Potentials,” Frontiers in Physics (January 2023), Frontiers Media SA. doi: 10.3389/fphy.2022.1049919
+ +Chard, R., J. Pruyne, K. McKee, J. Bryan, B. Raumann, R. Ananthakrishnan, K. Chard, and I. T. Foster. “Globus Automation Services: Research Process Automation across the Space-Time Continuum,” Future Generation Computer Systems (January 2023), Elsevier. doi: 10.1016/j.future.2023.01.010
+ +Ćiprijanović, A., A. Lewis, K. Pedro, S. Madireddy, B. Nord, G. N. Perdue, and S. Wild. “Semi-Supervised Domain Adaptation for Cross-Survey Galaxy Morphology Classification and Anomaly Detection,” Machine Learning and the Physical Sciences Workshop, NeurIPS 2022 (January 2023), US DOE. doi: 10.2172/1915406
+ +Frontiere, N., J. D. Emberson, M. Buehlmann, J. Adamo, S. Habib, K. Heitmann, and C.-A. Faucher-Giguère. “Simulating Hydrodynamics in Cosmology with CRK-HACC,” The Astrophysical Journal (January 2023), IOP Publishing. doi: 10.3847/1538-4365/aca58d
+ +King, G. B., A. Baroni, V. Cirigliano, S. Gandolfi, L. Hayden, E. Mereghetti, S. Pastore, and M. Piarulli. “Ab Initio Calculation of the β-Decay Spectrum of 6He,” Physical Review C (January 2023), APS. doi: 10.1103/PhysRevC.107.015503
+ +Luo, Y., P. Doak, and P. Kent. “A High-Performance Design for Hierarchical Parallelism in the Qmcpack Monte Carlo Code,” 2022 IEEE/ACM International Workshop on Hierarchical Parallelism for Exascale Computing (HiPar) (January 2023), IEEE. doi: 10.1109/HiPar56574.2022.00008
+ +Nastac, G., A. Walden, L. Wang, E. J. Nielsen, Y. Liu, M. Opgenorth, J. Orender, and M. Zubair. “A Multi-Architecture Approach for Implicit Computational Fluid Dynamics on Unstructured Grids,” AIAA SCITECH 2023 Forum (January 2023), National Harbor, MD, AIAA. doi: 10.2514/6.2023-1226
+ +Nealey, I., N. Ferrier, J. Insley, V. A. Mateevitsi, M. E. Papka, and S. Rizzi. “Cinema Transfer: A Containerized Visualization Workflow,” ISC High Performance 2022: High Performance Computing. ISC High Performance 2022 International Workshops (January 2023), Springer Nature, pp. 324-343. doi: 10.1007/978-3-031-23220-6_23
+ +Nikitin, V. “TomocuPy – Efficient GPU-Based Tomographic Reconstruction with Asynchronous Data Processing,” Journal of Synchrotron Radiation (January 2023), vol. 30, International Union of Crystallography, pp. 179-191. doi: 10.1107/s1600577522010311
+ +Novario, S. J., D. Lonardoni, S. Gandolfi, and G. Hagen. “Trends of Neutron Skins and Radii of Mirror Nuclei from First Principles,” Physical Review Letters (January 2023), APS. doi: 10.1103/PhysRevLett.130.032501
+ +Piarulli, M., S. Pastore, R. B. Wiringa, S. Brusilow, and R. Lim. “Densities and Momentum Distributions in A ≤ 12 Nuclei from Chiral Effective Field Theory Interactions,” Physical Review C (January 2023), APS. doi: 10.1103/PhysRevC.107.014314
+ +Ramesh, P. S., and T. K. Patra. “Polymer Sequence Design via Molecular Simulation-Based Active Learning,” Soft Matter (January 2023), vol. 19, no. 2, Royal Society of Chemistry, pp. 282-294. doi: 10.1039/d2sm01193j
+ +Smith, S., E. Belli, O. Meneghini, R. Budiardja, D. Schissel, J. Candy, T. Neiser, and A. Eubanks. “A Vision for Coupling Operation of US Fusion Facilities with HPC Systems and the Implications for Workflows and Data Management,” SMC 2022: Accelerating Science and Engineering Discoveries Through Integrated Research Infrastructure for Experiment, Big Data, Modeling and Simulation (January 2023), Springer Nature, pp. 87-100. doi: 10.1007/978-3-031-23606-8_6
+ +Su, Z., S. Di, A. M. Gok, Y. Cheng, and F. Cappello. “Understanding Impact of Lossy Compression on Derivative-Related Metrics in Scientific Datasets,” 2022 IEEE/ACM 8th International Workshop on Data Analysis and Reduction for Big Scientific Data (DRBSD) (January 2023), Dallas, TX, IEEE. doi: 10.1109/DRBSD56682.2022.00011
+ +Tavakol, M., J. Liu, S. E. Hoff, C. Zhu, and H. Heinz. “Osteocalcin: Promoter or Inhibitor of Hydroxyapatite Growth?,” Langmuir (January 2023), ACS. doi: 10.1021/acs.langmuir.3c02948
+ +Thiyagalingam, J., G. von Laszewski, J. Yin, M. Emani, J. Papay, G. Barrett, P. Luszczek, A. Tsaris, C. Kirkpatrick, F. Wang, T. Gibbs, V. Vishwanath, M. Shankar, G. Fox, and T. Hey. “AI Benchmarking for Science: Efforts from the MLCommons Science Working Group,” High Performance Computing. ISC High Performance 2022 International Workshops (January 2023), Springer Nature. doi: 10.1007/978-3-031-23220-6_4
+ +Valetov, E. “Beamline Design and Optimisation for High Intensity Muon Beams at PSI,” Journal of Physics: Conference Series (January 2023), vol. 2420, Bangkok, Thailand, IOP Publishing. doi: 10.1088/1742-6596/2420/1/012053
+ +Yu, Y., E. Shemon, and E. Merzari. “LES Simulation on Heavy Liquid Metal Flow in a Bare Rod Bundle for Assessment of Turbulent Prandtl Number,” Nuclear Engineering and Design (January 2023), Elsevier. doi: 10.1016/j.nucengdes.2023.112175
+ +Wang, T., and A. Burrows. “Effects of Different Closure Choices in Core-collapse Supernova Simulations,” The Astrophysical Journal (January 2023), IOP Publishing. doi: 10.3847/1538-4357/aca75c
+ +Zhu, H., and N. Y. Gnedin. “Cosmic Reionization on Computers: Baryonic Effects on Halo Concentrations during the Epoch of Reionization,” The Astrophysical Journal (January 2023), IOP Publishing. doi: 10.3847/1538-4357/aca1b3
+ +Banik, S., D. Dhabal, H. Chan, S. Manna, M. Cherukara, V. Molinero, and S. K. R. S. Sankaranarayanan. “CEGANN: Crystal Edge Graph Attention Neural Network for Multiscale Classification of Materials Environment,” npj Computational Materials (February 2023), Springer Nature. doi: 10.1038/s41524-023-00975-z
+ +Clyde, A., X. Liu, T. Brettin, H. Yoo, A. Partin, Y. Babuji, B. Blaiszik, J. Mohd-Yusof, A. Merzky, M. Turilli, S. Jha, A. Ramanathan, and R. Stevens. “AI-Accelerated Protein-Ligand Docking for SARS-CoV-2 Is 100-Fold Faster with No Significant Change in Detection,” Scientific Reports (February 2023), Springer Nature. doi: 10.1038/s41598-023-28785-9
+ +Jaysaval, P., G. Hammond, and T. C. Johnson. “Massively Parallel Modeling and Inversion of Electrical Resistivity Tomography Data Using PFLOTRAN,” Geoscientific Model Development (February 2023), European Geosciences Union. doi: 10.5194/gmd-2022-66
+ +Musaelian, A., S. Batzner, A. Johansson, L. Sun, C. J. Owen, M. Kornbluth, and B. Kozinsky. “Learning Local Equivariant Representations for Large-Scale Atomistic Dynamics,” Nature Communications (February 2023), Springer Nature. doi: 10.1038/s41467-023-36329-y
+ +Raskar, S., T. Applencourt, K. Kumaran, and G. Gao. “Towards Maximum Throughput of Dataflow Software Pipeline under Resource Constraints,” PMAM’23: Proceedings of the 14th International Workshop on Programming Models and Applications for Multicores and Manycores (February 2023), ACM, pp. 20-28. doi: 10.1145/3582514.3582521
+ +Rigo, M., B. Hall, M. Hjorth-Jensen, A. Lovato, and F. Pederiva. “Solving the Nuclear Pairing Model with Neural Network Quantum States,” Physical Review E (February 2023), APS. doi: 10.1103/PhysRevE.107.025310
+ +Shao, X., C. Zhu, P. Kumar, Y. Wang, J. Lu, M. Cha, L. Yao, Y. Cao, X. Mao, H. Heinz, and N. A. Kotov. “Voltage-Modulated Untwist Deformations and Multispectral Optical Effects from Ion Intercalation into Chiral Ceramic Nanoparticles,” Advanced Materials (February 2023), John Wiley and Sons. doi: 10.1002/adma.202206956
+ +Vallejo, J. L. G., G. M. Tow, E. J. Maginn, B. Q. Pham, D. Datta, and M. S. Gordon. “Quantum Chemical Modeling of Propellant Degradation,” The Journal of Physical Chemistry A (February 2023), ACS Publications. doi: 10.1021/acs.jpca.2c08722
+ +Allcroft, S., M. Metwaly, Z. Berg, I. Ghodgaonakar, F. Bordwell, X. Zhao, X. Liu, J. Xu, S. Chakraborty, V. Banna, A. Chinnakotla, A. Goel, C. Tung, G. Kao, W. Zakharov, D. A. Shoham, G. K. Thiruvathukal, and Y.-H. Lu. “Observing Human Mobility Internationally During COVID-19,” Computer (March 2023), vol. 56, IEEE, pp. 59-69. doi: 10.1109/MC.2022.3175751
+ +Chitty-Venkata, K. T., M. Emani, V. Vishwanath, and A. K. Somani. “Neural Architecture Search Benchmarks: Insights and Survey,” IEEE Access (March 2023), IEEE. doi: 10.1109/ACCESS.2023.3253818
+ +Condon, L. E., A. Farley, S. Jourdain, P. O’Leary, P. Avery, L. Gallagher, C. Chennault, and R. M. Maxwell. “ParFlow Sand Tank: A Tool for Groundwater Exploration,” The Journal of Open Source Education (March 2023), Open Source Initiative. doi: 10.21105/jose.00179
+ +Dorier, M., Z. Wang, S. Ramesh, U. Ayachit, S. Snyder, R. Ross, and M. Parashar. “Towards Elastic In Situ Analysis for High-Performance Computing Simulations,” Journal of Parallel and Distributed Computing (March 2023), Elsevier. doi: 10.1016/j.jpdc.2023.02.014
+ +Hausen, R., B. E. Robertson, H. Zhu, N. Y. Gnedin, P. Madau, E. E. Schneider, B. Villasenor, and N. E. Drakos. “Revealing the Galaxy–Halo Connection through Machine Learning,” The Astrophysical Journal Letters (March 2023), IOP Publishing. doi: 10.3847/1538-4357/acb25c
+ +Huang, B., O. Anatole von Lilienfeld, J. T. Krogel, and A. Benali. “Toward DMC Accuracy Across Chemical Space with Scalable Δ-QML,” Journal of Chemical Theory and Computation (March 2023), ACS Publications. doi: 10.1021/acs.jctc.2c01058
+ +Huang, J., Y.-F. Jiang, H. Feng, S. W. Davis, J. M. Stone, and M. J. Middleton. “Global 3D Radiation Magnetohydrodynamic Simulations of Accretion onto a Stellar-Mass Black Hole at Sub- and Near-Critical Accretion Rates,” The Astrophysical Journal (March 2023), IOP Publishing. doi: 10.3847/1538-4357/acb6fc
+ +Joshi, A. V., S. G. Rosofsky, R. Haas, and E. A. Huerta. “Numerical Relativity Higher Order Gravitational Waveforms of Eccentric, Spinning, Nonprecessing Binary Black Hole Mergers,” Physical Review D (March 2023), APS. doi: 10.1103/PhysRevD.107.064038
+ +Kumari, S., T. Masubuchi, H. S. White, A. Alexandrova, S. L. Anderson, and P. Sautet. “Electrocatalytic Hydrogen Evolution at Full Atomic Utilization over ITO-Supported Sub-nano-Ptn Clusters: High, Size-Dependent Activity Controlled by Fluxional Pt Hydride Species,” Journal of the American Chemical Society (March 2023), ACS Publications. doi: 10.1021/jacs.2c13063
+ +Kumari, S., and P. Sautet. “Elucidation of the Active Site for the Oxygen Evolution Reaction on a Single Pt Atom Supported on Indium Tin Oxide,” The Journal of Physical Chemistry Letters (March 2023), ACS. doi: 10.1021/acs.jpclett.3c00160
+ +Li, H., A. Hu, and G. A. Meehl. “Role of Tropical Cyclones in Determining ENSO Characteristics,” Geophysical Research Letters (March 2023), American Geophysical Union. doi: 10.1029/2022gl101814
+ +Maris, P., H. Le, A. Nogga, R. Roth, and J. P. Vary. “Uncertainties in Ab Initio Nuclear Structure Calculations with Chiral Interactions,” Frontiers in Physics (March 2023), Frontiers Media SA. doi: 10.3389/fphy.2023.1098262
+ +Nelson, J., T. K. Stanev, D. Lebedev, T. LaMountain, J. Tyler Gish, H. Zeng, H. Shin, O. Heinonen, K. Watanabe, T. Taniguchi, M. C. Hersam, and N. P. Stern. “Layer-Dependent Optically Induced Spin Polarization in InSe,” Physical Review B (March 2023), APS. doi: 10.1103/PhysRevB.107.115304
+ +Pal, S., J. Wang, J. Feinstein, E. Yan, and V. R. Kotamarthi. “Projected Changes in Extreme Streamflow and Inland Flooding in the Mid-21st Century over Northeastern United States Using Ensemble WRF-Hydro Simulations,” Journal of Hydrology: Regional Studies (March 2023), Elsevier. doi: 10.1016/j.ejrh.2023.101371
+ +Seyitliyev, D., X. Qin, M. K. Jana, S. M. Janke, X. Zhong, W. You, D. B. Mitzi, V. Blum, and K. Gundogdu. “Coherent Phonon-Induced Modulation of Charge Transfer in 2D Hybrid Perovskites,” Advanced Functional Materials (March 2023), John Wiley and Sons. doi: 10.1002/adfm.202213021
+ +Sharma, H., M. Shrivastava, and B. Singh. “Physics Informed Deep Neural Network Embedded in a Chemical Transport Model for the Amazon Rainforest,” npj Climate and Atmospheric Science (March 2023), Springer Nature. doi: 10.1038/s41612-023-00353-y
+ +Shen, K., S. Kumari, Y.-C. Huang, J. Jang, P. Sautet, and C. G. Morales-Guio. “Electrochemical Oxidation of Methane to Methanol on Electrodeposited Transition Metal Oxides,” Journal of the American Chemical Society (March 2023), ACS Publications. doi: 10.1021/jacs.3c00441
+ +Shepard, C., D. C. Yost, and Y. Kanai. “Electronic Excitation Response of DNA to High-Energy Proton Radiation in Water,” Physical Review Letters (March 2023), APS. doi: 10.1103/PhysRevLett.130.118401
+ +Van den Puttelaar, R., R. G. S. Meester, E. E. P. Peterse, A. G. Zauber, J. Zheng, R. B. Hayes, Y.-R. Su, J. K. Lee, M. Thomas, L. C. Sakoda, Y. Li, D. A. Corley, U. Peters, L. Hsu, and I. Lansdorp-Vogelaar. “Risk-Stratified Screening for Colorectal Cancer Using Genetic and Environmental Risk Factors: A Cost-Effectiveness Analysis Based on Real-World Data,” Clinical Gastroenterology and Hepatology (March 2023), Elsevier. doi: 10.1016/j.cgh.2023.03.003
+ +Zhang, Z., T. Masubuchi, P. Sautet, S. L. Anderson, and A. N. Alexandrova. “Hydrogen Evolution on Electrode-Supported Ptn Clusters: Ensemble of Hydride States Governs the Size Dependent Reactivity,” Angewandte Chemie (March 2023), John Wiley and Sons. doi: 10.1002/anie.202218210
+ +Applencourt, T., B. Videau, J. Le Quellec, A. Dufek, K. Harms, N. Liber, B. Allen, and A. Belton-Schure. “Standardizing Complex Numbers in SYCL,” IWOCL ‘23: Proceedings of the 2023 International Workshop on OpenCL (April 2023), ACM, pp. 1-6. doi: 10.1145/3585341.3585343
+ +Clarke, R. W., T. Sandmeier, K. A. Franklin, D. Reich, X. Zhang, N. Vengallur, T. K. Patra, R. J. Tannenbaum, S. Adhikari, S. K. Kumar, T. Rovis, and E. Y.-X. Chen. “Dynamic Crosslinking Compatibilizes Immiscible Mixed Plastics,” Nature (April 2023), Springer Nature. doi: 10.1038/s41586-023-05858-3
+ +Fedorov, D. G., and B. Q. Pham. “Multi-Level Parallelization of Quantum-Chemical Calculations,” The Journal of Chemical Physics (April 2023), AIP. doi: 10.1063/5.0144917
+ +Fragola, N. R., B. M. Brems, M. Mukherjee, M. Cui, and R. G. Booth. “Conformationally Selective 2-Aminotetralin Ligands Targeting the alpha2A- and alpha2C-Adrenergic Receptors,” ACS Chemical Neuroscience (April 2023), ACS. doi: 10.1021/acschemneuro.3c00148
+ +Gao, X., A. D. Hanlon, J. Holligan, N. Karthik, S. Mukherjee, P. Petreczky, S. Syritsyn, and Y. Zhao. “Unpolarized Proton PDF at NNLO from Lattice QCD with Physical Quark Masses,” Physical Review D (April 2023), APS. doi: 10.1103/PhysRevD.107.074509
+ +Holford, J. J., M. Lee, and Y. Hwang. “Optimal White-Noise Stochastic Forcing for Linear Models of Turbulent Channel Flow,” Journal of Fluid Mechanics (April 2023), Cambridge University Press. doi: 10.1017/jfm.2023.234
+ +Ichibha, T., K. Saritas, J. T. Krogel, Y. Luo, P. R. C. Kent, and F. A. Reboredo. “Existence of La-site Antisite Defects in LaMO3 (M=Mn, Fe, and Co) Predicted with Many-Body Diffusion Quantum Monte Carlo,” Scientific Reports (April 2023), Springer Nature. doi: 10.1038/s41598-023-33578-1
+ +Lavroff, R. H., J. Wang, M. G. White, P. Sautet, and A. N. Alexandrova. “Mechanism of Stoichiometrically Governed Titanium Oxide Brownian Tree Formation on Stepped Au(111),” The Journal of Physical Chemistry C (April 2023), ACS. doi: 10.1021/acs.jpcc.3c00715
+ +Li, K., and D. Qi. “Molecular Dynamics Simulation of Mechanical Properties of Carbon Nanotube Reinforced Cellulose,” Journal of Molecular Modeling (April 2023), Springer Nature. doi: 10.1007/s00894-023-05542-3
+ +Lytle, A., C. DeTar, A. X. El-Khadra, E. Gámiz, S. Gottlieb, W. Jay, A. Kronfeld, J. N. Simone, and A. Vaquero. “B-meson Semileptonic Decays with Highly Improved Staggered Quarks,” The 39th International Symposium on Lattice Field Theory (LATTICE2022) (April 2023), Sissa Medialab. doi: 10.22323/1.430.0418
+ +Matthews, B., J. Hall, M. Batty, S. Blainey, N. Cassidy, R. Choudhary, D. Coca, S. Hallett, J. J. Harou, P. James, N. Lomax, P. Oliver, A. Sivakumar, T. Tryfonas, and L. Varga. “DAFNI: A Computational Platform to Support Infrastructure Systems Research,” Proceedings of the Institution of Civil Engineers - Smart Infrastructure and Construction (April 2023), Emerald Publishing Limited. doi: 10.1680/jsmic.22.00007
+ +Pham, B. Q., L. Carrington, A. Tiwari, S. S. Leang, M. Alkan, C. Bertoni, D. Datta, T. Sattasathuchana, P. Xu, and M. S. Gordon. “Porting Fragmentation Methods to GPUs Using an OpenMP API: Offloading the Resolution-of-the-Identity Second-Order Møller-Plesset Perturbation Method,” The Journal of Chemical Physics (April 2023), AIP Publishing. doi: 10.1063/5.0143424
+ +Rhone, T. D., R. Bhattarai, H. Gavras, B. Lusch, M. Salim, M. Mattheakis, D. T. Larson, Y. Krockenberger, and E. Kaxiras. “Artificial Intelligence Guided Studies of van der Waals Magnets,” Advanced Theory and Simulations (April 2023), John Wiley and Sons. doi: 10.1002/adts.202300019
+ +Yang, L., R. Jaramillo, R. K. Kalia, A. Nakano, and P. Vashishta. “Pressure-Controlled Layer-by-Layer to Continuous Oxidation of ZrS2(001) Surface,” ACS Nano (April 2023), ACS. doi: 10.1021/acsnano.2c12724
+ +Yang, T. T., and W. A. Saidi. “Simple Approach for Reconciling Cyclic Voltammetry with Hydrogen Adsorption Energy for Hydrogen Evolution Exchange Current,” The Journal of Physical Chemistry Letters (April 2023), ACS. doi: 10.1021/acs.jpclett.3c00534
+ +Bazavov, A., C. DeTar, A. X. El-Khadra, E. Gámiz, Z. Gelzer, S. Gottlieb, W. I. Jay, H. Jeong, A. S. Kronfeld, R. Li, A. T. Lytle, P. B. Mackenzie, E. T. Neil, T. Primer, J. N. Simone, R. L. Sugar, D. Toussaint, R. S. Van de Water, and A. Vaquero. “D-meson Semileptonic Decays to Pseudoscalars from Four-Flavor Lattice QCD,” Physical Review D (May 2023), APS. doi: 10.1103/PhysRevD.107.094516
+ +Bhati, A. P., A. Hoti, A. Potterton, M. K. Bieniek, and P. V. Coveney. “Long Time Scale Ensemble Methods in Molecular Dynamics: Ligand–Protein Interactions and Allostery in SARS-CoV-2 Targets,” Journal of Chemical Theory and Computation (May 2023), ACS. doi: 10.1021/acs.jctc.3c00020
+ +Fearick, R. W., P. von Neumann-Cosel, S. Bacca, J. Birkhan, F. Bonaiti, I. Brandherm, G. Hagen, H. Matsubara, W. Nazarewicz, N. Pietralla, V. Y. Ponomarev, P.-G. Reinhard, X. Roca-Maza, A. Richter, A. Schwenk, J. Simonis, and A. Tamii. “Electric Dipole Polarizability of 40Ca,” Physical Review Research (May 2023), APS. doi: 10.1103/PhysRevResearch.5.L022044
+ +Ibayashi, H., T. M. Razakh, L. Yang, T. Linker, M. Olguin, S. Hattori, Y. Luo, R. K. Kalia, A. Nakano, K. Nomura, and P. Vashishta. “Allegro-Legato: Scalable, Fast, and Robust Neural-Network Quantum Molecular Dynamics via Sharpness-Aware Minimization,” ISC High Performance 2023: High Performance Computing (May 2023), Springer Link, pp. 223-239. doi: 10.1007/978-3-031-32041-5_12
+ +Kang, S., and E. M. Constantinescu. “Learning Subgrid-Scale Models with Neural Ordinary Differential Equations,” Computers and Fluids (May 2023), Elsevier. doi: 10.1016/j.compfluid.2023.105919
+ +Nascimento de Lima, P., R. van den Puttelaar, A. I. Hahn, M. Harlass, N. Collier, J. Ozik, A. G. Zauber, I. Lansdorp-Vogelaar, and C. M. Rutter. “Projected Long-Term Effects of Colorectal Cancer Screening Disruptions Following the COVID-19 Pandemic,” eLife (May 2023), eLife Sciences. doi: 10.7554/eLife.85264
+ +Ramos-Valle, A. N., A. F. Prein, M. Ge, D. Wang, and S. E. Giangrande. “Grid Spacing Sensitivities of Simulated Mid-Latitude and Tropical Mesoscale Convective Systems in the Convective Gray Zone,” Journal of Geophysical Research: Atmospheres (May 2023), American Geophysical Union. doi: 10.1029/2022jd037043
+ +Rosofsky, S. G., H. Al Majed, and E. A. Huerta. “Applications of Physics Informed Neural Operators,” Machine Learning: Science and Technology (May 2023), IOP. doi: 10.1088/2632-2153/acd168
+ +Shen, S., S. Elhatisari, T. A. Lähde, D. Lee, B.-N. Lu, and U.-G. Meißner. “Emergent Geometry and Duality in the Carbon Nucleus,” Nature Communications (May 2023), Springer Nature. doi: 10.1038/s41467-023-38391-y
+ +Su, Q., J. Larson, T. N. Dalichaouch, F. Li, W. An, L. Hildebrand, Y. Zhao, V. Decyk, P. Alves, S. M. Wild, and W. B. Mori. “Optimization of Transformer Ratio and Beam Loading in a Plasma Wakefield Accelerator with a Structure-Exploiting Algorithm,” Physics of Plasmas (May 2023), AIP Publishing. doi: 10.1063/5.0142940
+ +Swaminathan, B., J. Kang, K. Vaidya, A. Srinivasan, P. Kumar, S. Byna, and D. Barbarash. “Crowd Cluster Data in the USA for Analysis of Human Response to COVID-19 Events and Policies,” Scientific Data (May 2023), Springer Nature. doi: 10.1038/s41597-023-02176-1
+ +Vartanyan, D., A. Burrows, T. Wang, M. S. B. Coleman, and C. J. White. “Gravitational-Wave Signature of Core-Collapse Supernovae,” Physical Review D (May 2023), APS. doi: 10.1103/PhysRevD.107.103015
+ +Afle, C., S. K. Kundu, J. Cammerino, E. R. Coughlin, D. A. Brown, D. Vartanyan, and A. Burrows. “Measuring the Properties of f−Mode Oscillations of a Protoneutron Star by Third-Generation Gravitational-Wave Detectors,” Physical Review D (June 2023), APS. doi: 10.1103/PhysRevD.107.123005
+ +Anaya, J. J., A. Tropina, R. Miles, and M. Grover. “Refractive Index of Diatomic Species for Nonequilibrium Flows,” AIAA AVIATION 2023 Forum (June 2023), San Diego, CA, AIAA. doi: 10.2514/6.2023-3478
+ +Ayush, K., A. Seth, and T. K. Patra. “nanoNET: Machine Learning Platform for Predicting Nanoparticles Distribution in a Polymer Matrix,” Soft Matter (June 2023), Royal Society of Chemistry. doi: 10.1039/d3sm00567d
+ +Bazavov, A., C. Davies, C. DeTar, A. X. El-Khadra, E. Gámiz, S. Gottlieb, W. I. Jay, H. Jeong, A. S. Kronfeld, S. Lahert, G. P. Lepage, M. Lynch, A. T. Lytle, P. B. Mackenzie, C. McNeile, E. T. Neil, C. T. Peterson, C. Ray, J. N. Simone, R. S. Van de Water, and A. Vaquero. “Light-Quark Connected Intermediate-Window Contributions to the Muon g − 2 Hadronic Vacuum Polarization from Lattice QCD,” Physical Review D (June 2023), APS. doi: 10.1103/physrevd.107.114514
+ +Bera, M., Q. Zhang, X. Zuo, W. Bu, J. Strzalka, S. Weigand, J. Ilavsky, E. Dufresne, S. Narayanan, and B. Lee. “Opportunities of Soft Materials Research at Advanced Photon Source,” Synchrotron Radiation News (June 2023), Informa UK Limited. doi: 10.1080/08940886.2023.2204096
+ +Chen, J., R. G. Edwards, and W. Mao. “Graph Contractions for Calculating Correlation Functions in Lattice QCD,” PASC ‘23: Proceedings of the Platform for Advanced Scientific Computing Conference (June 2023), ACM. doi: 10.1145/3592979.3593409
+ +Cruz-Camacho, E., K. A. Brown, X. Wang, X. Xu, K. Shu, Z. Lan, R. B. Ross, and C. D. Carothers. “Hybrid PDES Simulation of HPC Networks Using Zombie Packets,” SIGSIM-PADS ‘23: Proceedings of the 2023 ACM SIGSIM Conference on Principles of Advanced Discrete Simulation (June 2023), ACM. doi: 10.1145/3573900.3591122
+ +Giraud, S., J. C. Zamora, R. G. T. Zegers, D. Bazin, Y. Ayyad, S. Bacca, S. Beceiro-Novo, B. A. Brown, A. Carls, J. Chen, M. Cortesi, M. DeNudt, G. Hagen, C. Hultquist, C. Maher, W. Mittig, F. Ndayisabye, S. Noji, S. J. Novario, J. Peneira, Z. Rahman, J. Schmitt, M. Serikow, L. J. Sun, J. Surbrook, N. Watwood, and T. Wheeler. “β+ Gamow-Teller Strengths from Unstable 14O via the (d, 2He) Reaction in Inverse Kinematics,” Physical Review Letters (June 2023), APS. doi: 10.1103/PhysRevLett.130.232301
+ +Grover, M. S., P. Valentini, N. J. Bisek, and A. M. Verhoff. “First Principle Simulation of CUBRC Double Cone Experiments,” AIAA AVIATION 2023 Forum (June 2023), San Diego, CA, AIAA. doi: 10.2514/6.2023-3735
+ +Hamilton, A., J.-M. Qiu, and H. Zhang. “Scalable Riemann Solvers with the Discontinuous Galerkin Method for Hyperbolic Network Simulation,” PASC ‘23: Proceedings of the Platform for Advanced Scientific Computing Conference (June 2023), ACM, pp. 1-10. doi: 10.1145/3592979.3593421
+ +Kale, B., A. Clyde, M. Sun, A. Ramanathan, R. Stevens, and M. E. Papka. “ChemoGraph: Interactive Visual Exploration of the Chemical Space,” Computer Graphics Forum (June 2023), John Wiley and Sons. doi: 10.1111/cgf.14807
+ +Kale, B., M. Sun, and M. E. Papka. “The State of the Art in Visualizing Dynamic Multivariate Networks,” Computer Graphics Forum (June 2023), John Wiley and Sons. doi: 10.1111/cgf.14856
+ +Kang, S., A. Dener, A. Hamilton, H. Zhang, E. M. Constantinescu, and R. L. Jacob. “Multirate Partitioned Runge–Kutta Methods for Coupled Navier–Stokes Equations,” Computers and Fluids (June 2023), Elsevier. doi: 10.1016/j.compfluid.2023.105964
+ +Korover, I., and the CLAS Collaboration. “Observation of Large Missing-Momentum (e, e′p) Cross-Section Scaling and the Onset of Correlated-Pair Dominance in Nuclei,” Physical Review C (June 2023), APS. doi: 10.1103/PhysRevC.107.L061301
+ +Liu, J., S. Di, K. Zhao, X. Liang, Z. Chen, and F. Cappello. “FAZ: A Flexible Auto-Tuned Modular Error-Bounded Compression Framework for Scientific Data,” ICS ‘23: Proceedings of the 37th International Conference on Supercomputing (June 2023), ACM, pp. 1-13. doi: 10.1145/3577193.3593721
+ +Liu, Q., W. Jiang, J. Xu, Y. Xu, Z. Yang, D.-J. Yoo, K. Z. Pupek, C. Wang, C. Liu, K. Xu, and Z. Zhang. “A Fluorinated Cation Introduces New Interphasial Chemistries to Enable High-Voltage Lithium Metal Batteries,” Nature Communications (June 2023), Springer Nature. doi: 10.1038/s41467-023-38229-7
+ +Madhyastha, M., R. Underwood, R. Burns, and B. Nicolae. “DStore: A Lightweight Scalable Learning Model Repository with Fine-Grain Tensor-Level Access,” ICS ‘23: Proceedings of the 37th International Conference on Supercomputing (June 2023), ACM, pp. 133-143. doi: 10.1145/3577193.3593730
+ +Nicholson, G. L., L. Szajnecki, L. Duan, and N. J. Bisek. “Direct Numerical Simulation of High-Speed Boundary-Layer Separation Due to Backward Facing Curvature,” AIAA AVIATION 2023 Forum (June 2023), San Diego, CA, AIAA. doi: 10.2514/6.2023-3562
+ +Park, H., R. Zhu, E. A. Huerta, S. Chaudhuri, E. Tajkhorshid, and D. Cooper. “End-to-End AI Framework for Interpretable Prediction of Molecular and Crystal Properties,” Machine Learning: Science and Technology (June 2023), IOP Publishing. doi: 10.1088/2632-2153/acd434
+ +Purcell, T. A. R., M. Scheffler, L. M. Ghiringhelli, and C. Carbogno. “Accelerating Materials-Space Exploration for Thermal Insulators by Mapping Materials Properties via Artificial Intelligence,” npj Computational Materials (June 2023), Springer Nature. doi: 10.1038/s41524-023-01063-y
+ +Scheld, W. S., K. Kim, C. Schwab, A. C. Moy, S.-K. Jiang, M. Mann, C. Dellen, Y. J. Sohn, S. Lobe, M. Ihrig, M. G. Danner, C.-Y. Chang, S. Uhlenbruck, E. D. Wachsman, B. J. Hwang, J. Sakamoto, L. F. Wan, B. C. Wood, M. Finsterbusch, and D. Fattakhova-Rohlfing. “The Riddle of Dark LLZO: Cobalt Diffusion in Garnet Separators of Solid-State Lithium Batteries,” Advanced Functional Materials (June 2023), John Wiley and Sons. doi: 10.1002/adfm.202302939
+ +Shah, A., A. Ramanathan, V. Hayot-Sasson, and R. Stevens. “Causal Discovery and Optimal Experimental Design for Genome-Scale Biological Network Recovery,” PASC ‘23: Proceedings of the Platform for Advanced Scientific Computing Conference (June 2023), ACM, pp. 1-11. doi: 10.1145/3592979.3593400
+ +Shah, M., X. Yu, S. Di, M. Becchi, and F. Cappello. “Lightweight Huffman Coding for Efficient GPU Compression,” ICS ‘23: Proceedings of the 37th International Conference on Supercomputing (June 2023), ACM, pp. 99-110. doi: 10.1145/3577193.3593736
+ +Singh, S., O. Ruwase, A. A. Awan, S. Rajbhandari, Y. He, and A. Bhatele. “A Hybrid Tensor-Expert-Data Parallelism Approach to Optimize Mixture-of-Experts Training,” ICS ‘23: Proceedings of the 37th International Conference on Supercomputing (June 2023), ACM, pp. 203-214. doi: 10.1145/3577193.3593704
+ +White, C. J., P. D. Mullen, Y.-F. Jiang, S. W. Davis, J. M. Stone, V. Morozova, and L. Zhang. “An Extension of the Athena++ Code Framework for Radiation-Magnetohydrodynamics in General Relativity Using a Finite-Solid-Angle Discretization,” The Astrophysical Journal (June 2023), IOP Publishing. doi: 10.3847/1538-4357/acc8cf
+ +Xu, X., X. Wang, E. Cruz-Camacho, C. D. Carothers, K. A. Brown, R. B. Ross, Z. Lan, and K. Shu. “Machine Learning for Interconnect Network Traffic Forecasting: Investigation and Exploitation,” SIGSIM-PADS ‘23: Proceedings of the 2023 ACM SIGSIM Conference on Principles of Advanced Discrete Simulation (June 2023), ACM. doi: 10.1145/3573900.3591123
+ +Bollweg, D., D. A. Clarke, J. Goswami, O. Kaczmarek, F. Karsch, S. Mukherjee, P. Petreczky, C. Schmidt, and S. Sharma. “Equation of State and Speed of Sound of (2 + 1)-Flavor QCD in Strangeness-Neutral Matter at Nonvanishing Net Baryon-Number Density,” Physical Review D (July 2023), APS. doi: 10.1103/PhysRevD.108.014510
+ +Bolotnov, I. A. “Direct Numerical Simulation of Single- and Two-Phase Flows for Nuclear Engineering Geometries,” Nuclear Technology (July 2023), Informa UK Limited. doi: 10.1080/00295450.2023.2232222
+ +Conroy, N. S., M. Bauböck, V. Dhruv, D. Lee, A. E. Broderick, C. Chan, B. Georgiev, A. V. Joshi, B. Prather, and C. F. Gammie. “Rotation in Event Horizon Telescope Movies,” The Astrophysical Journal (July 2023), IOP Publishing. doi: 10.3847/1538-4357/acd2c8
+ +Dive, A., K. Kim, S. Kang, L. F. Wan, and B. C. Wood. “First-Principles Evaluation of Dopant Impact on Structural Deformability and Processability of Li7La3Zr2O12,” Physical Chemistry Chemical Physics (July 2023), Royal Society of Chemistry. doi: 10.1039/d2cp04382c
+ +Fore, B., J. M. Kim, G. Carleo, M. Hjorth-Jensen, A. Lovato, and M. Piarulli. “Dilute Neutron Star Matter from Neural-Network Quantum States,” Physical Review Research (July 2023), APS. doi: 10.1103/PhysRevResearch.5.033062
+ +Galda, A., E. Gupta, J. Falla, X. Liu, D. Lykov, Y. Alexeev, and I. Safro. “Similarity-Based Parameter Transferability in the Quantum Approximate Optimization Algorithm,” Frontiers in Quantum Science and Technology (July 2023), Frontiers Media SA. doi: 10.3389/frqst.2023.1200975
+ +Guo, J., V. Woo, D. A. Andersson, N. Hoyt, M. Williamson, I. Foster, C. Benmore, N. E. Jackson, and G. Sivaraman. “AL4GAP: Active Learning Workflow for Generating DFT-SCAN Accurate Machine-Learning Potentials for Combinatorial Molten Salt Mixtures,” The Journal of Chemical Physics (July 2023), AIP Publishing. doi: 10.1063/5.0153021
+ +Hong, Z., A. Ajith, J. Pauloski, E. Duede, K. Chard, and I. Foster. “The Diminishing Returns of Masked Language Models to Science,” Findings of the Association for Computational Linguistics: ACL 2023 (July 2023), ACL, pp. 1270-1283. doi: 10.18653/v1/2023.findings-acl.82
+ +Huerta, E. A., B. Blaiszik, L. C. Brinson, K. E. Bouchard, D. Diaz, C. Doglioni, J. M. Duarte, M. Emani, I. Foster, G. Fox, P. Harris, L. Heinrich, S. Jha, D. S. Katz, V. Kindratenko, C. R. Kirkpatrick, K. Lassila-Perini, R. K. Madduri, M. S. Neubauer, F. E. Psomopoulos, A. Roy, O. Rübel, Z. Zhao, and R. Zhu. “FAIR for AI: An Interdisciplinary and International Community Building Perspective,” Scientific Data (July 2023), Springer Nature. doi: 10.1038/s41597-023-02298-6
+ +Jeong, H., A. K. Turner, A. F. Roberts, M. Veneziani, S. F. Price, X. S. Asay-Davis, L. P. Van Roekel, W. Lin, P. M. Caldwell, H.-S. Park, J. D. Wolfe, and A. Mametjanov. “Southern Ocean Polynyas and Dense Water Formation in a High-Resolution, Coupled Earth System Model,” The Cryosphere (July 2023), Copernicus Publications. doi: 10.5194/tc-17-2681-2023
+ +Kayastha, M. B., C. Huang, J. Wang, W. J. Pringle, T. C. Chakraborty, Z. Yang, R. D. Hetland, Y. Qian, and P. Xue. “Insights on Simulating Summer Warming of the Great Lakes: Understanding the Behavior of a Newly Developed Coupled Lake-Atmosphere Modeling System,” Journal of Advances in Modeling Earth Systems (July 2023), John Wiley and Sons. doi: 10.1029/2023MS003620
+ +Liu, Z., R. Kettimuthu, M. E. Papka, and I. Foster. “FreeTrain: A Framework to Utilize Unused Supercomputer Nodes for Training Neural Networks,” 2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid) (July 2023), Bangalore, India, IEEE. doi: 10.1109/ccgrid57682.2023.00036
+ +Mahadevan, V., D. Lenz, I. Grindeanu, and T. Peterka. “Accelerating Multivariate Functional Approximation Computation with Domain Decomposition Techniques,” Computational Science – ICCS 2023: 23rd International Conference (July 2023), Prague, Czech Republic, ACM, pp. 89-103. doi: 10.1007/978-3-031-35995-8_7
+ +Manassa, J., J. Schwartz, Y. Jiang, H. Zheng, J. A. Fessler, Z. W. Di, and R. Hovden. “Dose Requirements for Fused Multi-Modal Electron Tomography,” Microscopy and Microanalysis (July 2023), Oxford University Press. doi: 10.1093/micmic/ozad067.1019
+ +Peterka, T., D. Morozov, O. Yildiz, B. Nicolae, and P. E. Davis. “LowFive: In Situ Data Transport for High-Performance Workflows,” 2023 IEEE International Parallel and Distributed Processing Symposium (IPDPS) (July 2023), St. Petersburg, FL, IEEE. doi: 10.1109/IPDPS54959.2023.00102
+ +Rosofsky, S. G., and E. A. Huerta. “Magnetohydrodynamics with Physics Informed Neural Operators,” Machine Learning: Science and Technology (July 2023), IOP Publishing. doi: 10.1088/2632-2153/ace30a
+ +Schanen, M., S. H. K. Narayanan, S. Williamson, V. Churavy, W. S. Moses, and L. Paehler. “Transparent Checkpointing for Automatic Differentiation of Program Loops Through Expression Transformations,” Computational Science – ICCS 2023: 23rd International Conference (July 2023), Prague, Czech Republic, ACM, pp. 483-497. doi: 10.1007/978-3-031-36024-4_37
+ +Shovon, A. R., T. Gilray, K. Micinski, and S. Kumar. “Towards Iterative Relational Algebra on the GPU,” 2023 USENIX Annual Technical Conference (USENIX ATC 23) (July 2023), Boston, MA, USENIX Association, pp. 1009-1016.
+ +Sun, Z. H., G. Hagen, and T. Papenbrock. “Coupled-Cluster Theory for Strong Entanglement in Nuclei,” Physical Review C (July 2023), APS. doi: 10.1103/PhysRevC.108.014307
+ +Tsai, Y.-H. M., N. Beams, and H. Anzt. “Three-Precision Algebraic Multigrid on GPUs,” Future Generation Computer Systems (July 2023), Elsevier. doi: 10.1016/j.future.2023.07.024
+ +Valentini, P., M. S. Grover, A. M. Verhoff, and N. J. Bisek. “Near-Continuum, Hypersonic Oxygen Flow over a Double Cone Simulated by Direct Simulation Monte Carlo Informed from Quantum Chemistry,” Journal of Fluid Mechanics (July 2023), Cambridge University Press. doi: 10.1017/jfm.2023.437
+ +Van den Berg, D. M. N., P. Nascimento de Lima, A. B. Knudsen, C. M. Rutter, D. Weinberg, I. Lansdorp-Vogelaar, A. G. Zauber, A. I. Hahn, F. A. Escudero, C. E. Maerzluft, A. Katsara, K. M. Kuntz, J. M. Inamodi, N. Collier, J. Ozik, L. A. van Duuren, R. van den Puttelaar, M. Harlass, C. L. Seguin, B. Davidi, C. Pineda-Antunez, E. J. Feuer, and L. de Jonge. “NordICC Trial Results in Line with Expected Colorectal Cancer Mortality Reduction after Colonoscopy: A Modeling Study,” Gastroenterology (July 2023), Elsevier. doi: 10.1053/j.gastro.2023.06.035
+ +Verma, G., S. Raskar, Z. Xie, A. M. Malik, M. Emani, and B. Chapman. “Transfer Learning Across Heterogeneous Features for Efficient Tensor Program Generation,” ExHET 23: Proceedings of the 2nd International Workshop on Extreme Heterogeneity Solutions (July 2023), ACM, pp. 1-6. doi: 10.1145/3587278.3595644
+ +Vincenti, H., T. Clark, L. Fedeli, P. Martin, A. Sainte-Marie, and N. Zaim. “Plasma Mirrors as a Path to the Schwinger Limit: Theoretical and Numerical Developments,” The European Physical Journal Special Topics (July 2023), Springer Nature. doi: 10.1140/epjs/s11734-023-00909-2
+ +Zhang, Z., I. Hermans, and A. N. Alexandrova. “Off-Stoichiometric Restructuring and Sliding Dynamics of Hexagonal Boron Nitride Edges in Conditions of Oxidative Dehydrogenation of Propane,” Journal of the American Chemical Society (July 2023), ACS. doi: 10.1021/jacs.3c04613
+ +Zhao, J., C. Bertoni, J. Young, K. Harms, V. Sarkar, and B. Videau. “HIPLZ: Enabling Performance Portability for Exascale Systems,” Concurrency and Computation: Practice and Experience (July 2023), John Wiley and Sons. doi: 10.1002/cpe.7866
+ +Ali, S., S. Calvez, P. Carns, M. Dorier, P. Ding, J. Kowalkowski, R. Latham, A. Norman, M. Paterno, R. Ross, S. Sehrish, S. Snyder, and J. Soumagne. “HEPnOS: A Specialized Data Service for High Energy Physics Analysis,” 2023 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW) (August 2023), St. Petersburg, FL, IEEE. doi: 10.1109/IPDPSW59300.2023.00108
+ +Azizi, K., M. Gori, U. Morzan, A. Hassanali, and P. Kurian. “Examining the Origins of Observed Terahertz Modes from an Optically Pumped Atomistic Model Protein in Aqueous Solution,” PNAS Nexus (August 2023), Oxford University Press. doi: 10.1093/pnasnexus/pgad257
+ +Brahlek, M., A. R. Mazza, A. Annaberdiyev, M. Chilcote, G. Rimal, G. B. Halász, A. Pham, Y.-Y. Pai, J. T. Krogel, J. Lapano, B. J. Lawrie, G. Eres, J. McChesney, T. Prokscha, A. Suter, S. Oh, J. W. Freeland, Y. Cao, J. S. Gardner, Z. Salman, R. G. Moore, P. Ganesh, and T. Z. Ward. “Emergent Magnetism with Continuous Control in the Ultrahigh-Conductivity Layered Oxide PdCoO2,” Nano Letters (August 2023), ACS. doi: 10.1021/acs.nanolett.3c01065
+ +Chen, S., F. Browne, P. Doornenbal, J. Lee, A. Obertelli, Y. Tsunoda, T. Otsuka, Y. Chazono, G. Hagen, J. D. Holt, G. R. Jansen, K. Ogata, N. Shimizu, Y. Utsuno, K. Yoshida, N. L. Achouri, H. Baba, D. Calvet, F. Château, N. Chiga, A. Corsi, M. L. Cortés, A. Delbart, J.-M. Gheller, A. Giganon, A. Gillibert, C. Hilaire, T. Isobe, T. Kobayashi, Y. Kubota, V. Lapoux, H. N. Liu, T. Motobayashi, I. Murray, H. Otsu, V. Panin, N. Paul, W. Rodriguez, H. Sakurai, M. Sasano, D. Steppenbeck, L. Stuhl, Y. L. Sun, Y. Togano, T. Uesaka, K. Wimmer, K. Yoneda, O. Aktas, T. Aumann, L. X. Chung, F. Flavigny, S. Franchoo, I. Gasparic, R.-B. Gerst, J. Gibelin, K. I. Hahn, D. Kim, T. Koiwai, Y. Kondo, P. Koseoglou, C. Lehr, B. D. Linh, T. Lokotko, M. MacCormick, K. Moschner, T. Nakamura, S. Y. Park, D. Rossi, E. Sahin, P.-A. Söderström, D. Sohler, S. Takeuchi, H. Törnqvist, V. Vaquero, V. Wagner, S. Wang, V. Werner, X. Xu, H. Yamada, D. Yan, Z. Yang, M. Yasuda, and L. Zanetti. “Level Structures of 56,58Ca Cast Doubt on a Doubly Magic 60Ca,” Physics Letters B (August 2023), Elsevier. doi: 10.1016/j.physletb.2023.138025
+ +Collier, N., J. M. Wozniak, A. Stevens, Y. Babuji, M. Binois, A. Fadikar, A. Würth, K. Chard, and J. Ozik. “Developing Distributed High-Performance Computing Capabilities of an Open Science Platform for Robust Epidemic Analysis,” 2023 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW) (August 2023), St. Petersburg, FL, IEEE. doi: 10.1109/ipdpsw59300.2023.00143
+ +Dong, B., J. L. Bez, and S. Byna. “AIIO: Using Artificial Intelligence for Job-Level and Automatic I/O Performance Bottleneck Diagnosis,” HPDC ‘23: Proceedings of the 32nd International Symposium on High-Performance Parallel and Distributed Computing (August 2023), ACM, pp. 155-167. doi: 10.1145/3588195.3592986
+ +Grover, M. S., A. M. Verhoff, P. Valentini, and N. J. Bisek. “First Principles Simulation of Reacting Hypersonic Flow over a Blunt Wedge,” Physics of Fluids (August 2023), AIP Publishing. doi: 10.1063/5.0161570
+ +Hannon, S., B. C. Whitmore, J. C. Lee, D. A. Thilker, S. Deger, E. A. Huerta, W. Wei, B. Mobasher, R. Klessen, M. Boquien, D. A. Dale, M. Chevance, K. Grasha, P. Sanchez-Blazquez, T. Williams, F. Scheuermann, B. Groves, H. Kim, J. M. D. Kruijssen, and the PHANGS-HST Team. “Star Cluster Classification Using Deep Transfer Learning with PHANGS-HST,” Monthly Notices of the Royal Astronomical Society (August 2023), Oxford University Press. doi: 10.1093/mnras/stad2238
+ +Hosseini, R., F. Simini, V. Vishwanath, R. Sivakumar, S. Shanmugavelu, Z. Chen, L. Zlotnik, M. Wang, P. Colangelo, A. Deng, P. Lassen, and S. Pathan. “Exploring the Use of Dataflow Architectures for Graph Neural Network Workloads,” High Performance Computing: ISC High Performance 2023 International Workshops (August 2023), Hamburg, Germany, Springer Nature, pp. 648-661. doi: 10.1007/978-3-031-40843-4_48
+ +Johnson, M. S., M. Gierada, E. D. Hermes, D. H. Bross, K. Sargsyan, H. N. Najm, and J. Zádor. “Pynta─An Automated Workflow for Calculation of Surface and Gas–Surface Kinetics,” Journal of Chemical Information and Modeling (August 2023), ACS Publications. doi: 10.1021/acs.jcim.3c00948
+ +Lee, H., S. Poncé, K. Bushick, S. Hajinazar, J. Lafuente-Bartolome, J. Leveillee, C. Lian, J.-M. Lihm, F. Macheda, H. Mori, H. Paudyal, W. H. Sio, S. Tiwari, M. Zacharias, X. Zhang, N. Bonini, E. Kioupakis, E. R. Margine, and F. Giustino. “Electron–Phonon Physics from First Principles Using the EPW Code,” npj Computational Materials (August 2023), Springer Nature. doi: 10.1038/s41524-023-01107-3
+ +Lenard, B., E. Pershey, Z. Nault, and A. Rasin. “An Approach for Efficient Processing of Machine Operational Data,” Database and Expert Systems Applications: 34th International Conference, DEXA 2023 (August 2023), Penang, Malaysia, Springer Nature, pp. 129-146. doi: 10.1007/978-3-031-39847-6_9
+ +Linker, T. M., K. Nomura, S. Fukushima, R. K. Kalia, A. Krishnamoorthy, A. Nakano, K. Shimamura, F. Shimojo, and P. Vashishta. “Induction and Ferroelectric Switching of Flux Closure Domains in Strained PbTiO3 with Neural Network Quantum Molecular Dynamics,” Nano Letters (August 2023), ACS Publications. doi: 10.1021/acs.nanolett.3c01885
+ +Lovato, A., A. Nikolakopoulos, N. Rocco, and N. Steinberg. “Lepton–Nucleus Interactions within Microscopic Approaches,” Universe (August 2023), MDPI. doi: 10.3390/universe9080367
+ +Mecham, N. J., I. A. Bolotnov, and E. L. Popov. “Quantifying HFIR Turbulence by Variable Curvature Channels,” 20th International Topical Meeting on Nuclear Reactor Thermal Hydraulics (NURETH-20) (August 2023), Washington, DC, American Nuclear Society, pp. 1194-1205. doi: 10.13182/nureth20-40044
+ +Monniot, J., F. Tessier, M. Robert, and G. Antoniu. “Supporting Dynamic Allocation of Heterogeneous Storage Resources on HPC Systems,” Concurrency and Computation: Practice and Experience (August 2023), John Wiley and Sons. doi: 10.1002/cpe.7890
+ +Roy, R. B., T. Patel, R. Liew, Y. N. Babuji, R. Chard, and D. Tiwari. “ProPack: Executing Concurrent Serverless Functions Faster and Cheaper,” HPDC ‘23: Proceedings of the 32nd International Symposium on High-Performance Parallel and Distributed Computing (August 2023), ACM, pp. 211-224. doi: 10.1145/3588195.3592988
+ +Wan, L., K. Kim, A. M. Dive, B. Wang, T. W. Heo, M. Wood, and B. C. Wood. “Multiscale Modeling of Heterogeneous Interfaces in All Solid-State Batteries,” ECS Meeting Abstracts (August 2023), IOP Publishing. doi: 10.1149/ma2023-0161045mtgabs
+ +Wang, T., and A. Burrows. “Neutrino-Driven Winds in Three-Dimensional Core-Collapse Supernova Simulations,” The Astrophysical Journal (August 2023), IOP Publishing. doi: 10.3847/1538-4357/ace7b2
+ +Ward, L., J. G. Pauloski, V. Hayot-Sasson, R. Chard, Y. Babuji, G. Sivaraman, S. Choudhury, K. Chard, R. Thakur, and I. Foster. “Cloud Services Enable Efficient AI-Guided Simulation Workflows across Heterogeneous Resources,” 2023 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW) (August 2023), St. Petersburg, FL, IEEE. doi: 10.1109/IPDPSW59300.2023.00018
+ +Xie, Z., S. Raskar, M. Emani, and V. Vishwanath. “TrainBF: High-Performance DNN Training Engine Using BFloat16 on AI Accelerators,” Euro-Par 2023: Parallel Processing (August 2023), Springer Nature, pp. 458-473. doi: 10.1007/978-3-031-39698-4_31
+ +Ahn, J., I. Hong, G. Lee, H. Shin, A. Benali, J. T. Krogel, and Y. Kwon. “Structural Stability of Graphene-Supported Pt Layers: Diffusion Monte Carlo and Density Functional Theory Calculations,” The Journal of Physical Chemistry C (September 2023), ACS. doi: 10.1021/acs.jpcc.3c03160
+ +Blum, T., P. A. Boyle, M. Bruno, D. Giusti, V. Gülpers, R. C. Hill, T. Izubuchi, Y.-C. Jang, L. Jin, C. Jung, A. Jüttner, C. Kelly, C. Lehner, N. Matsumoto, R. D. Mawhinney, A. S. Meyer, and J. T. Tsang. “Update of Euclidean Windows of the Hadronic Vacuum Polarization,” Physical Review D (September 2023), APS. doi: 10.1103/physrevd.108.054507
+ +Boëzennec, R., F. Dufossé, and G. Pallez. “Optimization Metrics for the Evaluation of Batch Schedulers in HPC,” Job Scheduling Strategies for Parallel Processing: 26th Workshop, JSSPP 2023 (September 2023), St. Petersburg, FL, Springer Nature, pp. 97-115. doi: 10.1007/978-3-031-43943-8_5
+ +Chen, L., P.-H. Lin, T. Vanderbruggen, C. Liao, M. Emani, and B. de Supinski. “LM4HPC: Towards Effective Language Model Application in High-Performance Computing,” IWOMP 2023: OpenMP: Advanced Task-Based, Device and Compiler Programming (September 2023), Springer Nature, pp. 18-33. doi: 10.1007/978-3-031-40744-4_2
+ +Chitty-Venkata, K. T., Y. Bian, M. Emani, V. Vishwanath, and A. K. Somani. “Differentiable Neural Architecture, Mixed Precision and Accelerator Co-Search,” IEEE Access (September 2023), IEEE. doi: 10.1109/ACCESS.2023.3320133
+ +Chitty-Venkata, K. T., S. Mittal, M. Emani, V. Vishwanath, and A. K. Somani. “A Survey of Techniques for Optimizing Transformer Inference,” Journal of Systems Architecture (September 2023), Elsevier. doi: 10.1016/j.sysarc.2023.102990
+ +Haberlie, A. M., W. S. Ashley, V. A. Gensini, and A. C. Michaels. “The Ratio of Mesoscale Convective System Precipitation to Total Precipitation Increases in Future Climate Change Scenarios,” npj Climate and Atmospheric Science (September 2023), Springer Nature. doi: 10.1038/s41612-023-00481-5
+ +Himanshu, K. Chakraborty, and T. K. Patra. “Developing Efficient Deep Learning Model for Predicting Copolymer Properties,” Physical Chemistry Chemical Physics (September 2023), Royal Society of Chemistry. doi: 10.1039/d3cp03100d
+ +Isazawa, T., and J. M. Cole. “Automated Construction of a Photocatalysis Dataset for Water-Splitting Applications,” Scientific Data (September 2023), Springer Nature. doi: 10.1038/s41597-023-02511-6
+ +König, K., S. Fritzsche, G. Hagen, J. D. Holt, A. Klose, J. Lantis, Y. Liu, K. Minamisono, T. Miyagi, W. Nazarewicz, T. Papenbrock, S. V. Pineda, R. Powel, and P.-G. Reinhard. “Surprising Charge-Radius Kink in the Sc Isotopes at N=20,” Physical Review Letters (September 2023), APS. doi: 10.1103/PhysRevLett.131.102501
+ +Navrátil, P., K. Kravvaris, P. Gysbers, C. Hebborn, G. Hupin, and S. Quaglioni. “Ab Initio Investigations of A=8 Nuclei: α–α Scattering, Deformation in 8He, Radiative Capture of Protons on 7Be and 7Li and the X17 Boson,” Journal of Physics: Conference Series (September 2023), vol. 2586, IOP Publishing. doi: 10.1088/1742-6596/2586/1/012062
+ +Navrátil, P., and S. Quaglioni. “Ab Initio Nuclear Reaction Theory with Applications to Astrophysics,” Handbook of Nuclear Physics (September 2023), Springer, Singapore, pp. 1545-1590. doi: 10.1007/978-981-19-6345-2_7
+ +Nicolae, B., T. Z. Islam, R. Ross, H. Van Dam, K. Assogba, P. Shpilker, M. Titov, M. Turilli, T. Wang, O. O. Kilic, S. Jha, and L. C. Pouchard. “Building the I (Interoperability) of FAIR for Performance Reproducibility of Large-Scale Composable Workflows in RECUP,” 2023 IEEE 19th International Conference on e-Science (e-Science) (September 2023), Limassol, Cyprus, IEEE. doi: 10.1109/e-Science58273.2023.10254808
+ +Saeedizade, E., R. Taheri, and E. Arslan. “I/O Burst Prediction for HPC Clusters Using Darshan Logs,” 2023 IEEE 19th International Conference on e-Science (e-Science) (September 2023), Limassol, Cyprus, IEEE. doi: 10.1109/e-Science58273.2023.10254871
+ +Yalamanchi, K. K., S. Kommalapati, P. Pal, N. Kuzhagaliyeva, A. S. AlRamadan, B. Mohan, Y. Pei, S. M. Sarathy, E. Cenker, and J. Badra. “Uncertainty Quantification of a Deep Learning Fuel Property Prediction Model,” Applications in Energy and Combustion Science (September 2023), Elsevier. doi: 10.1016/j.jaecs.2023.100211
+ +Zhang, C., F. Gygi, and G. Galli. “Engineering the Formation of Spin-Defects from First Principles,” Nature Communications (September 2023), Springer Nature. doi: 10.1038/s41467-023-41632-9
+ +Burrows, A., D. Vartanyan, and T. Wang. “Black Hole Formation Accompanied by the Supernova Explosion of a 40 M⊙ Progenitor Star,” The Astrophysical Journal (October 2023), IOP Publishing. doi: 10.3847/1538-4357/acfc1c
+ +Chen, Y., E. M. Y. Lee, P. S. Gil, P. Ma, C. V. Amanchukwu, and J. J. de Pablo. “Molecular Engineering of Fluoroether Electrolytes for Lithium Metal Batteries,” Molecular Systems Design and Engineering (October 2023), Royal Society of Chemistry. doi: 10.1039/d2me00135g
+ +Datta, D., and M. S. Gordon. “Accelerating Coupled-Cluster Calculations with GPUs: An Implementation of the Density-Fitted CCSD(T) Approach for Heterogeneous Computing Architectures Using OpenMP Directives,” Journal of Chemical Theory and Computation (October 2023), ACS Publications. doi: 10.1021/acs.jctc.3c00876
+ +Harb, H., S. N. Elliott, L. Ward, I. T. Foster, S. J. Klippenstein, L. A. Curtiss, and R. S. Assary. “Uncovering Novel Liquid Organic Hydrogen Carriers: A Systematic Exploration of Chemical Compound Space Using Cheminformatics and Quantum Chemical Methods,” Digital Discovery (October 2023), Royal Society of Chemistry. doi: 10.1039/D3DD00123G
+ +Huang, S., and J. M. Cole. “ChemDataWriter: A Transformer-Based Toolkit for Auto-Generating Books That Summarise Research,” Digital Discovery (October 2023), Royal Society of Chemistry. doi: 10.1039/D3DD00159H
+ +Kravvaris, K., P. Navrátil, S. Quaglioni, C. Hebborn, and G. Hupin. “Ab Initio Informed Evaluation of the Radiative Capture of Protons on 7Be,” Physics Letters B (October 2023), Elsevier. doi: 10.1016/j.physletb.2023.138156
+ +Liu, X., S. Jiang, A. Vasan, A. Brace, O. Gokdemir, T. Brettin, F. Xia, I. Foster, and R. Stevens. “DrugImprover: Utilizing Reinforcement Learning for Multi-Objective Alignment in Drug Optimization,” NeurIPS 2023 Workshop on New Frontiers of AI for Drug Discovery and Development (October 2023), New Orleans, LA, Neural Information Processing Systems Foundation.
+ +Minch, P., R. Bhattarai, and T. D. Rhone. “Data-Driven Study of Magnetic Anisotropy in Transition Metal Dichalcogenide Monolayers,” Solid State Communications (October 2023), Elsevier. doi: 10.1016/j.ssc.2023.115248
+ +Vartanyan, D., and A. Burrows. “Neutrino Signatures of 100 2D Axisymmetric Core-Collapse Supernova Simulations,” Monthly Notices of the Royal Astronomical Society (October 2023), Oxford University Press. doi: 10.1093/mnras/stad2887
+ +Wan, S., A. P. Bhati, and P. V. Coveney. “Comparison of Equilibrium and Nonequilibrium Approaches for Relative Binding Free Energy Predictions,” Journal of Chemical Theory and Computation (October 2023), ACS. doi: 10.1021/acs.jctc.3c00842
+ +Zahariev, F., P. Xu, B. M. Westheimer, S. Webb, J. G. Vallejo, A. Tiwari, V. Sundriyal, M. Sosonkina, J. Shen, G. Schoendorff, M. Schlinsog, T. Sattasathuchana, K. Ruedenberg, L. B. Roskop, A. P. Rendell, D. Poole, P. Piecuch, B. Q. Pham, V. Mironov, J. Mato, S. Leonard, S. S. Leang, J. Ivanic, J. Hayes, T. Harville, K. Gururangan, E. Guidez, I. S. Gerasimov, C. Friedl, K. N. Ferreras, G. Elliott, D. Datta, D. Del Angel Cruz, L. Carrington, C. Bertoni, G. M. J. Barca, M. Alkan, and M. S. Gordon. “The General Atomic and Molecular Electronic Structure System (GAMESS): Novel Methods on Novel Architectures,” Journal of Chemical Theory and Computation (October 2023), ACS. doi: 10.1021/acs.jctc.3c00379
+ +Zvyagin, M., A. Brace, K. Hippe, Y. Deng, B. Zhang, C. O. Bohorquez, A. Clyde, B. Kale, D. Perez-Rivera, H. Ma, C. M. Mann, M. Irvin, J. G. Pauloski, L. Ward, V. Hayot-Sasson, M. Emani, S. Foreman, Z. Xie, D. Lin, M. Shukla, W. Nie, J. Romero, C. Dallago, A. Vahdat, C. Xiao, T. Gibbs, I. Foster, J. J. Davis, M. E. Papka, T. Brettin, R. Stevens, A. Anandkumar, V. Vishwanath, and A. Ramanathan. “GenSLMs: Genome-Scale Language Models Reveal SARS-CoV-2 Evolutionary Dynamics,” The International Journal of High Performance Computing Applications (October 2023), SAGE Publications. doi: 10.1177/10943420231201154
+ +Abbott, R., M. S. Albergo, A. Botev, D. Boyda, K. Cranmer, D. C. Hackett, A. G. D. G. Matthews, S. Racanière, A. Razavi, D. J. Rezende, F. Romero-López, P. E. Shanahan, and J. M. Urban. “Aspects of Scaling and Scalability for Flow-Based Sampling of Lattice QCD,” The European Physical Journal A (November 2023), Springer Nature. doi: 10.1140/epja/s10050-023-01154-w
+ +Antepara, O., S. Williams, H. Johansen, T. Zhao, S. Hirsch, P. Goyal, and M. Hall. “Performance Portability Evaluation of Blocked Stencil Computations on GPUs,” SC-W ‘23: Proceedings of the SC ‘23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (November 2023), ACM, pp. 1007-1018. doi: 10.1145/3624062.3624177
+ +Antepara, O., S. Williams, S. Kruger, T. Bechtel, J. McClenaghan, and L. Lao. “Performance-Portable GPU Acceleration of the EFIT Tokamak Plasma Equilibrium Reconstruction Code,” SC-W ‘23: Proceedings of the SC ‘23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (November 2023), ACM, pp. 1939-1948. doi: 10.1145/3624062.3624607
+ +Babu, A. V., T. Zhou, S. Kandel, T. Bicer, Z. Liu, W. Judge, D. J. Ching, Y. Jiang, S. Veseli, S. Henke, R. Chard, Y. Yao, E. Sirazitdinova, G. Gupta, M. V. Holt, I. T. Foster, A. Miceli, and M. J. Cherukara. “Deep Learning at the Edge Enables Real-Time Streaming Ptychographic Imaging,” Nature Communications (November 2023), Springer Nature. doi: 10.1038/s41467-023-41496-z
+ +Barik, R., S. Raskar, M. Emani, and V. Vishwanath. “Characterizing the Performance of Triangle Counting on Graphcore’s IPU Architecture,” SC-W ‘23: Proceedings of the SC ‘23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (November 2023), ACM, pp. 1949-1957. doi: 10.1145/3624062.3624608
+ +Baughman, M., N. Hudson, R. Chard, A. Bauer, I. Foster, and K. Chard. “Tournament-Based Pretraining to Accelerate Federated Learning,” SC-W ‘23: Proceedings of the SC ‘23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (November 2023), ACM, pp. 109-115. doi: 10.1145/3624062.3626089
+ +Brace, A., R. Vescovi, R. Chard, N. D. Saint, A. Ramanathan, N. J. Zaluzec, and I. Foster. “Linking the Dynamic PicoProbe Analytical Electron-Optical Beam Line / Microscope to Supercomputers,” SC-W ‘23: Proceedings of the SC ‘23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (November 2023), ACM. doi: 10.1145/3624062.3624614
+ +Cendejas, M. C., O. A. P. Mellone, U. Kurumbail, Z. Zhang, J. H. Jansen, F. Ibrahim, S. Dong, J. Vinson, A. N. Alexandrova, D. Sokaras, S. R. Bare, and I. Hermans. “Tracking Active Phase Behavior on Boron Nitride during the Oxidative Dehydrogenation of Propane Using Operando X-ray Raman Spectroscopy,” Journal of the American Chemical Society (November 2023), ACS. doi: 10.1021/jacs.3c08679
+ +Chakraborty, T. C., J. Wang, Y. Qian, W. Pringle, Z. Yang, and P. Xue. “Urban Versus Lake Impacts on Heat Stress and Its Disparities in a Shoreline City,” GeoHealth (November 2023), John Wiley and Sons. doi: 10.1029/2023GH000869
+ +Chen, L., X. Ding, M. Emani, T. Vanderbruggen, P.-H. Lin, and C. Liao. “Data Race Detection Using Large Language Models,” SC-W ‘23: Proceedings of the SC ‘23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (November 2023), ACM, pp. 215-223. doi: 10.1145/3624062.3624088
+ +Chowdhury, S., F. Li, A. Stubbings, J. New, A. Garg, S. Correa, and K. Bacabac. “Bias Correction in Urban Building Energy Modeling for Chicago Using Machine Learning,” 2023 Fourth International Conference on Intelligent Data Science Technologies and Applications (IDSTA) (November 2023), Kuwait City, Kuwait, IEEE. doi: 10.1109/idsta58916.2023.10317837
+ +Dharuman, G., L. Ward, H. Ma, P. V. Setty, O. Gokdemir, S. Foreman, M. Emani, K. Hippe, A. Brace, K. Keipert, T. Gibbs, I. Foster, A. Anandkumar, V. Vishwanath, and A. Ramanathan. “Protein Generation via Genome-scale Language Models with Bio-physical Scoring,” SC-W ‘23: Proceedings of the SC ‘23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (November 2023), ACM, pp. 95-101. doi: 10.1145/3624062.3626087
+ +Ding, X., L. Chen, M. Emani, C. Liao, P.-H. Lin, T. Vanderbruggen, Z. Xie, A. Cerpa, and W. Du. “HPC-GPT: Integrating Large Language Model for High-Performance Computing,” SC-W ‘23: Proceedings of the SC ‘23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (November 2023), ACM, pp. 951-960. doi: 10.1145/3624062.3624172
+ +Ditte, M., M. Barborini, L. M. Sandonas, and A. Tkatchenko. “Molecules in Environments: Toward Systematic Quantum Embedding of Electrons and Drude Oscillators,” Physical Review Letters (November 2023), APS. doi: 10.1103/physrevlett.131.228001
+ +Fox, D., J. M. M. Diaz, and X. Li. “A gem5 Implementation of the Sequential Codelet Model: Reducing Overhead and Expanding the Software Memory Interface,” SC-W ‘23: Proceedings of the SC ‘23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (November 2023), ACM, pp. 839-846. doi: 10.1145/3624062.3624152
+ +Grassi, A., H. G. Rinderknecht, G. F. Swadling, D. P. Higginson, H.-S. Park, A. Spitkovsky, and F. Fiuza. “Electron Injection via Modified Diffusive Shock Acceleration in High-Mach-Number Collisionless Shocks,” The Astrophysical Journal Letters (November 2023), IOP Publishing. doi: 10.3847/2041-8213/ad0cf9
+ +Gu, C., Z. H. Sun, G. Hagen, and T. Papenbrock. “Entanglement Entropy of Nuclear Systems,” Physical Review C (November 2023), APS. doi: 10.1103/PhysRevC.108.054309
+ +Gueroudji, A., J. Bigot, B. Raffin, and R. Ross. “Dask-Extended External Tasks for HPC/ML in Transit Workflows,” SC-W ‘23: Proceedings of the SC ‘23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (November 2023), ACM, pp. 831-838. doi: 10.1145/3624062.3624151
+ +Hossain, K., R. Balin, C. Adams, T. Uram, K. Kumaran, V. Vishwanath, T. Dey, S. Goswami, J. Lee, R. Ramer, and K. Yamada. “Demonstration of Portable Performance of Scientific Machine Learning on High Performance Computing Systems,” SC-W ‘23: Proceedings of the SC ‘23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (November 2023), ACM, pp. 644-647. doi: 10.1145/3624062.3624138
+ +Huang, Y., S. Di, X. Yu, G. Li, and F. Cappello. “cuSZp: An Ultra-Fast GPU Error-Bounded Lossy Compression Framework with Optimized End-to-End Performance,” SC ‘23: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (November 2023), ACM, pp. 1-13. doi: 10.1145/3581784.3607048
+ +Kanhaiya, K., M. Nathanson, P. J. in ‘t Veld, C. Zhu, I. Nikiforov, E. B. Tadmor, Y. K. Choi, W. Im, R. K. Mishra, and H. Heinz. “Accurate Force Fields for Atomistic Simulations of Oxides, Hydroxides, and Organic Hybrid Materials up to the Micrometer Scale,” Journal of Chemical Theory and Computation (November 2023), ACS. doi: 10.1021/acs.jctc.3c00750
+ +Kéruzoré, F., L. E. Bleem, M. Buehlmann, J. D. Emberson, N. Frontiere, S. Habib, K. Heitmann, and P. Larsen. “Optimization and Quality Assessment of Baryon Pasting for Intracluster Gas using the Borg Cube Simulation,” The Open Journal of Astrophysics (November 2023), Maynooth Academic Publishing. doi: 10.21105/astro.2306.13807
+ +Kumari, S., A. N. Alexandrova, and P. Sautet. “Nature of Zirconia on a Copper Inverse Catalyst Under CO2 Hydrogenation Conditions,” Journal of the American Chemical Society (November 2023), ACS. doi: 10.1021/jacs.3c09947
+ +Liu, M., C. Oh, J. Liu, L. Jiang, and Y. Alexeev. “Simulating Lossy Gaussian Boson Sampling with Matrix-Product Operators,” Physical Review A (November 2023), APS. doi: 10.1103/physreva.108.052604
+ +Lykov, D., R. Shaydulin, Y. Sun, Y. Alexeev, and M. Pistoia. “Fast Simulation of High-Depth QAOA Circuits,” SC-W ‘23: Proceedings of the SC ‘23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (November 2023), ACM. doi: 10.1145/3624062.3624216
+ +Martin, A., G. Liu, W. Ladd, S. Lee, J. Gounley, J. Vetter, S. Patel, S. Rizzi, V. Mateevitsi, J. Insley, and A. Randles. “Performance Evaluation of Heterogeneous GPU Programming Frameworks for Hemodynamic Simulations,” SC-W ‘23: Proceedings of the SC ‘23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (November 2023), ACM, pp. 1126-1137. doi: 10.1145/3624062.3624188
+ +Mateevitsi, V. A., M. Bode, N. Ferrier, P. Fischer, J. H. Göbbert, J. A. Insley, Y.-H. Lan, M. Min, M. E. Papka, S. Patel, S. Rizzi, and J. Windgassen. “Scaling Computational Fluid Dynamics: In Situ Visualization of NekRS using SENSEI,” SC-W ‘23: Proceedings of the SC ‘23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (November 2023), ACM, pp. 862-867. doi: 10.1145/3624062.3624159
+ +Narykov, O., Y. Zhu, T. Brettin, Y. Evrard, A. Partin, M. Shukla, P. Vasanthakumari, J. Doroshow, and R. Stevens. “Entropy-Based Regularization on Deep Learning Models for Anti-Cancer Drug Response Prediction,” SC-W ‘23: Proceedings of the SC ‘23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (November 2023), ACM, pp. 121-122. doi: 10.1145/3624062.3624080
+ +Parraga, H., J. Hammonds, S. Henke, S. Veseli, W. Allcock, B. Côté, R. Chard, S. Narayanan, and N. Schwarz. “Empowering Scientific Discovery through Computing at the Advanced Photon Source,” SC-W ‘23: Proceedings of the SC ‘23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (November 2023), ACM, pp. 2126-2132. doi: 10.1145/3624062.3624612
+ +Pauloski, J. G., V. Hayot-Sasson, L. Ward, N. Hudson, C. Sabino, M. Baughman, K. Chard, and I. Foster. “Accelerating Communications in Federated Applications with Transparent Object Proxies,” SC ‘23: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (November 2023), ACM. doi: 10.1145/3581784.3607047
+ +Pautsch, E., J. Li, S. Rizzi, G. K. Thiruvathukal, and M. Pantoja. “Optimized Uncertainty Estimation for Vision Transformers: Enhancing Adversarial Robustness and Performance Using Selective Classification,” SC-W ‘23: Proceedings of the SC ‘23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (November 2023), ACM, pp. 391-394. doi: 10.1145/3624062.3624106
+ +Prince, M., D. Gürsoy, D. Sheyfer, R. Chard, B. Côté, H. Parraga, B. Frosik, J. Tischler, and N. Schwarz. “Demonstrating Cross-Facility Data Processing at Scale With Laue Microdiffraction,” SC-W ‘23: Proceedings of the SC ‘23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (November 2023), ACM, pp. 2133-2139. doi: 10.1145/3624062.3624613
+ +Rangel, E. M., S. J. Pennycook, A. Pope, N. Frontiere, Z. Ma, and V. Madanath. “A Performance-Portable SYCL Implementation of CRK-HACC for Exascale,” SC-W ‘23: Proceedings of the SC ‘23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (November 2023), ACM, pp. 1114-1125. doi: 10.1145/3624062.3624187
+ +Rutter, C. M., P. N. de Lima, C. E. Maerzluft, F. P. May, and C. C. Murphy. “Black-White Disparities in Colorectal Cancer Outcomes: A Simulation Study of Screening Benefit,” JNCI Monographs (November 2023), Oxford University Press. doi: 10.1093/jncimonographs/lgad019
+ +Shepard, C., and Y. Kanai. “Ion-Type Dependence of DNA Electronic Excitation in Water under Proton, α-Particle, and Carbon Ion Irradiation: A First-Principles Simulation Study,” The Journal of Physical Chemistry B (November 2023), ACS Publications. doi: 10.1021/acs.jpcb.3c05446
+ +Siefert, C. M., C. Pearson, S. L. Olivier, A. Prokopenko, J. Hu, and T. J. Fuller. “Latency and Bandwidth Microbenchmarks of US Department of Energy Systems in the June 2023 Top 500 List,” SC-W ‘23: Proceedings of the SC ‘23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (November 2023), ACM, pp. 1298-1305. doi: 10.1145/3624062.3624203
+ +Träff, J. L., S. Hunold, I. Vardas, and N. M. Funk. “Uniform Algorithms for Reduce-Scatter and (Most) Other Collectives for MPI,” 2023 IEEE International Conference on Cluster Computing (CLUSTER) (November 2023), Santa Fe, NM, IEEE. doi: 10.1109/cluster52292.2023.00031
+ +Underwood, R. R., S. Di, S. Jin, M. H. Rahman, A. Khan, and F. Cappello. “LibPressio-Predict: Flexible and Fast Infrastructure for Inferring Compression Performance,” SC-W ‘23: Proceedings of the SC ‘23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (November 2023), ACM, pp. 272-280. doi: 10.1145/3624062.3625124
+ +Vasan, A., T. Brettin, R. Stevens, A. Ramanathan, and V. Vishwanath. “Scalable Lead Prediction with Transformers using HPC Resources,” SC-W ‘23: Proceedings of the SC ‘23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (November 2023), ACM, pp. 123. doi: 10.1145/3624062.3624081
+ +Veseli, S., J. Hammonds, S. Henke, H. Parraga, and N. Schwarz. “Streaming Data from Experimental Facilities to Supercomputers for Real-Time Data Processing,” SC-W ‘23: Proceedings of the SC ‘23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (November 2023), ACM, pp. 2110-2117. doi: 10.1145/3624062.3624610
+ +Wan, S., A. P. Bhati, A. D. Wade, and P. V. Coveney. “Ensemble-Based Approaches Ensure Reliability and Reproducibility,” Journal of Chemical Information and Modeling (November 2023), ACS. doi: 10.1021/acs.jcim.3c01654
+ +Wilkins, M., H. Wang, P. Liu, B. Pham, Y. Guo, R. Thakur, P. Dinda, and N. Hardavellas. “Generalized Collective Algorithms for the Exascale Era,” 2023 IEEE International Conference on Cluster Computing (CLUSTER) (November 2023), Santa Fe, NM, IEEE. doi: 10.1109/cluster52292.2023.00013
+ +Zhang, C., B. Sun, X. Yu, Z. Xie, W. Zheng, K. A. Iskra, P. Beckman, and D. Tao. “Benchmarking and In-Depth Performance Study of Large Language Models on Habana Gaudi Processors,” SC-W ‘23: Proceedings of the SC ‘23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (November 2023), ACM, pp. 1759-1766. doi: 10.1145/3624062.3624257
+ +Zubair, M., A. Walden, G. Nastac, E. Nielsen, C. Bauinger, and X. Zhu. “Optimization of Ported CFD Kernels on Intel Data Center GPU Max 1550 using oneAPI ESIMD,” SC-W ‘23: Proceedings of the SC ‘23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (November 2023), ACM, pp. 1705-1712. doi: 10.1145/3624062.3624251
+ +Babbar, A., S. Ragunathan, D. Mitra, A. Dutta, and T. K. Patra. “Explainability and Extrapolation of Machine Learning Models for Predicting the Glass Transition Temperature of Polymers,” Journal of Polymer Science (December 2023), John Wiley and Sons. doi: 10.1002/pol.20230714
+ +Barwey, S., V. Shankar, V. Viswanathan, and R. Maulik. “Multiscale Graph Neural Network Autoencoders for Interpretable Scientific Machine Learning,” Journal of Computational Physics (December 2023), Elsevier. doi: 10.1016/j.jcp.2023.112537
+ +Chen, J. L., J. L. Prelesnik, B. Liang, Y. Sun, M. Bhatt, C. Knight, K. Mahesh, and J. I. Siepmann. “Large-Scale Molecular Dynamics Simulations of Bubble Collapse in Water: Effects of System Size, Water Model, and Nitrogen,” The Journal of Chemical Physics (December 2023), AIP Publishing. doi: 10.1063/5.0181781
+ +Ding, H. T., X. Gao, A. D. Hanlon, S. Mukherjee, P. Petreczky, Q. Shi, S. Syritsyn, and Y. Zhao. “Lattice QCD Predictions of Pion and Kaon Electromagnetic Form Factors at Large Momentum Transfer,” The 40th International Symposium on Lattice Field Theory (LATTICE2023) (December 2023), Batavia, IL, Sissa Medialab. doi: 10.22323/1.453.0320
+ +Duarte, J., H. Li, A. Roy, R. Zhu, E. A. Huerta, D. Diaz, P. Harris, R. Kansal, D. S. Katz, I. H. Kavoori, V. V. Kindratenko, F. Mokhtar, M. S. Neubauer, S. E. Park, M. Quinnan, R. Rusack, and Z. Zhao. “FAIR AI Models in High Energy Physics,” Machine Learning: Science and Technology (December 2023), IOP Publishing. doi: 10.1088/2632-2153/ad12e3
+ +Foreman, S., X.-Y. Jin, and J. C. Osborn. “MLMC: Machine Learning Monte Carlo for Lattice Gauge Theory,” The 40th International Symposium on Lattice Field Theory (LATTICE2023) (December 2023), Batavia, IL, Sissa Medialab. doi: 10.22323/1.453.0036
+ +Hackett, D. C., P. R. Oare, D. A. Pefkou, and P. E. Shanahan. “Gravitational Form Factors of the Pion from Lattice QCD,” Physical Review D (December 2023), APS. doi: 10.1103/physrevd.108.114504
+ +Narykov, O., Y. Zhu, T. Brettin, Y. A. Evrard, A. Partin, M. Shukla, F. Xia, A. Clyde, P. Vasanthakumari, J. H. Doroshow, and R. L. Stevens. “Integration of Computational Docking into Anti-Cancer Drug Response Prediction Models,” Cancers (December 2023), MDPI. doi: 10.3390/cancers16010050
+ +Sarkar, A., D. Lee, and U.-G. Meißner. “Floating Block Method for Quantum Monte Carlo Simulations,” Physical Review Letters (December 2023), APS. doi: 10.1103/PhysRevLett.131.242503
+ +Tian, M., E. A. Huerta, and H. Zheng. “AI Ensemble for Signal Detection of Higher Order Gravitational Wave Modes of Quasi-Circular, Spinning, Non-Precessing Binary Black Hole Mergers,” 2023 Workshop on Machine Learning and the Physical Sciences (December 2023), New Orleans, LA, Neural Information Processing Systems Foundation. doi: 10.48550/arXiv.2310.00052
+ +Wallace, B. C., A. M. Haberlie, W. S. Ashley, V. A. Gensini, and A. C. Michaelis. “Decomposing the Precipitation Response to Climate Change in Convection Allowing Simulations over the Conterminous United States,” Earth and Space Science (December 2023), John Wiley and Sons. doi: 10.1029/2023ea003094
+ +Wang, H.-H., S.-Y. Moon, H. Kim, G. Kim, W.-Y. Ah, Y. Y. Joo, and J. Cha. “Early Life Stress Modulates the Genetic Influence on Brain Structure and Cognitive Function in Children,” Heliyon (December 2023), Cell Press. doi: 10.1016/j.heliyon.2023.e23345
+ +Wildenberg, G., H. Li, V. Sampathkumar, A. Sorokina, and N. Kasthuri. “Isochronic Development of Cortical Synapses in Primates and Mice,” Nature Communications (December 2023), Springer Nature. doi: 10.1038/s41467-023-43088-3
+ +Ye, Z., F. Gygi, and G. Galli. “Raman Spectra of Electrified Si–Water Interfaces: First-Principles Simulations,” The Journal of Physical Chemistry Letters (December 2023), ACS. doi: 10.1021/acs.jpclett.3c03122
+ + +The Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy (DOE) Office of Science user facility at Argonne National Laboratory, enables breakthroughs in science and engineering by providing supercomputing and AI resources to the research community.
+ +ALCF computing resources—available to researchers from academia, industry, and government agencies—support large-scale computing projects aimed at solving some of the world’s most complex and challenging scientific problems. Through awards of computing time and support services, the ALCF enables researchers to accelerate the pace of discovery and innovation across a broad range of disciplines.
+ +As a key player in the nation’s efforts to provide the most advanced computing resources for science, the ALCF is helping to chart new directions in scientific computing through a convergence of simulation, data science, and AI methods and capabilities.
+ +Supported by the DOE’s Advanced Scientific Computing Research (ASCR) program, the ALCF and its partner organization, the Oak Ridge Leadership Computing Facility, operate leadership-class supercomputing resources that are orders of magnitude more powerful than the systems typically used for open scientific research.
+ +| 100+ Users | 11–100 Users | 1–10 Users |
+|---|---|---|
+| California, Illinois | Colorado, Georgia, Florida, Indiana, Iowa, Maryland, Massachusetts, Michigan, Minnesota, North Carolina, New Jersey, New Mexico, New York, Ohio, Oregon, Pennsylvania, Rhode Island, Tennessee, Texas, Virginia, Washington | Alabama, Arizona, Connecticut, Delaware, Idaho, Kansas, Louisiana, Mississippi, Missouri, Nebraska, North Dakota, Nevada, Oklahoma, South Carolina, Utah, Washington D.C., West Virginia, Wisconsin, Wyoming |
The process of planning for and installing a supercomputer takes years. It includes a critical period of stabilizing the system through validation, verification, and scale-up activities, which can vary for each machine. However, unlike ALCF’s previous or current production machines, Aurora’s long ramp-up journey has also included several configuration changes and COVID-related supply chain issues.
+ +Aurora is a highly advanced system designed for various AI and scientific computing applications. It will also be used to train a one-trillion-parameter large language model for scientific research. Aurora’s architecture boasts more endpoints in the interconnect technology than any other system, and it has over 60,000 GPUs, making it the system with the largest number of GPUs in the world.
+ +In 2023, ALCF made significant progress toward realizing Aurora’s full capabilities. In June, Aurora completed the installation of its 10,624th and final blade. Shortly after, Argonne shared the results of benchmarking runs for about half of Aurora to the TOP500. These results were used in the November announcement of the world’s fastest supercomputers, where Aurora secured the second position. Once the full system goes online, its theoretical peak performance is expected to be approximately two exaflops.
+ +Some application teams participating in the DOE’s Exascale Computing Project and the ALCF’s Aurora Early Science Program have begun using Aurora to scale and optimize their applications for the system’s initial science campaigns. Soon to follow will be all the early science teams and an additional 24 INCITE research teams in 2024.
+ +The arrival of this new exascale machine brings other big changes with it. Theta, one of ALCF’s production systems, was retired on December 31, 2023. ThetaGPU will be decoupled from Theta and reconfigured into a new system named Sophia, which will be used for AI development and as a production resource for visualization and analysis. Meanwhile, the ALCF AI Testbed will continue to make more production systems available to the research community.
+ +For more than three decades, researchers at Argonne have been developing tools and methods that connect powerful computing resources with large-scale experiments, such as the Advanced Photon Source and the DIII-D National Fusion Facility. Their work is shaping the future of inter-facility workflows by automating them and making them reusable and adaptable across different experiments. Argonne’s Nexus effort, in which the ALCF plays a key role, provides the framework for a unified platform to manage high-throughput workflows across the HPC landscape.
+ +In the following pages, you will learn more about how Nexus supports the DOE’s goal of building a broad-scale Integrated Research Infrastructure (IRI) that leverages supercomputing facilities for experiment-time data analysis. The IRI will accelerate the next generation of data-intensive research by combining scientific facilities, supercomputing resources, and new data technologies such as AI, machine learning, and edge computing.
+ +In 2023, we continued our commitment to education and workforce development by organizing a range of learning experiences and training events. As part of this effort, ALCF staff members led a pilot program called “Introduction to High-Performance Computing Bootcamp” in collaboration with other DOE labs. This immersive, week-long program gave STEM students the opportunity to work on energy justice projects using the computational and data science tools they learned throughout the week. In a separate effort, the ALCF continued developing the curriculum for its “Intro to AI-Driven Science on Supercomputers” training course, adapting the content to introduce undergraduate and graduate students to the basics of large language models in future course offerings.
+ +To conclude, I want to express my sincere gratitude to the exceptional staff, vendor partners, and program office who have all contributed to making the ALCF one of the world’s leading scientific supercomputing facilities. Each year, our Annual Report gives us the opportunity to share these achievements with you, and with many more exciting changes on the horizon, I truly appreciate the chance to do so again.
+ + +One of the most significant changes of the year was the retirement of Theta, Cooley, and the theta-fs0 storage system. They were great systems that helped our users accomplish a lot of science. From the operations perspective, there is a silver lining: retiring them reduces the number of systems we maintain and makes our operational environment more uniform. Still, it is sad to see them go.
+ +We made some significant improvements to our systems over the course of the year.
+Operationally, we continue to expand our support for DOE’s Integrated Research Infrastructure. Much of our initial work was with Argonne’s Advanced Photon Source, and while we continue to work with them, we are also collaborating with other facilities. On the operations side, we are working to make it faster and easier to create new on-demand endpoints, and to make those endpoints more robust and easier for scientists to manage.
+ +Last, but certainly not least, the Operations team has been deeply engaged in the Aurora bring-up. We have done extensive work to assist in the stabilization efforts, and we continue to develop software and processes to manage the enormous volume of logs and telemetry the system produces. We have provided support for scheduling, and our system administrators have developed extensive prologue and epilogue hooks to detect and, where possible, automatically remediate known issues while the vendors work on permanent fixes. We have also assisted in supporting the user community: because of non-disclosure agreement (NDA) requirements, we set up a dedicated Slack instance to facilitate discussion and have helped conduct training.
+ +We continue to collaborate with Altair Engineering and the OpenPBS community. We found some scale-related bugs that were making administration on Aurora slow and difficult; Altair quickly provided patches and integrated those fixes into production releases. We also continued our work on porting PBS to the AI Testbed systems, though their unique hardware architectures and constraints have made this challenging, and later in the year we tabled the AI Testbed work to focus on Aurora.
+ + + +Over the past year, we made considerable progress in deploying Aurora, enhancing our AI for Science capabilities, and advancing the development of DOE’s Integrated Research Infrastructure (IRI). On the Aurora front, our team was instrumental in enabling a partial-system run that earned the #2 spot on the TOP500 list in November. It was also great to see Aurora’s DAOS storage system place #1 on the IO500 production list. We helped get several early science applications up and running on Aurora, some of which have scaled to 2,000 nodes with very promising performance compared to other GPU-powered systems. Our team also made notable advances in scientific visualization, demonstrating interactive visualization capabilities using blood flow simulation data generated with the HARVEY code on Aurora hardware and producing animations from HACC cosmology simulations that ran at scale on the system.
+ +We continued to work closely with Intel to improve and scale the oneAPI software stack, bringing many pieces into production. On Aurora, the AI for Science models driving the deployment of the AI frameworks (TensorFlow, PyTorch) have achieved average single-GPU performance more than 2x that of an NVIDIA A100, driven by close collaboration between Argonne staff and Intel engineers. Other efforts included using the Argonne-developed chipStar HIP implementation for Intel GPUs to get HIP applications running on Aurora. To help support Aurora users and the broader exascale computing community in the future, we played a role in launching the DAOS Foundation, which is working to advance the use of DAOS for next-generation HPC and AI/ML workloads, and the Unified Acceleration (UXL) Foundation, which was formed to drive an open-standard accelerator software ecosystem. ALCF team members also continued to contribute to the development of standards for various programming languages and frameworks, including C++, OpenCL, SYCL, and OpenMP.
+ +In the AI for science realm, we enhanced the capabilities of the ALCF AI Testbed with two new system deployments (Groq, Graphcore) and two system upgrades (Cerebras, SambaNova). With a total of four different accelerators available for open science research, we partnered with the vendors to host a series of ALCF training workshops, as well as an SC23 tutorial, that introduced each system’s hardware and software and helped researchers get started. The team also published a paper at SC23 on performance portability across the three major GPU vendors’ architectures, demonstrating that all three are well suited to AI for science workloads, with the Intel GPU on Aurora delivering the best performance at the time of the study. Our staff contributed to the development of MLCommons’ new storage performance benchmark for AI/ML workloads and submitted results using our Polaris supercomputer and Eagle file system, demonstrating efficient I/O operations for state-of-the-art AI applications at scale. In addition, we deployed a large language model service on Sunspot and demonstrated its capabilities at Intel’s SC23 booth.
+ +Finally, our ongoing efforts to develop IRI tools and capabilities got a boost with Polaris and the launch of Argonne’s Nexus, a coordinated effort that builds on our decades of research into integrating HPC resources with experiments. We currently have workflows from the Advanced Photon Source and the DIII-D National Fusion Facility running on Polaris, as well as workflows prototyped for DOE’s Earth System Grid Federation and Fermilab’s flagship Short-Baseline Neutrino Program. Our team also delivered talks on our IRI research at the Monterey Data Conference, the Smoky Mountains Computational Sciences and Engineering Conference, Confab23, and the DOE booth at SC23. With momentum building for continued advances in our IRI activities, the Aurora deployment, and AI for science, we have a lot to look forward to in 2024.
+ + + +It was a busy year for the ALCF as we continued to make strides in deploying new systems, tools, and capabilities to support HPC- and AI-driven scientific research, while also broadening our outreach efforts to engage with new communities. In the outreach space, we partnered with colleagues at the Exascale Computing Project, NERSC, OLCF, and the Sustainable Horizons Institute to host DOE’s first “Intro to HPC Bootcamp.” With an emphasis on energy justice and workforce development, the event welcomed around 60 college students (many with little to no background in scientific computing) to use HPC for hands-on projects focused on making positive social impacts. It was very gratifying to see how engaged the students were in this immersive, week-long event. The bootcamp is a great addition to our extensive outreach efforts aimed at cultivating the next-generation computing workforce.
+ +Our ongoing efforts to develop an Integrated Research Infrastructure (IRI) also made considerable progress this year. As a member of DOE’s IRI Task Force and IRI Blueprint Activity over the past few years, I’ve had the opportunity to collaborate with colleagues across the national labs to formulate a long-term strategy for integrating computing facilities like the ALCF with data-intensive experimental and observational facilities. In 2023, we released the IRI Architecture Blueprint Activity Report, which lays out a framework for moving ahead with coordinated implementation efforts across DOE. At the same time, the ALCF continued to develop and demonstrate tools and methods for integrating our supercomputers with experimental facilities, such as Argonne’s Advanced Photon Source and the DIII-D National Fusion Facility. This year, Argonne launched the “Nexus” effort, which brings together all of the lab’s new and ongoing research activities and partnerships in this domain, ensuring they align with DOE’s broader IRI vision.
+ +We also made progress toward launching the Argonne Enterprise Registration System, a new lab-wide registration platform aimed at standardizing data collection and processing for various categories of non-employees, including facility users. In 2023, we defined system requirements and issued a request for proposals for building the platform. Ultimately, the new system will help eliminate redundant data entry, simplify registration processes for both users and staff, and enhance our reporting capabilities.
+ +As a final note on 2023, we kicked off the ALCF-4 project to plan for our next-generation supercomputer, with DOE approving the CD-0 (Critical Decision-0) mission need for the project in April. We also established the leadership team (with myself as the project director and Kevin Harms as technical director) and began conversations with vendors to discuss their technology roadmaps. We look forward to ramping up the ALCF-4 project in 2024.
+ + + +Year after year, our user community breaks new ground in using HPC and AI for science. From improving climate modeling capabilities to speeding up the discovery of new materials and advancing our understanding of complex cosmological phenomena, the research generated by ALCF users never ceases to amaze me.
+ +In 2023, we supported 18 INCITE projects and 33 ALCC projects (across two ALCC allocation cycles), as well as numerous Director’s Discretionary projects. Many of these projects were among the last to use Theta, which was retired at the end of the year. Over its 6+ year run as our production supercomputer, Theta delivered 202 million node-hours to 636 projects. The system also played a key role in bolstering our facility’s AI and data science capabilities. Theta was a remarkably productive and reliable machine that will be missed by ALCF users and staff alike.
+ +Research projects supported by ALCF computing resources produced 240 publications in 2023. You can read about several of these efforts in the science highlights section of this report, including a University of Illinois Chicago team that identified the exact reaction coordinates for a key protein mechanism for the first time; a team from the University of Dayton Research Institute and Air Force Research Laboratory that shed light on the complex thermal environments encountered by hypersonic vehicles; and an Argonne team that investigated the impact of disruptions in cancer screening caused by the COVID-19 pandemic.
+ +It was also a very exciting year for Aurora as early science teams began using the exascale system for the first time. After years of diligent work to prepare codes for Aurora’s unique architecture, the teams were able to begin scaling and optimizing their applications on the machine. Their early performance results have been very promising, giving us a glimpse of what will be possible when teams start using the full supercomputer for their research campaigns next year.
+ + +