Researchers linked together the Blue Waters supercomputer at the University of Illinois' National Center for Supercomputing Applications (NCSA) and the Mira supercomputer at the Argonne Leadership Computing Facility (ALCF), then transferred 1.8 PB of data 1,200 miles to a DDN Storage-provided server rack sitting in NCSA's booth on the exhibit floor of the Supercomputing '16 (SC16) conference.
The link between the computers used high-speed networking through the Department of Energy's Energy Sciences Network (ESnet). The researchers sought, in part, to take full advantage of the SC conference's fast SciNet infrastructure to do real science; it is typically used for demonstrations of technology rather than for solving real scientific problems. The full experiment ran successfully for 24 hours without interruption and led to a valuable new cosmological data set that researchers started to analyze on the SC16 show floor.
"After over half a year of focused collaboration aimed at advancing this framework for truly distributed high-resolution scientific modelling, data transfer, and visualization, this work was demonstrated at SC16", stated David Wheeler, lead network engineer at NCSA. "We transferred at a rate of almost a petabyte per day from ANL over ESnet to Blue Waters, and then from Blue Waters to a DDN Storage system in the NCSA booth, nearly filling a dedicated 100G circuit with scientific data. It was truly exciting to work with all of these great project members to achieve and demonstrate these advances in scientific understanding."
In addition to obtaining valuable scientific data, this experiment also yielded valuable insights into workflows and data transfer.
In a new approach to enable scientific breakthroughs, researchers linked together supercomputers at the Argonne Leadership Computing Facility (ALCF) and at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign (UI). This link enabled scientists to transfer massive amounts of data and to run two different types of demanding computations in a coordinated fashion - referred to technically as a workflow.
What distinguishes the new work from typical workflows is the scale of the computation, the associated data generation and transfer, and the scale and complexity of the final analysis. Researchers also tapped the unique capabilities of each supercomputer: they performed cosmological simulations on the ALCF's Mira supercomputer, and then sent huge quantities of data to UI's Blue Waters, which is better suited to the required data analysis tasks because of its processing power and memory balance.
For cosmology, observations of the sky and computational simulations go hand in hand, as each informs the other. Cosmological surveys are becoming ever more complex as telescopes reach deeper into space and time, mapping out the distributions of galaxies at farther and farther distances, at earlier epochs of the evolution of the universe.
The very nature of cosmology precludes carrying out controlled lab experiments, so scientists rely instead on simulations to provide a unique way to create a virtual cosmological laboratory. "The simulations that we run are a backbone for the different kinds of science that can be done experimentally, such as the large-scale experiments at different telescope facilities around the world", stated Argonne cosmologist Katrin Heitmann. "We talk about building the 'universe in the lab', and simulations are a huge component of that."
Not just any computer is up to the immense challenge of generating and dealing with datasets that can exceed many petabytes a day, according to Katrin Heitmann. "You really need high-performance supercomputers that are capable of not only capturing the dynamics of trillions of different particles, but also doing exhaustive analysis on the simulated data", she stated. "And sometimes, it's advantageous to run the simulation and do the analysis on different machines."
Typically, cosmological simulations can only output a fraction of the frames of the computational movie as it is running because of data storage restrictions. In this case, Argonne sent every data frame to NCSA as soon as it was generated, allowing Heitmann and her team to greatly reduce the storage demands on the ALCF file system. "You want to keep as much data around as possible", Katrin Heitmann stated. "In order to do that, you need a whole computational ecosystem to come together: the fast data transfer, having a good place to ultimately store that data and being able to automate the whole process."
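The pattern described above, shipping each frame off-site as soon as it is written so that local scratch storage stays small, can be pictured with a toy producer/consumer model. This is a simplified illustration only; the actual pipeline used the SWIFT workflow environment and the Globus transfer service, not this code.

```python
import queue
import threading

def run_streaming_pipeline(n_frames):
    """Toy model of the streaming setup: every simulation frame is handed
    to a transfer worker as soon as it is produced, so the local scratch
    area never needs to hold more than one frame at a time."""
    outbox = queue.Queue(maxsize=1)  # back-pressure: at most one frame waiting locally
    received = []                    # stands in for storage at the remote site
    peak_local = 0

    def transfer_worker():
        while True:
            frame = outbox.get()
            if frame is None:        # sentinel: the simulation has finished
                return
            received.append(frame)   # "transfer" the frame off-site

    worker = threading.Thread(target=transfer_worker)
    worker.start()
    for frame in range(n_frames):    # the "simulation" producing frames
        outbox.put(frame)            # blocks if the previous frame is still in flight
        peak_local = max(peak_local, outbox.qsize())
    outbox.put(None)
    worker.join()
    return received, peak_local
```

The bounded queue is the key design point: the producer stalls rather than letting unsent frames pile up on the local file system, which is what let the team avoid storing the full movie at the ALCF.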
In particular, Argonne transferred the data to Blue Waters for analysis immediately as it was produced. The first challenge was to set up the transfer to sustain a bandwidth of one petabyte per day.
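As a sanity check on that target, one petabyte per day works out to roughly 93 gigabits per second, which is consistent with the quoted rate "nearly filling a dedicated 100G circuit" (decimal units assumed: 1 PB = 10^15 bytes):

```python
# Sustaining 1 PB/day: convert to gigabits per second.
# Decimal units assumed: 1 PB = 1e15 bytes, 1 Gb = 1e9 bits.
PETABYTE_BYTES = 1e15
SECONDS_PER_DAY = 24 * 60 * 60          # 86,400 s

rate_gbps = PETABYTE_BYTES * 8 / SECONDS_PER_DAY / 1e9
print(f"{rate_gbps:.1f} Gb/s")          # about 92.6 Gb/s on a 100G link
```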
Once Blue Waters performed the first pass of data analysis, it reduced the raw data, with high fidelity, to a manageable size. At that point, researchers sent the data to a distributed repository spanning Argonne, the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory and the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. Cosmologists can access and further analyze the data through a system built by researchers in Argonne's Mathematics and Computer Science Division in collaboration with Argonne's High Energy Physics Division.
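The shape of such a first-pass reduction can be pictured with a hypothetical example, not the team's actual analysis code: a long list of particle positions collapses into a coarse density grid that preserves the large-scale clustering at a small fraction of the size.

```python
def reduce_to_density_grid(particles, box_size, bins):
    """Toy data reduction: bin 2-D particle positions into a coarse
    density grid. The grid is far smaller than the particle list but
    still records where matter has clustered."""
    cell = box_size / bins
    grid = [[0] * bins for _ in range(bins)]
    for x, y in particles:
        # Clamp to the last cell so points on the upper edge stay in range.
        i = min(int(x / cell), bins - 1)
        j = min(int(y / cell), bins - 1)
        grid[i][j] += 1
    return grid
```

A real reduction would work on 3-D positions and velocities and keep summary statistics chosen by the cosmologists; the point here is only the form of the operation: many particles in, a small fixed-size product out.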
Argonne and the University of Illinois built one such central repository on the Supercomputing '16 conference exhibition floor in November 2016, with storage hardware supplied by DDN Storage. The data moved over 1,400 miles via the conference's SciNet network.
"External data movement at high speeds significantly impacts a supercomputer's performance", stated Brandon George, systems engineer at DDN Storage. "Our solution addresses that issue by building a self-contained data transfer node with its own high-performance storage that takes in a supercomputer's results and the responsibility for subsequent data transfers of said results, leaving supercomputer resources free to do their work more efficiently."
Argonne senior computer scientist Franck Cappello, who led the effort, likened the software workflow that the team developed to accomplish these goals to an orchestra. In this "orchestra", Franck Cappello said, the software connects individual sections, or computational resources, to make a richer, more complex sound.
He added that his collaborators hope to improve the performance of the software to make the production and analysis of extreme-scale scientific data more accessible. "The SWIFT workflow environment and the Globus file transfer service were critical technologies to provide the effective and reliable orchestration and the communication performance that were required by the experiment", Franck Cappello stated.
"The idea is to have data centres like we have for the commercial Cloud. They will hold scientific data and will allow many more people to access and analyze this data, and develop a better understanding of what they're investigating", stated Franck Cappello, who also holds an affiliate position at NCSA and serves as director of the international Joint Laboratory on Extreme Scale Computing, based in Illinois. "In this case, the focus was cosmology and the universe. But this approach can aid scientists in other fields in reaching their data just as well."
Argonne computer scientist Rajkumar Kettimuthu and David Wheeler, lead network engineer at NCSA, were instrumental in establishing the configuration that actually reached this performance. Maxine Brown from the University of Illinois provided the Sage environment to display the analysis results at extreme resolution. Justin Wozniak from Argonne developed the whole workflow environment using SWIFT to orchestrate and perform all operations.
The Argonne Leadership Computing Facility, the Oak Ridge Leadership Computing Facility, the Energy Sciences Network and the National Energy Research Scientific Computing Center are DOE Office of Science User Facilities. Blue Waters is the largest leadership-class supercomputer funded by the National Science Foundation. Part of this work was funded by DOE's Office of Science.