Primeur weekly 2017-09-04

Special

ETP4HPC to lay out roadmap for tangible HPC products within Europe ...

Focus

Primeur Magazine team unveils the largest directory of HPC organisations in the universe ...

Quantum computing

First quantum annealing computer in the U.S. to have more than 2000 qubits installed and operational ...

Is the world on the brink of a computing revolution? - Quantum computing at the 5th Heidelberg Laureate Forum ...

Focus on Europe

Smart tomographic sensors control industrial processes of tomorrow ...

DeepL to launch DeepL Translator ...

Middleware

Learning Database speeds queries from hours to seconds ...

Unifying statistics, computer science, and applied mathematics ...

New theorems help robots to correct errors on-the-fly and learn from each other ...

Ace Computers delivers BeeGFS parallel file system for supercomputers ...

Embry-Riddle Aeronautical University acquires Cray CS cluster supercomputer to advance aerospace research ...

Hardware

ScaleMP completes $10 million funding round to accelerate growth ...

Optalysys and Earlham Institute demonstrate results of breakthrough optical processing for sequence alignment ...

University of Southern Mississippi gains campus-wide HPC cluster for research ...

Applications

New boarding procedures, smaller cabin size may limit infection on planes ...

Shaden Smith and Yang You announced as recipients of 2017 ACM/IEEE-CS George Michael Memorial HPC Fellowships ...

A new cosmic lab to view the Big Bang movie ...

Optical control of magnetic memory - New insights into fundamental mechanisms ...

Machine-learning earthquake prediction in lab shows promise ...

Caching system could make data centres more energy efficient ...

Supercomputing the weather with Thor ...

The Cloud

VMware and Dell EMC partner to deliver first data protection solution for VMware Cloud on AWS ...

Oracle expands IoT Cloud portfolio, enabling customers to accelerate intelligence and ROI from connected assets ...

VMware advances software to help customers modernize data centres ...

Fujitsu and VMware extend global partnership to empower organisations' digital transformations ...

VMware and AWS announce initial availability of VMware Cloud on AWS ...

VMware and Pivotal launch Pivotal Container Service (PKS) and collaborate with Google Cloud to bring Kubernetes to enterprise customers ...

A new cosmic lab to view the Big Bang movie


NCSA helped Argonne National Laboratory's Katrin Heitmann and other researchers conduct a huge data transfer experiment, culminating in the transfer of 1.8 PB of data to a DDN Storage-provided server rack located 1,200 miles away in NCSA's booth at SC16 in Salt Lake City, with images displayed on screens in the NCSA booth. Robert Sisneros, NCSA Data Analysis and Visualization team leader, created the images shown here from the NCSA booth using a custom plugin developed for VisIt and the Blue Waters supercomputer in Illinois. Photo: Maxine Brown, Electronic Visualization Laboratory.
28 Aug 2017 Urbana-Champaign, Argonne - If you have ever had to wait those agonizing minutes in front of a computer for a movie or large file to load, you'll likely sympathize with the plight of cosmologists at the U.S. Department of Energy's (DOE) Argonne National Laboratory (ANL). But instead of watching TV dramas, they are trying to transfer, as fast and as accurately as possible, the huge amounts of data that make up movies of the universe - computationally demanding and highly intricate simulations of how our cosmos evolved after the Big Bang.

Researchers linked together the Blue Waters supercomputer at the University of Illinois' National Center for Supercomputing Applications (NCSA) and the Mira supercomputer at the Argonne Leadership Computing Facility (ALCF), then transferred 1.8 PB of data 1,200 miles to a DDN Storage-provided server rack sitting in NCSA's booth on the exhibit floor of the Supercomputing '16 (SC16) conference.

The link between the computers used high-speed networking through the Department of Energy's Energy Sciences Network (ESnet). The researchers sought, in part, to take full advantage of the SC conference's fast SCinet infrastructure to do real science; typically it is used for technology demonstrations rather than for solving real scientific problems. The full experiment ran successfully for 24 hours without interruption and led to a valuable new cosmological data set that researchers started to analyze on the SC16 show floor.

"After over half a year of focused collaboration aimed at advancing this framework for truly distributed high resolution scientific modelling, data transfer, and visualization, this work was demonstrated at SC16", stated David Wheeler, lead network engineer at NCSA. "We transferred at a rate of almost a petabyte per day from ANL over ESnet to Blue Waters, and then from Blue Waters to a DDN Storage system in the NCSA booth, nearly filling a dedicated 100G circuit with scientific data. It was truly exciting to work with all of these great project members to achieve and demonstrate these advances in scientific understanding."

In addition to producing valuable scientific data, the experiment also yielded useful insights into workflows and data transfer.

In a new approach to enable scientific breakthroughs, researchers linked together supercomputers at the Argonne Leadership Computing Facility (ALCF) and at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign (UI). This link enabled scientists to transfer massive amounts of data and to run two different types of demanding computations in a coordinated fashion - referred to technically as a workflow.

What distinguishes the new work from typical workflows is the scale of the computation, the associated data generation and transfer and the scale and complexity of the final analysis. Researchers also tapped the unique capabilities of each supercomputer: They performed cosmological simulations on the ALCF's Mira supercomputer, and then sent huge quantities of data to UI's Blue Waters, which is better suited to perform the required data analysis tasks because of its processing power and memory balance.
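
To make that division of labour concrete, the sketch below mimics the coordinated workflow in plain Python: a producer stands in for the simulation running on Mira, each output frame is handed to a transfer step as soon as it exists, and a consumer stands in for the analysis on Blue Waters. All helper functions and paths here are hypothetical placeholders; the actual experiment orchestrated these stages with the Swift workflow environment and the Globus transfer service (described later in this article), not with a script like this.

    import queue
    import threading

    # Hypothetical stand-ins for the real stages: a simulation step on Mira,
    # a wide-area transfer over ESnet, and an analysis pass on Blue Waters.
    def simulate_frame(step: int) -> str:
        return f"/mira/scratch/frame_{step:04d}.bin"      # placeholder path

    def transfer_to_ncsa(local_path: str) -> str:
        # In the real workflow this was a wide-area transfer; here it is a rename.
        return local_path.replace("/mira/scratch", "/bluewaters/incoming")

    def analyze_frame(remote_path: str) -> None:
        print(f"analyzing {remote_path}")

    # A small bounded queue models the limited on-site buffer: frames leave the
    # ALCF file system as soon as they are generated instead of piling up there.
    frames: "queue.Queue[str]" = queue.Queue(maxsize=4)

    def producer(n_steps: int) -> None:
        for step in range(n_steps):
            frames.put(transfer_to_ncsa(simulate_frame(step)))
        frames.put(None)                                   # sentinel: simulation done

    def consumer() -> None:
        while (path := frames.get()) is not None:
            analyze_frame(path)

    sim = threading.Thread(target=producer, args=(8,))
    sim.start()
    consumer()
    sim.join()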

For cosmology, observations of the sky and computational simulations go hand in hand, as each informs the other. Cosmological surveys are becoming ever more complex as telescopes reach deeper into space and time, mapping out the distributions of galaxies at farther and farther distances, at earlier epochs of the evolution of the universe.

The very nature of cosmology precludes carrying out controlled lab experiments, so scientists rely instead on simulations to provide a unique way to create a virtual cosmological laboratory. "The simulations that we run are a backbone for the different kinds of science that can be done experimentally, such as the large-scale experiments at different telescope facilities around the world", stated Argonne cosmologist Katrin Heitmann. "We talk about building the 'universe in the lab', and simulations are a huge component of that."

Not just any computer is up to the immense challenge of generating and handling datasets that can run to many petabytes a day, according to Katrin Heitmann. "You really need high-performance supercomputers that are capable of not only capturing the dynamics of trillions of different particles, but also doing exhaustive analysis on the simulated data", she stated. "And sometimes, it's advantageous to run the simulation and do the analysis on different machines."

Typically, cosmological simulations can only output a fraction of the frames of the computational movie as it is running because of data storage restrictions. In this case, Argonne sent every data frame to NCSA as soon as it was generated, allowing Heitmann and her team to greatly reduce the storage demands on the ALCF file system. "You want to keep as much data around as possible", Katrin Heitmann stated. "In order to do that, you need a whole computational ecosystem to come together: the fast data transfer, having a good place to ultimately store that data and being able to automate the whole process."

In particular, Argonne streamed the data to Blue Waters for analysis immediately after it was produced. The first challenge was to set up the transfer to sustain a bandwidth of one petabyte per day.
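
As a rough check of that target (assuming decimal petabytes and a 24-hour day), one petabyte per day works out to roughly 11.6 GB/s, or about 93 Gbit/s, which is consistent with the earlier observation that the transfers nearly filled a dedicated 100G circuit:

    # Back-of-the-envelope rate for sustaining one petabyte per day,
    # assuming decimal units (1 PB = 10**15 bytes).
    PETABYTE = 1e15                      # bytes
    SECONDS_PER_DAY = 24 * 60 * 60

    bytes_per_second = PETABYTE / SECONDS_PER_DAY
    gigabits_per_second = bytes_per_second * 8 / 1e9

    print(f"{bytes_per_second / 1e9:.1f} GB/s sustained")          # ~11.6 GB/s
    print(f"{gigabits_per_second:.0f} Gbit/s on a 100G circuit")   # ~93 Gbit/s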

Once Blue Waters performed the first pass of data analysis, it reduced the raw data - with high fidelity - to a manageable size. At that point, researchers sent the data to a distributed repository at Argonne, the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory and the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. Cosmologists can access and further analyze the data through a system built by researchers in Argonne's Mathematics and Computer Science Division in collaboration with Argonne's High Energy Physics Division.

Argonne and the University of Illinois built one such central repository on the Supercomputing '16 conference exhibition floor in November 2016, with memory units supplied by DDN Storage. The data moved over 1,400 miles to the conference's SCinet network.

"External data movement at high speeds significantly impacts a supercomputer's performance", stated Brandon George, systems engineer at DDN Storage. "Our solution addresses that issue by building a self-contained data transfer node with its own high-performance storage that takes in a supercomputer's results and the responsibility for subsequent data transfers of said results, leaving supercomputer resources free to do their work more efficiently."

Argonne senior computer scientist Franck Cappello, who led the effort, likened the software workflow that the team developed to accomplish these goals to an orchestra. In this "orchestra", Franck Cappello said, the software connects individual sections, or computational resources, to make a richer, more complex sound.

He added that his collaborators hope to improve the performance of the software to make the production and analysis of extreme-scale scientific data more accessible. "The Swift workflow environment and the Globus file transfer service were critical technologies to provide the effective and reliable orchestration and the communication performance that were required by the experiment", Franck Cappello stated.

"The idea is to have data centres like we have for the commercial Cloud. They will hold scientific data and will allow many more people to access and analyze this data, and develop a better understanding of what they're investigating", stated Franck Cappello, who also holds an affiliate position at NCSA and serves as director of the international Joint Laboratory for Extreme Scale Computing, based in Illinois. "In this case, the focus was cosmology and the universe. But this approach can aid scientists in other fields in reaching their data just as well."

Argonne computer scientist Rajkumar Kettimuthu and David Wheeler, lead network engineer at NCSA, were instrumental in establishing the configuration that actually reached this performance. Maxine Brown from the University of Illinois provided the Sage environment to display the analysis results at extreme resolution. Justin Wozniak from Argonne developed the whole workflow environment using Swift to orchestrate and perform all operations.

The Argonne Leadership Computing Facility, the Oak Ridge Leadership Computing Facility, the Energy Sciences Network and the National Energy Research Scientific Computing Center are DOE Office of Science User Facilities. Blue Waters is the largest leadership-class supercomputer funded by the National Science Foundation. Part of this work was funded by DOE's Office of Science.

Source: National Center for Supercomputing Applications - NCSA; Argonne National Laboratory - ANL
