Primeur weekly 2018-07-09

Focus on Europe

HPE Helps EPFL Blue Brain project unlock the secrets of the brain ...

NVIDIA and Luxembourg Government Announce Cooperation on Artificial Intelligence and High Performance Computing  ...

Hardware

Green Revolution Cooling Now Doing Business as GRC ...

Samsung Foundry and Arm expand collaboration to drive HPC Solutions ...

Huawei AI Fabric Ultra-High-Speed Ethernet Solution Passes EANTC's High-Performance Data Center Test ...

Princeton Research Computing introduces the University's newest TIGER supercomputer ...

Newcastle's new supercomputer - called Rocket - installed ...

Applications

NNSA awards $10 million center grant to Texas A&M-led consortium ...

Stem cell therapy drug may protect against smoke-related COPD symptoms ...

Atos launches the most comprehensive AI software suite available on the market to simplify and accelerate adoption ...

High performance nitride semiconductor for environmentally friendly photovoltaics ...

Berkeley Lab Team Wins Data-Driven Scavenger Hunt for Simulated Nuclear Materials ...

NSF awards more than $150 million to early career researchers in engineering and computer science ...

We Make The City Festival Workshop: “Audit the Algorithm” ...

Data mining the law ...

Copper miners can slash their energy and water use for every tonne of the metal produced thanks to a breakthrough ore sorting analyser developed by CSIRO ...

Fermilab computing experts bolster NOvA evidence: 1 million compute cores consumed on NERSC Cori supercomputer ...

The CesgaHack returns in September to help scientists to accelerate their applications ...

The Cloud

The Cloudifacturing Programme to Distribute 735,000 Euro to the European Industry for HPC in the Cloud Applications ...

Verne Global brings sustainably-powered HPC to the G-Cloud 10 marketplace ...

Fermilab computing experts bolster NOvA evidence: 1 million compute cores consumed on NERSC Cori supercomputer

At the Neutrino 2018 conference, Fermilab’s NOvA neutrino experiment announced that it had seen strong evidence of muon antineutrinos oscillating into electron antineutrinos over long distances.

3 Jul 2018 Berkeley - The NOvA neutrino experiment, in collaboration with the Department of Energy’s Scientific Discovery through Advanced Computing (SciDAC-4) program and the HEPCloud program at DOE’s Fermi National Accelerator Laboratory, performed the largest-scale analysis ever carried out to support the recent evidence of antineutrino oscillation, a phenomenon that may hold clues to how our universe evolved. Using Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), located at Lawrence Berkeley National Laboratory, NOvA used over 1 million computing cores, or CPUs, between May 14 and 15, and again for a short period one week later.

This is the largest number of CPUs ever used concurrently over this duration - about 54 hours - for a single high-energy physics experiment. This unprecedented amount of computing enabled scientists to carry out some of the most complicated techniques used in neutrino physics, allowing them to dig deeper into the seldom-seen interactions of neutrinos. This Cori allocation was more than 400 times the amount of Fermilab computing allocated to the NOvA experiment and 50 times the total computing capacity at Fermilab allocated to all of its rare-physics experiments. A continuation of the analysis was performed on NERSC’s Cori and Edison supercomputers one week later. In total, nearly 35 million core-hours were consumed by NOvA in the 54-hour period.
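
Those figures are internally consistent, as a quick back-of-the-envelope calculation shows (a minimal Python sketch using only the numbers quoted above; nothing here is NOvA's actual accounting): a sustained 1 million cores over 54 hours would yield 54 million core-hours, so the reported 35 million core-hours implies an average concurrency of roughly 650,000 cores.

```python
# Back-of-the-envelope check using only the figures quoted in the article;
# this is not NOvA's actual resource accounting.
peak_cores = 1_000_000         # peak concurrent cores reported on Cori
total_core_hours = 35_000_000  # "nearly 35 million core-hours"
wall_hours = 54                # "the 54-hour period"

# Ceiling if the peak had been sustained for the whole run:
ceiling = peak_cores * wall_hours               # 54 million core-hours

# Average concurrency implied by the actual consumption:
avg_concurrency = total_core_hours / wall_hours  # ~648,000 cores

print(f"sustained-peak ceiling: {ceiling:,} core-hours")
print(f"implied average concurrency: {avg_concurrency:,.0f} cores")
```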

“The special thing about NERSC is that it enabled NOvA to do the science at a new level of precision, a much finer resolution with greater statistical accuracy within a finite amount of time,” said Andrew Norman, NOvA physicist at Fermilab. “It facilitated doing analysis of real data coming off the detector at a rate 50 times faster than that achieved in the past. The first round of analysis was done within 16 hours. Experimenters were able to see what was coming out of the data, and in less than six hours everyone was looking at it. Without these types of resources, we, as a collaboration, could not have turned around results as quickly and understood what we were seeing.”

The experiment presented the latest findings from the recently collected data at the Neutrino 2018 conference in Germany on June 4.

“The speed with which NERSC allowed our analysis team to run sophisticated and intense calculations needed to produce our final results has been a game-changer,” said Fermilab scientist Peter Shanahan, NOvA co-spokesperson. “It accelerated our time-to-results on the last step in our analysis from weeks to days, and that has already had a huge impact on what we were able to show at Neutrino 2018.”

In addition to the state-of-the-art NERSC facility, NOvA relied on work done within the SciDAC HEP Data Analytics on HPC (high-performance computing) project and the Fermilab HEPCloud facility. Both efforts are led by Fermilab scientific computing staff, and both teams worked with researchers at NERSC to support NOvA’s antineutrino oscillation evidence.

The current standard practice for Fermilab experimenters is to perform similar analyses using less complex calculations through a combination of traditional high-throughput computing and the distributed computing provided by Open Science Grid, a national partnership between laboratories and universities for data-intensive research. These are substantial resources, but they follow a different model: both deliver a large amount of computing spread over a long period of time, and some resources are offered only at a low priority, so their use may be preempted by higher-priority demands. For complex, time-sensitive analyses such as NOvA’s, researchers instead need the faster turnaround enabled by modern high-performance computing techniques.
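
The difference between the two models comes down to concurrency: the same core-hour budget finishes far sooner when many cores run at once. The illustrative Python sketch below makes this concrete; the high-throughput pool size is hypothetical, chosen only to show the scaling.

```python
# Illustrative only: identical core-hour budgets at different concurrency.
# The high-throughput pool size below is hypothetical.
budget_core_hours = 35_000_000  # figure quoted earlier in the article

scenarios = {
    "high-throughput pool (hypothetical 20,000 cores)": 20_000,
    "HPC burst (~650,000 cores on average)": 650_000,
}

for label, cores in scenarios.items():
    wall_hours = budget_core_hours / cores
    print(f"{label}: {wall_hours:,.0f} hours (~{wall_hours / 24:.0f} days)")
```

At a hypothetical 20,000 concurrent cores the same budget would take on the order of 70 days; at the burst scale NOvA used, it fits in about two days, consistent with the "weeks to days" speed-up quoted above.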

SciDAC-4 is a DOE Office of Science program that funds collaboration between experts in mathematics, physics and computer science to solve difficult problems. The HEP on HPC project was funded specifically to explore computational analysis techniques for doing large-scale data analysis on DOE-owned supercomputers. Running the NOvA analysis at NERSC, the mission supercomputing facility for the DOE Office of Science, was a task perfectly suited for this project. Fermilab’s Jim Kowalkowski is the principal investigator for HEP on HPC, which also has collaborators from DOE’s Argonne National Laboratory, Berkeley Lab, University of Cincinnati and Colorado State University.

"This analysis forms a kind of baseline. We’re just ramping up, just starting to exploit the other capabilities of NERSC at an unprecedented scale," Kowalkowski said.

The project's goal for its first year is to take compute-heavy analysis jobs like NOvA’s and enable them on supercomputers. That means not just running the analysis, but also changing how calculations are done and learning how to revamp the tools that manipulate the data, all in an effort to improve the techniques used for these analyses and to leverage the full computational power and unique capabilities of modern high-performance computing facilities. In addition, the project seeks to consume all of the available computing cores at once to shorten the time to results.
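
The article does not show NOvA's code, but the pattern it describes - splitting a dataset across many cores, processing the chunks independently, and combining the partial results - looks roughly like the generic map-reduce sketch below. The event data, the selection cut, and the histogram binning are all placeholders.

```python
# Generic map-reduce sketch of the "all cores at once" pattern described
# above. This is not NOvA's analysis code; data, cut, and binning are
# placeholders.
from multiprocessing import Pool

import numpy as np

BINS = np.linspace(0.0, 5.0, 51)  # hypothetical energy binning in GeV

def fill_partial_histogram(events):
    """Apply a placeholder selection cut and histogram one chunk of events."""
    selected = events[events > 0.5]
    counts, _ = np.histogram(selected, bins=BINS)
    return counts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    events = rng.exponential(1.5, size=1_000_000)  # stand-in for detector data
    chunks = np.array_split(events, 64)            # one chunk per worker task

    with Pool() as pool:                           # uses all available cores
        partials = pool.map(fill_partial_histogram, chunks)

    total = np.sum(partials, axis=0)               # reduce: combine partial counts
    print("selected events in histogram:", int(total.sum()))
```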

The Fermilab HEPCloud facility provides cost-effective access to compute resources by optimizing usage across all available types and elastically expanding the resource pool on short notice by, for example, renting temporary resources on commercial clouds or using high-performance computers. HEPCloud enables NOvA and physicists from other experiments to use these compute resources in a transparent way.
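
HEPCloud's real provisioning logic is considerably more sophisticated; the toy function below only illustrates the elastic-expansion idea described above, with invented thresholds, capacities, and resource names.

```python
# Toy sketch of elastic resource expansion; not HEPCloud's actual logic.
LOCAL_CAPACITY = 25_000  # hypothetical on-site core count; not a real figure

def provision(cores_requested, deadline_hours):
    """Split a request between local, HPC, and commercial cloud resources."""
    plan = {"local": min(cores_requested, LOCAL_CAPACITY)}
    overflow = cores_requested - plan["local"]
    if overflow > 0:
        # A tight deadline favours bursting onto an HPC allocation (as with
        # Cori); otherwise rent temporary capacity on a commercial cloud.
        target = "hpc_burst" if deadline_hours < 72 else "commercial_cloud"
        plan[target] = overflow
    return plan

print(provision(cores_requested=1_000_000, deadline_hours=54))
# -> {'local': 25000, 'hpc_burst': 975000}
```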

For this analysis, "NOvA experimenters didn't have to change much in terms of business as usual," said Burt Holzman, HEPCloud principal investigator. "With HEPCloud, we simply expanded our local on-site-at-Fermilab facilities to include Cori and Edison at NERSC."

Source: Fermilab
