Back to Table of contents

Primeur weekly 2018-03-05

Quantum computing

Want more efficient simulators? Store time in a quantum superposition ...

Experimentally demonstrated a Toffoli gate in a semiconductor three-qubit system ...

Individual quantum dots imaged in 3D for first time ...

Artificial intelligence techniques reconstruct mysteries of quantum systems ...

Majorana runners go long range: New topological phases of matter unveiled ...

Focus on Europe

University of Groningen to organize Second Information Universe Conference ...

IEEE eScience 2018 calls for contributions ...

Harvey Meyer is awarded an ERC Consolidator Grant for fundamental calculations on strong interaction effects ...

BSC presents SuperGeek, a mascot to bring supercomputers closer to the youngest ...

Call for Participation to European Forum about "Shaping Europe's Digital Future - HPC for Extreme Scale Scientific and Industrial Applications" ...

Data management and computing infrastructure procurement broadly serves Finnish research ...

Hardware

High demand from commercial customers to boost growth in global supercomputer market ...

OCF deploys UK academia's first IBM POWER9 systems ...

Niagara is Canada's most powerful research supercomputer fuelling Canadian innovation and discovery ...

CENIC recognizes UCSC's Hyades supercomputer cluster connection ...

CoolIT Systems reports 60% revenue growth in 2017 ...

SPEC offers HPG benchmarks free of charge to qualified non-profit organisations worldwide ...

Applications

Supercomputer model reveals how sticky tape makes graphene ...

Concertio launches Optimizer Studio to help performance engineers and IT professionals achieve peak system performance ...

Sandia researcher Jacqueline Chen elected to National Academy of Engineering ...

Oak Ridge National Laboratory uses supercomputers to simulate radiation transport and to understand the dynamic interactions among ions, solids and liquids ...

Mining hardware helps scientists gain insight into silicon nanoparticles ...

Can strongly lensed Type Ia supernovae resolve cosmology's biggest controversy? ...

Give your research a boost at the SURF Research Bootcamp ...

TOP500

Supercomputing under a new lens: A Sandia-developed benchmark re-ranks top computers ...

The Cloud

Alibaba Cloud launches Cloud and AI solutions in Europe including bare metal HPC services ...

KIT helps build the European Open Science Cloud ...

Supercomputing under a new lens: A Sandia-developed benchmark re-ranks top computers


TOP500 LINPACK and HPCG charts of the fastest supercomputers of 2017. The rearranged order and drastic reduction in estimated speed for the HPCG benchmarks are the result of a different method of testing modern supercomputer programmes. Image courtesy of Sandia National Laboratories.
27 Feb 2018 Albuquerque - A Sandia National Laboratories software programme now installed as an additional test for the widely observed TOP500 supercomputer challenge has become increasingly prominent. The programme's full name - High Performance Conjugate Gradients, or HPCG - doesn't come trippingly to the tongue, but word is seeping out that this relatively new benchmarking programme is becoming as valuable as its venerable partner - the High Performance LINPACK programme - which some say has become less than satisfactory in measuring many of today's computational challenges.

"The LINPACK programme used to represent a broad spectrum of the core computations that needed to be performed, but things have changed", stated Sandia researcher Mike Heroux, who created and developed the HPCG programme. "The LINPACK programme performs compute-rich algorithms on dense data structures to identify the theoretical maximum speed of a supercomputer. Today's applications often use sparse data structures, and computations are leaner."

The term "sparse" means that a matrix under consideration has mostly zero values. "The world is really sparse at large sizes", stated Mike Heroux. "Think about your social media connections: there may be millions of people represented in a matrix, but your row - the people who influence you - are few. So, the effective matrix is sparse. Do other people on the planet still influence you? Yes, but through people close to you."

Similarly, for a scientific problem whose solution requires billions of equations, most of the matrix coefficients are zero. For example, when measuring pressure differentials in a 3D mesh, the pressure on each node is directly dependent on its neighbours' pressures. The pressure in faraway places is represented through the node's near neighbours. "The cost of storing all matrix terms, as the LINPACK programme does, becomes prohibitive, and the computational cost even more so", stated Mike Heroux. A computer may be very fast in computing with dense matrices, and thus score highly on the LINPACK test, but in practical terms the HPCG test is more realistic.
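To make the storage argument concrete, here is a small illustrative sketch (not from the article) of the kind of compressed sparse row (CSR) layout commonly used for such matrices, built for the classic 1-D Poisson stencil. The function name and sizes are invented for the example; the point is that only the handful of nonzero coefficients per row is stored, not the full dense grid of mostly-zero terms.

```python
# Build the 1-D Poisson matrix (stencil -1, 2, -1) in compressed
# sparse row (CSR) form: only nonzero entries are kept, together
# with their column indices and per-row offsets.
def poisson_csr(n):
    values, col_idx, row_ptr = [], [], [0]
    for i in range(n):
        for j, v in ((i - 1, -1.0), (i, 2.0), (i + 1, -1.0)):
            if 0 <= j < n:          # drop stencil entries outside the grid
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr

values, col_idx, row_ptr = poisson_csr(1000)
# Dense storage would need 1000 * 1000 = 1,000,000 entries;
# CSR keeps only the ~3 nonzeros per row.
print(len(values))  # 2998
```

Each row holds at most three nonzeros regardless of the problem size, which is why sparse storage scales where dense storage becomes prohibitive.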

To better reflect the practical elements of current supercomputing application programmes, Mike Heroux developed HPCG's preconditioned iterative method for solving systems containing billions of linear equations and billions of unknowns. "Iterative" means the programme starts with an initial guess to the solution, and then computes a sequence of improved answers. Preconditioning uses other properties of the problem to quickly converge to an acceptably close answer.
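The "initial guess, then a sequence of improved answers" idea can be sketched with a minimal, unpreconditioned conjugate gradient loop. This toy version, written for a tiny dense system, is only an illustration of the iterative structure; HPCG itself runs a preconditioned variant on a large sparse 3-D problem.

```python
# Minimal (unpreconditioned) conjugate gradient sketch for A x = b,
# where A is symmetric positive definite. Starts from a zero initial
# guess and iterates until the residual is acceptably small.
def cg(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n                      # initial guess
    r = b[:]                           # residual b - A*x (x is zero)
    p = r[:]                           # search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:               # converged to an acceptable answer
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

x = cg([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
# exact solution is [1/11, 7/11]
```

In exact arithmetic CG solves an n-by-n system in at most n iterations; in practice the loop stops as soon as the answer is close enough, which is what makes the method attractive for billion-equation problems.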

"To solve the problems we need to for our mission, which might range from a full weapons simulation to a wind farm, we need to describe physical phenomena to high fidelity, such as the pressure differential of a fluid flow simulation", stated Mike Heroux. "For a mesh in a 3D domain, you need to know at each node on the grid the relations to values at all the other nodes. A preconditioner makes the iterative method converge more quickly, so a multigrid preconditioner is applied to the method at each iteration."
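To show where a preconditioner enters the iteration, here is a hedged sketch using a much simpler stand-in: a Jacobi (diagonal) preconditioner, where applying M⁻¹ just scales the residual by 1/diag(A). HPCG applies a multigrid preconditioner at each iteration, which is considerably more involved; only the mechanics of the preconditioned loop are illustrated here.

```python
# Preconditioned conjugate gradient sketch. The Jacobi preconditioner
# (M^-1 = 1/diag(A)) stands in for HPCG's multigrid preconditioner to
# show where the preconditioning step enters each iteration.
def pcg(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    inv_diag = [1.0 / A[i][i] for i in range(n)]   # M^-1 for Jacobi
    x = [0.0] * n
    r = b[:]
    z = [inv_diag[i] * r[i] for i in range(n)]     # preconditioned residual
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) < tol:
            break
        z = [inv_diag[i] * r[i] for i in range(n)]  # apply M^-1 each iteration
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x
```

The only change from plain CG is that the search directions are built from the preconditioned residual z = M⁻¹r; a good preconditioner (such as multigrid) makes those directions far more effective, so convergence takes many fewer iterations.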

Supercomputer vendors like NVIDIA Corp., Fujitsu Ltd., IBM, Intel Corp. and Chinese companies write versions of the HPCG programme that are optimized for their platforms. While it might seem odd to let test-takers modify a test to suit themselves, it is clearly desirable for supercomputers of various designs to personalize the test, as long as each competitor touches all the agreed-upon calculation bases.

"We have checks in the code to detect optimizations that are not permitted under published benchmark policy", stated Mike Heroux.

On the HPCG TOP500 list, the Sandia and Los Alamos National Laboratory supercomputer Trinity has risen to no. 3, and is the top Department of Energy system. Trinity is no. 7 overall in the LINPACK ranking. HPCG better reflects the Trinity design choices.

Mike Heroux said he wrote the base HPCG code 15 years ago, originally as a teaching code for students and colleagues who wanted to learn the anatomy of an application that uses scalable sparse solvers. Jack Dongarra and Piotr Luszczek of the University of Tennessee have been essential collaborators on the HPCG project. In particular, Jack Dongarra, whose visibility in the high-performance computing community is unrivaled, has been a strong promoter of HPCG.

"His promotional contributions are essential", stated Mike Heroux. "People respect Jack's knowledge and it helped immensely in spreading the word. But if the programme wasn't solid, promotion alone wouldn't be enough."

Mike Heroux invested his time in developing HPCG because he had a strong desire to better assure the U.S. stockpile's safety and effectiveness. The supercomputing community needed a new benchmark that better reflected the needs of the national security scientific computing community.

"I had worked at Cray Inc. for 10 years before joining Sandia in '98", he stated, "when I saw the algorithmic work I cared about moving to the labs for the Accelerated Strategic Computing Initiative (ASCI). When the US decided to observe the Comprehensive Nuclear Test Ban Treaty, we needed high-end computing to better ensure the nuclear stockpile's safety and effectiveness. I thought it was a noble thing, that I would be happy to be part of it, and that my expertise could be applied to develop next-generation simulation capabilities. ASCI was the big new project in the late 1990s if I wanted to do something meaningful in my area of research and development."

Mike Heroux is now director of software technology for the Department of Energy's Exascale Computing Project. There, he works to harmonize the computing work of the DOE national labs - Oak Ridge, Argonne, Lawrence Berkeley, Pacific Northwest, Brookhaven and Fermilab - along with the three National Nuclear Security Administration labs.

"Today, we have an opportunity to create an integrated effort among the national labs", stated Mike Heroux. "We now have daily forums at the project level, and the people I work with most closely are people from the other labs. Because the Exascale Computing Project is integrated, we have to deliver software to the applications and the hardware at all labs. The Department of Energy's attempt at a multi-lab, multi-university project gives an organisational structure for us to work together as a cohesive unit so that software is delivered to fit the key applications."

Among Mike Heroux's achievements, he served for six years as editor-in-chief of ACM's Transactions on Mathematical Software. He is a senior scientist at Sandia.
Source: Sandia National Laboratories
