Back to Table of contents

Primeur weekly 2017-01-30

Exascale supercomputing

Co-design centres to help make next-generation exascale computing a reality ...

Quantum computing

Supercool electrons ...

D-Wave announces D-Wave 2000Q quantum computer and first system order ...

Temporal Defense Systems purchases the first D-Wave 2000Q quantum computer ...

Fast track control accelerates switching of quantum bits ...

Focus on Europe

Bright Computing teams up with SGI to co-sponsor UK HPC & Big Data event ...

When life sciences become data sciences ...

PRACE Spring School 2017 - HPC for Life Sciences, registration is open ...

ITER and BSC tighten their collaboration to simulate the process of fusion power generation ...

Netherlands eScience Center to issue annual report ...

Middleware

BSC releases PyCOMPSs version 2.0 as a PIP installable package ...

Hardware

RSC gets the highest Elite status in the Intel Solutions for Lustre Reseller Programme ...

Finding a needle in the ocean ...

GIGABYTE selects Cavium QLogic 10/25GbE FastLinQ technology to power its next generation servers ...

Setting up light paths using the SURFnet Network Dashboard ...

Minerva will calculate gravitational waves faster than the Albert Einstein Institute's previous supercomputer ...

Applications

Big Brother will have some difficulty 'watching you' in future ...

Engineers eat away at Ms. Pac-Man score with artificial player ...

Mummy visualization impresses in computer journal ...

Berkeley launches RISELab, enabling computers to make intelligent real-time decisions ...

Model sheds light on inhibitory neurons' computational role ...

Using Big Data to understand immune system responses ...

Artificial intelligence uncovers new insight into biophysics of cancer ...

PPPL scientist uncovers physics behind plasma-etching process ...

Computer-aided drug design ...

IBM expands choices for PowerAI developers with TensorFlow ...

Hussein Aluie awarded hours on supercomputer at Argonne ...

CWI develops algorithms that shorten response time of ambulance ...

A rising peptide: Supercomputing helps scientists come closer to tailoring drug molecules ...

The Cloud

Oracle expands Startup Accelerator Programme to further promote global Cloud innovation ...

Technical computing hub UberCloud receives funding from Earlybird ...

Co-design centres to help make next-generation exascale computing a reality


Simulation of turbulence inside an internal combustion engine, rendered using the advanced supercomputing resources at the Argonne Leadership Computing Facility, an Office of Science User Facility. The ability to create such complex simulations helps researchers solve some of the world’s largest, most complex problems. Image by George Giannakopoulos.
27 Jan 2017 Argonne - The next generation of supercomputers will help researchers tackle increasingly complex problems through modelling large-scale systems, such as nuclear reactors or global climate, and simulating complex phenomena, such as the chemistry of molecular interactions. In order to be successful, these systems must be able to carry out vast numbers of calculations at extreme speeds, reliably store enormous amounts of information and be able to quickly deliver this information with minimal errors. To create such a system, computer designers have to first find ways to overcome limitations in existing high-performance computing systems and develop, design and optimize new software and hardware technologies to operate at exascale.

'Exascale' refers to high-performance computing systems capable of at least a billion billion calculations per second - 50 times faster than the nation's most powerful supercomputers in use today. Computational scientists aim to use these systems to generate new insights and accelerate discoveries in materials science, precision medicine, national security and numerous other fields.

As collaborators in four co-design centres created by the U.S. Department of Energy's (DOE) Exascale Computing Project (ECP), researchers at the DOE's Argonne National Laboratory are helping to solve some of these complex challenges and pave the way for the creation of exascale supercomputers.

The term 'co-design' describes the integrated development and evolution of hardware technologies, computational applications and associated software. In pursuit of ECP's mission to help people solve realistic application problems through exascale computing, each co-design centre targets different features and challenges relating to exascale computing.

Ian Foster, a University of Chicago professor and Argonne Distinguished Fellow, is leading a co-design centre on a mission to strengthen and optimize processes for data analysis and reduction for the exascale.

"Exascale systems will be 50 times faster than existing systems, but it would be too expensive to build out storage that would be 50 times faster as well", he stated. "This means we no longer have the option to write out more data and store all of it. And if we can't change that, then something else needs to change."

Ian Foster and other researchers in the Co-design Center for Online Data Analysis and Reduction at the Exascale (CODAR) are working to overcome the gap between computation speed and the limitations in the speed and capacity of storage by developing smarter, more selective ways of reducing data without losing important information.

There are many powerful techniques for doing data reduction, and CODAR researchers are studying various approaches.

One such approach, lossy compression, removes unnecessary or redundant information to reduce overall data size. This is the technique used to transform the detail-rich images captured by our phone camera sensors into compact JPEG files. While data is lost in the process, the most important information - the amount needed for our eyes to interpret the images clearly - is maintained, and as a result, we can store hundreds more photos on our devices.

"The same thing happens when data compression is used as a technique for scientific data reduction. The important difference here is that scientific users need to precisely control and check the accuracy of the compressed data with respect to their specific needs", stated Argonne computer scientist Franck Cappello, who is leading the data reduction team for CODAR.
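The error control Cappello describes can be sketched with a toy uniform quantizer. This is only an illustration of the idea of an error-bounded lossy compressor; real scientific compressors use far more sophisticated prediction and encoding, and every name below is invented for the example:

```python
import numpy as np

def compress_lossy(data, abs_error):
    """Quantize values to multiples of 2*abs_error, guaranteeing
    |original - reconstructed| <= abs_error for every element."""
    step = 2.0 * abs_error
    # Integer codes are far more compressible than raw floats.
    codes = np.round(data / step).astype(np.int64)
    return codes, step

def decompress(codes, step):
    return codes * step

# Example: a smooth-ish 1-D "simulation field" compressed under a
# user-specified absolute error bound.
rng = np.random.default_rng(0)
field = np.cumsum(rng.normal(size=10_000))
codes, step = compress_lossy(field, abs_error=1e-3)
recon = decompress(codes, step)

# The scientist-facing check: the error never exceeds the requested bound.
assert np.max(np.abs(field - recon)) <= 1e-3
```

The key design point, echoing the quote, is that the user sets the accuracy requirement up front and can verify afterwards that the reconstruction honours it.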

Other data reduction techniques include use of summary statistics and feature extraction.

The Center for Efficient Exascale Discretizations (CEED) is working to improve another feature of exascale computing - how applications create computer models. More specifically, its researchers are looking at the process of discretization, in which the continuous physics of a problem is represented on a finite set of grid points that form the model of the system.

"Determining the best layout of the grid points and representation of the model is important for rapid simulation", stated computational scientist Misun Min, the Argonne lead in CEED.

Discretization is important for computer modeling and simulation because the process enables researchers to numerically represent physical systems, like nuclear reactors, combustion engines or climate systems. How researchers discretize the systems they're studying affects the amount and speed of computation at exascale. CEED is focused particularly on high-order discretizations that require relatively few grid points to accurately represent physical systems.

"Our aim is to enable more efficient discretization while still maintaining a high level of accuracy for the researcher. Greater efficiency will help minimize the number of calculations needed, which would in turn reduce the overall size of computation, and also enable relatively fast relay of information", stated Paul Fischer, a professor at the University of Illinois at Urbana-Champaign and Argonne computational scientist involved in CEED.
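The trade-off Fischer describes, reaching a given accuracy with fewer grid points by using higher-order formulas, can be illustrated with a toy finite-difference comparison. This is a sketch only; CEED itself targets high-order finite-element discretizations, and the stencils below are just the simplest way to show the effect:

```python
import numpy as np

def derivative(f, x, h, order):
    """Central finite differences for f'(x): 2nd- and 4th-order stencils."""
    if order == 2:
        return (f(x + h) - f(x - h)) / (2 * h)
    # Classic 4th-order central stencil.
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

x, h = 1.0, 0.1
exact = np.cos(x)  # exact derivative of sin at x
err2 = abs(derivative(np.sin, x, h, 2) - exact)
err4 = abs(derivative(np.sin, x, h, 4) - exact)

# At the same grid spacing the 4th-order stencil is far more accurate,
# so a coarser grid (fewer points) can reach the same error target.
assert err4 < err2
```

The same principle, pushed to much higher orders on realistic geometries, is what lets high-order discretizations represent a physical system with relatively few grid points.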

Researchers behind the Co-design Center for Particle Applications (CoPA) are studying methods that model natural phenomena using particles, such as molecules, electrons or atoms. In high-performance computing, researchers can represent systems via discrete particles, via smooth entities such as electromagnetic waves or sound waves, or via a combination of the two techniques.

Particle methods span a wide range of application areas, including materials science, chemistry, cosmology, molecular dynamics and turbulent flows. When using particle methods, researchers characterize the interactions of particles with other particles and with their environment in terms of short-range and long-range interactions.
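The short-range/long-range split can be sketched in a few lines: a cutoff radius limits which particle pairs a short-range method must visit at all. This is an illustrative toy, not CoPA's building blocks; production codes use neighbour lists and domain decomposition rather than the all-pairs loop below, and the Lennard-Jones form is simply a common example of a short-range pair potential:

```python
import numpy as np

def short_range_energy(positions, cutoff):
    """Sum a Lennard-Jones-style pair potential over pairs within `cutoff`.
    The cutoff is what makes short-range interactions cheap; long-range
    parts (e.g. electrostatics or gravity) need different methods."""
    n = len(positions)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            if r < cutoff:
                energy += 4.0 * ((1.0 / r)**12 - (1.0 / r)**6)
    return energy

# Two particles at the Lennard-Jones minimum separation 2^(1/6):
# the pair energy there is exactly -1 in these reduced units.
pair = np.array([[0.0, 0.0, 0.0], [2**(1/6), 0.0, 0.0]])
e = short_range_energy(pair, cutoff=3.0)
assert abs(e + 1.0) < 1e-9
```

Identifying shared kernels like this pairwise loop, and tuning them once for a new machine, is the "common building blocks" idea Habib describes below.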

"The idea behind the co-design centre is that, instead of everyone bringing their own specialized methods, we identify a set of building blocks, and then find the right way to deal with the common problems associated with these methods on the new supercomputers", Salman Habib, the Argonne lead in CoPA and a senior member of the Kavli Institute for Cosmological Physics at the University of Chicago, stated.

"Argonne's collaboration in this effort is in methods for long-range particle interactions as well as speeding up codes for short-range interactions; we work hard on what is needed to make codes run fast", he stated.

The Block-structured AMR Co-design Center focuses on making computation more efficient using a technique known as adaptive mesh refinement, or AMR.

AMR allows an application to achieve a higher level of precision at specific points or locations of interest within the computational domain, and lower levels of precision elsewhere.

In other words, AMR helps to focus the computing power where it is most effective to get the most precise calculations for lowest cost.
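The idea of concentrating effort where it matters can be sketched as a refinement-flagging step. This is a minimal illustration assuming a gradient-based criterion on a 1-D field; real block-structured AMR libraries group flagged cells into rectangular patches, refine them recursively, and manage the resulting grid hierarchy:

```python
import numpy as np

def flag_for_refinement(values, dx, threshold):
    """Mark cells whose local gradient magnitude exceeds `threshold`.
    Only the flagging step is shown; block-structured AMR would then
    cover the flagged cells with finer patches."""
    grad = np.abs(np.gradient(values, dx))
    return grad > threshold

# A field with one sharp front: only cells near the front get flagged.
x = np.linspace(0.0, 1.0, 101)
field = np.tanh((x - 0.5) / 0.02)
flags = flag_for_refinement(field, x[1] - x[0], threshold=5.0)

# Most of the domain stays coarse; resolution concentrates at the front.
assert 0 < flags.sum() < len(flags) // 4
```

In a real application the flagged region follows the solution as it evolves, so the fine grids track moving features such as flame fronts or shocks.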

"Without AMR, calculations would require so much more resources and time", stated Anshu Dubey, the Argonne lead in the Block-Structured AMR Center and a fellow of the Computation Institute. "AMR helps researchers to focus the computational resources on features of interest in their applications while enabling efficiency in computing."

AMR is already used in applications such as combustion, astrophysics and cosmology; now researchers in the Block-Structured AMR co-design center are focused on enhancing and augmenting it for future exascale platforms.
Source: Argonne National Laboratory
