
Primeur weekly 2016-10-10

Exascale supercomputing

The incredible shrinking particle accelerator ...

Brookhaven Lab to play major role in 2 DOE exascale computing application projects ...

Quantum computing

More stable qubits in perfectly normal silicon ...

Focus on Europe

RSC supercomputers go West ...

Hardware

Allinea tools play vital role in advancing computational research at the VSC, Austria's largest HPC facility ...

Smallest transistor ever ...

Turning to the brain to reboot computing ...

Complex materials can self-organize into circuits, may form basis for multifunction chips ...

Wireless data centre on a chip aims to cut energy use ...

Adapteva announces 28nm 64-core Epiphany-IV microprocessor chip ...

SGI introduces unique scale-out solution for SAP HANA that protects investments when moving to real-time business ...

Applications

Clemson University scientists receive $1.8 million grant to combat Type 2 diabetes ...

Climate change intensifies night-time storms over Lake Victoria ...

Computer simulations explore how Alzheimer's disease starts ...

Rice University lab explores cement's crystalline nature to boost concrete performance ...

Rice University researchers say 2D boron may be best for flexible electronics ...

Large animals, such as the imperious African elephant, most vulnerable to impact of human expansion ...

Computer simulation finds dangerous molecule activity for ageing ...

Tornadogenesis ...

As hurricane heads up coast, a RENCI supercomputer swings into action ...

New drug candidate may reduce deficits in Parkinson's disease ...

XSEDE allocations awarded to 155 research teams across U.S. ...

OSC part of NSF-funded consortium for advancing research computing practices ...

NCSA awarded NSF grant to expand computational science education in food, energy, and water ...

Crosstalk analysis of biological networks for improved pathway annotation ...

The Cloud

Nimbix collaborates with IBM and NVIDIA to launch powerful GPU Cloud offering ...

The incredible shrinking particle accelerator


Laser wakefield particle accelerators offer the prospect of less costly and much smaller accelerators. Using the waves created by a laser shot through plasma, they "surf" particles to higher speeds or energies. This simulation shows how a single electron rides a wave, helping researchers build and refine these new machines. Credit: Jean-Luc Vay, Lawrence Berkeley National Laboratory.

5 Oct 2016 Berkeley - Particle accelerators are on the verge of transformational breakthroughs - and advances in computing power and techniques are a big part of the reason.

Long valued for their role in scientific discovery and in medical and industrial applications such as cancer treatment, food sterilization and drug development, particle accelerators, unfortunately, occupy a lot of space and carry hefty price tags. The Large Hadron Collider at CERN in France and Switzerland, for example - the world's largest and most powerful particle accelerator - has a circumference of 17 miles and cost $10 billion to build. Even smaller accelerators, such as those used in medical centres for proton therapy, need large spaces to accommodate the hardware, power supplies and radiation shielding. Such treatment facilities typically fill a city block and cost hundreds of millions of dollars to build.

But efforts are under way to make this technology more affordable and accessible by shrinking both the size and the cost without losing the capability. One of the most exciting developments is the plasma accelerator, which uses lasers or particle beams rather than radio-frequency waves to generate the accelerating field. Researchers have already shown the potential for laser plasma acceleration to yield significantly more-compact accelerators. But further development is needed before these devices - envisioned as almost literally "tabletop" in many applications - make their way into everyday use.

This is where advanced visualization tools and supercomputers such as the Edison and Cori supercomputers at Lawrence Berkeley National Laboratory's National Energy Research Scientific Computing Center (NERSC) come in.

"To take full advantage of the societal benefits of particle accelerators, game-changing improvements are needed in the size and cost of accelerators, and plasma-based particle accelerators stand apart in their potential for these improvements", stated Jean-Luc Vay, a senior physicist in Berkeley Lab's Accelerator Technology and Applied Physics Division (ATAP).

Jean-Luc Vay is leading a particle accelerator modelling project as part of the NESAP program at NERSC and is the principal investigator on one of the new exascale computing projects sponsored by the U.S. Department of Energy (DOE). "Turning this from a promising technology into a mainstream scientific tool depends critically on large-scale, high-performance, high-fidelity modelling of complex processes that develop over a wide range of space and time scales", he stated.

Jean-Luc Vay and a team of mathematicians, computer scientists and physicists are working to do just that by developing software tools that can facilitate simulating, analyzing and visualizing the increasingly large datasets produced during particle accelerator studies.

Accelerator modelling is an opportunity to help lead the way to exascale applications, noted ATAP Division Director Wim Leemans. "We've spent years preparing for this opportunity", he stated, pointing to the already widespread use of modelling in accelerator design and the tradition of collaboration between physics and computing experts that has been a hallmark of ATAP's modelling work.

"One of the driving factors in our research is the transition to exascale and how data visualization is changing", explained Burlen Loring, a computer systems engineer who is part of the collaboration, along with Oliver Rübel, David Grote, Remi Lehe, Stepan Bulanov and Wes Bethel, all of Berkeley Lab, and Henri Vincenti, a Berkeley Lab postdoctoral researcher from CEA in France. "With exascale systems, traditional visualization becomes prohibitive as the simulation get larger and the machines get larger - storing all the data doesn't work and the file systems and data bandwidth rates aren't keeping up with the compute capacity."

Now, in a paper published June 9 in IEEE Computer Graphics and Applications (IEEE CG&A), the team describes a new approach to this challenge: WarpIV. WarpIV is a plasma and accelerator simulation, data visualization and analysis toolkit that marries two software tools already widely used in high energy physics: Warp, an advanced particle-in-cell simulation framework, and VisIt, a 3D scientific visualization application that supports most common visualization techniques. Together, they give users the ability to perform in situ visualization and analysis of their particle accelerator simulations at scale - that is, while the simulations are still running and using the same high performance computing resources - thus reducing memory usage and saving computer time.
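
The in situ idea can be sketched in a few lines of Python: rather than writing full field and particle dumps to disk for later post-processing, an analysis routine is called from inside the running simulation loop and only small, reduced products are saved. The class and function names below are illustrative placeholders, not the actual Warp or WarpIV API.

    import numpy as np

    # Hypothetical stand-ins for the live simulation state; in WarpIV these
    # quantities would come from the running Warp particle-in-cell simulation.
    class SimulationState:
        def __init__(self):
            self.step = 0
            # Fake per-particle energies standing in for real PIC data.
            self.electron_energies = np.random.gamma(2.0, 5.0, size=1_000_000)

    def in_situ_analysis(state, output_every=50):
        """Reduce live data in place instead of dumping everything to disk."""
        if state.step % output_every != 0:
            return
        # A small summary (histogram) replaces a multi-gigabyte particle dump.
        counts, edges = np.histogram(state.electron_energies, bins=64)
        np.savez("energy_hist_%06d.npz" % state.step, counts=counts, edges=edges)

    def run(n_steps=200):
        state = SimulationState()
        for _ in range(n_steps):
            # ... advance particles and fields here (omitted) ...
            state.step += 1
            in_situ_analysis(state)  # analysis shares the simulation's resources

    if __name__ == "__main__":
        run()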

"We have this push to transition a significant portion of our visualization work over to the in situ domain", Burlen Loring stated. "This work is a step in that direction. It is our first take on in situ for laser plasma accelerators and our first chance to use it on a real science problem."

A primary function of WarpIV is to manage and control the end-to-end, integrated simulation and in situ visualization and analysis workflow. To achieve this, WarpIV supports four main modes of operation - batch, monitoring, interactive and prompt - each of which in turn supports a different approach to in situ scientific discovery. WarpIV also uses a factory pattern design to define simulation models, which allows users to create new simulation and in situ analysis models in a self-contained fashion, and it relies on Python-based visualization and analysis scripts.
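
A factory-pattern registry of this kind can be sketched in a few lines of Python; the decorator, registry and model names below are illustrative assumptions rather than WarpIV's real interfaces.

    # Minimal factory-pattern sketch; the registry and class names are
    # illustrative and not taken from WarpIV's actual code.
    _MODEL_REGISTRY = {}

    def register_model(name):
        """Class decorator that adds a model to the factory registry."""
        def wrapper(cls):
            _MODEL_REGISTRY[name] = cls
            return cls
        return wrapper

    def create_model(name, **kwargs):
        """Instantiate a registered simulation/analysis model by name."""
        return _MODEL_REGISTRY[name](**kwargs)

    @register_model("ion_accelerator_3d")
    class IonAccelerator3D:
        def __init__(self, resolution=(256, 256, 512)):
            self.resolution = resolution

        def step(self):
            pass  # advance the (omitted) particle-in-cell simulation

    # Driver code can now build models without knowing their classes in advance:
    model = create_model("ion_accelerator_3d", resolution=(128, 128, 256))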

"One of the design factors that will make it easy for scientists to use WarpIV is the ability to use Python scripts that are autogenerated in VisIt", Burlen Loring explained. "The scientist takes a representative dataset before they make their runs and comes up with visualization scripts. Then they open the representative dataset in VisIt and use the recording feature to automatically record their actions into a Python script. Then WarpIV takes these scripts and runs them in the in situ environment."

Another key feature of WarpIV is its integrated analytics - notably, filtered particle species, which enable users to pick out particular features of interest from the hundreds of millions of particles required for accurate simulation.

"Very often when you do a visualization, particularly in situ, you want to minimize how much time you spend on it, and you can do this by focusing on particular features", Oliver Rübel explained. "In this case, for example, you need large numbers of particles to simulate the process, but the features you are interested in, such as the beam that is extracted from the background plasma, are going to be much smaller than that. So finding these features and doing the analysis while the simulation is running, this is what we call filtered species. It is a mechanism we developed not just to do plots, but to find what it is you want to plot."

WarpIV, which Oliver Rübel initially prototyped in 2013, grew out of a collaboration between two DOE SciDAC programmes: Scalable Data Management, Analysis and Visualization (SDAV) and the Community Project for Accelerator Science and Simulation (COMPASS). The work was also subsequently supported by DOE's Consortium for Advanced Modeling of Particle Accelerators (CAMPA) programme.

The WarpIV toolkit, which continues to undergo development, was officially rolled out in June 2016 and is available via Bitbucket. Initial testing has yielded positive results in terms of scalability, performance, usability and demonstrated impact on science.

For example, in the research that resulted in the IEEE CG&A paper, the team ran a series of ion accelerator simulations in 2D and 3D to analyze WarpIV's performance and scalability. Comparison of these simulations revealed significant quantitative differences between the 2D and 3D models, highlighting the critical need for high-resolution 3D simulations in conjunction with advanced in situ visualization and analysis to enable the accurate modeling and study of new breeds of particle accelerators.

In one 3D series, they tracked the run time for five categories of operations at simulation updates taken every 50 iterations and found that, at each update, the visualization, analysis and I/O operations consumed 11-15 percent of the total time, while the rest was used by the simulation - a ratio the researchers consider "quite reasonable". They also found that the in situ approach reduced the I/O cost by a factor of more than 4,000.

There is great demand for 3D simulation codes that run in a reasonable time and perform accurate accelerator modelling with correct quantitative predictive power, Jean-Luc Vay emphasized.

"We want to be able to conduct experiments on ion acceleration, so in this case it is very important to have a working simulation tool to predict and analyze all kinds of experiments and test theories", stated Stepan Bulanov, a research scientist in the Berkeley Lab Laser Accelerator Center who works closely with Jean-Luc Vay. "And if the simulations can't keep pace with the experiment, it would slow us down significantly."

Having in situ tools like WarpIV will be increasingly valuable as supercomputers transition to more complex manycore architectures, Jean-Luc Vay added.

"WarpIV provides visualization in 3D that we would not have been able to obtain easily using our previous visualization tools, which were not scaling as well to many computational cores", he stated.

Source: DOE/Lawrence Berkeley National Laboratory
