Back to Table of contents

Primeur weekly 2013-06-24

Special

Human Brain Project to seek support in neuromorphic computing and non-volatile memory approach ...

Deploying new and more energy-efficient combustion technologies with exascale power ...

Parallelism, hybrid architectures, fault tolerance and power major challenges for extreme computing ...

The Cloud

Dell launches secure and flexible Cloud solution for U.S. Governments ...

Eurotech launches new release of Everyware Cloud to simplify device management in the Internet of Things ...

Thermax Ltd chooses IBM PureSystems and SmartCloud over Cisco and Dell ...

Cloud computing user privacy in serious need of reform, scholars say ...

VTI brings Internet of Things (IOT) and Cloud computing to test and measurement ...

Desktop Grids

Seeking testers for BOINC on Android ...

SETIspirit Windows GUI for SETI@home released ...

Using IBM's crowdsourced supercomputer, Harvard rates solar energy potential of 2.3 million new compounds ...

EuroFlash

projectiondesign ships ProNet.precision, camera-assisted warp and blend software ...

Remote Cluster Administration offers a unique solution to the HPC skills gap ...

New Cluster Installation Further Strengthens Regional HPC Infrastructure ...

Altair Engineering announces 8th UK Altair Technology Conference; to be held at the Heritage Motor Centre, Gaydon, Warwickshire ...

GENOA, MCQ-Composites to join Altair Partner Alliance Composites line-up ...

Altair broadens relationship with Siemens PLM Software to enhance data exchange for its CAE software users ...

Neuroscience to benefit from hybrid supercomputer memory ...

ISC'13 caps 28th Conference with new attendance, awards and more ...

USFlash

CANARIE upgrades 100G research & education network with Ciena ...

Linguists, computer scientists use supercomputers to improve natural language processing ...

UC San Diego launches new research computing programme ...

Which qubit my dear? New method to distinguish between neighbouring quantum bits ...

Making memories: Practical quantum computing moves closer to reality ...

Intel introducing new Lustre solution during Lustre event, addressing new Lustre markets ...

NetApp unveils clustered data ONTAP innovations that pave the way for software-defined storage ...

HP expands Converged Storage portfolio ...

UC San Diego researchers get access to Open Science Grid ...

HP extends support for OpenVMS through year 2020 ...

IBM expands support for Linux on Power Systems servers ...

Deploying new and more energy-efficient combustion technologies with exascale power


ExaCT Center
20 Jun 2013 Leipzig - In the session on High-End Systems towards Exascale, chaired by Erich Strohmaier, on Thursday, 20 June at the ISC'13 event, Jacqueline Chen from Sandia National Laboratories elaborated on exascale co-design of combustion in turbulence within the ExaCT project. Jacqueline Chen explained why the focus of this project is on combustion. About 83% of US energy comes from the combustion of fossil fuels. The national goals are to reduce greenhouse gas emissions by 80% by 2050 and to reduce petroleum usage by 25% by 2020. To this end, a new generation of high-efficiency, low-emission combustion systems has to be built. The ExaCT project is working on new designs for IC engines, such as HCCI, and on fuel-flexible turbines for power generation. There is also a rapidly evolving fuel stream, in which biodiesel is used for transportation and syngas is produced from gasification processes. All these factors significantly increase the design space for new combustion technologies, according to Jacqueline Chen.

A second question which comes to mind is why exascale computing should be needed to reach these emission goals when designing new combustion technologies.

According to Jacqueline Chen, the current design methodologies are largely phenomenological. A significant increase in computational capability would dramatically reduce the design cycle for new combustion technologies and new fuels. The co-design centre is focusing on direct numerical simulation (DNS) methodologies, which provide the scientific base for studying novel fuels at realistic pressures.

The goal of combustion exascale co-design is to consider all aspects of the combustion simulation process, from formulation and basic algorithms to programming environments to the hardware characteristics needed to enable combustion simulations on exascale architectures.

Jacqueline Chen told the audience that combustion is a surrogate for a much broader range of multiphysics computational science areas. The ExaCT partners will interact with the applied mathematics community on mathematical issues related to exascale. The petascale codes provide a starting point for the co-design process.

The simulation tools used for co-design include S3D, based on a compressible formulation, and the Low Mach Number (LMC) code, a model that exploits the separation of scales between the acoustic wave speed and the fluid motion. Next to this, other ingredients are required, including a second-order projection formulation, detailed kinetics and transport, and block-structured adaptive mesh refinement (AMR).
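
To give a feel for why the low Mach number formulation matters, the sketch below (not part of the talk; the flow speed, sound speed and CFL number are assumed) compares the stable timestep of an explicit compressible solver, which must resolve acoustic waves, with that of a low Mach number solver, which only needs to resolve the fluid motion:

    # Illustrative sketch (not from the talk) of why a low Mach number
    # formulation pays off: an explicit compressible solver must resolve
    # acoustic waves, so its stable timestep scales with 1/(|u| + c), while
    # a low Mach number solver only resolves fluid motion, scaling with 1/|u|.
    u = 10.0        # flow speed [m/s], assumed
    c = 340.0       # sound speed [m/s], assumed
    dx = 5e-6       # grid spacing [m], matching the 5 micron grid below
    cfl = 0.5       # assumed CFL number

    dt_compressible = cfl * dx / (u + c)   # acoustic CFL limit
    dt_low_mach = cfl * dx / u             # advective CFL limit

    print(f"compressible dt : {dt_compressible:.1e} s")
    print(f"low Mach dt     : {dt_low_mach:.1e} s")
    print(f"gain            : {dt_low_mach / dt_compressible:.0f}x, roughly 1/Mach")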

The expectation is that exascale will require a new code base, stated Jacqueline Chen, based on high-fidelity physics. The new code will support both compressible and low Mach number formulations and provide support for embedded uncertainty quantification (UQ) and in situ analytics.

For instance, the simulation has to capture relevant turbulence, pressure and temperature conditions. The target problem involves a domain of 5 cm3 resolved on a 5 micron grid, and 6 ms of simulated time at 5 ns timesteps, amounting to about 1.2e6 steps.
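
A quick back-of-envelope calculation from the figures quoted above shows the scale of that problem; the cell count depends on whether 5 cm3 is read as the domain volume or as a (5 cm)3 cube:

    # Back-of-envelope sizing of the target DNS problem quoted above. Both
    # readings of "5 cm3" are shown; either way the problem is enormous.
    dx = 5e-6                        # 5 micron grid spacing [m]
    cells_volume = 5e-6 / dx**3      # 5 cm^3 read as the domain volume [m^3]
    cells_cube = (5e-2 / dx)**3      # 5 cm^3 read as a (5 cm)^3 cube

    steps = 6e-3 / 5e-9              # 6 ms of physical time at 5 ns steps

    print(f"cells (5 cm^3 volume) : {cells_volume:.0e}")   # ~4e10
    print(f"cells ((5 cm)^3 cube) : {cells_cube:.0e}")     # ~1e12
    print(f"timesteps             : {steps:.1e}")          # 1.2e6, as quoted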

Jacqueline Chen warned that the petaflop workflow model will not scale. Performing the simulation is not enough: the researchers need to analyse the results. The I/O bandwidth constraint makes it infeasible to save all the raw simulation data to persistent storage, so the researchers will have to integrate the simulation and the analysis.
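
The following rough estimate, using assumed values for the number of solution variables and the output cadence, illustrates why writing every raw snapshot to persistent storage is out of reach:

    # Rough, assumed estimate of why saving every raw snapshot is infeasible.
    # The cell count, variable count and output cadence are all assumptions.
    cells = 4e10            # grid cells (lower of the two estimates above)
    variables = 60          # assumed: species mass fractions plus flow variables
    bytes_per_value = 8     # double precision

    snapshot = cells * variables * bytes_per_value   # one full field dump
    print(f"one snapshot   : {snapshot / 1e12:.0f} TB")        # ~19 TB
    print(f"1000 snapshots : {snapshot * 1000 / 1e15:.0f} PB") # ~19 PB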

Proxy machines and proxy applications are being used, including a solver proxy for uniform grid compressible flow and proxies for the Low Mach Number and AMR tools.

For the co-design methodology, measurement alone is not sufficient. The researchers require an analytic performance model to validate performance against hardware simulators and measurements and to confirm key predictions.

The performance modelling tool chain automatically predicts the performance of many input codes and software optimizations.

Byfl is implemented as a language- and architecture-independent middle-stage compiler pass, providing answers to some initial questions from the vendors, such as: what is the memory bandwidth per flop?
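
As a sketch of the kind of analytic model such byte and flop counts feed into, the roofline-style estimate below combines Byfl-like counts for a kernel with node characteristics to predict whether the kernel is memory- or compute-bound; all numbers are illustrative assumptions, not figures from the talk:

    # Sketch of a roofline-style analytic model driven by Byfl-style counts.
    def roofline_time(flops, bytes_moved, peak_flops, peak_bw):
        """The kernel can go no faster than either the compute limit or the
        memory-bandwidth limit allows."""
        return max(flops / peak_flops, bytes_moved / peak_bw)

    flops = 2.0e12          # floating-point operations in the kernel (assumed)
    bytes_moved = 1.0e12    # bytes loaded and stored (assumed)
    peak_flops = 1.0e12     # node peak: 1 Tflop/s (assumed)
    peak_bw = 1.0e11        # node peak: 100 GB/s (assumed)

    intensity = flops / bytes_moved    # flop per byte delivered by the code
    balance = peak_flops / peak_bw     # flop per byte the machine can sustain
    t = roofline_time(flops, bytes_moved, peak_flops, peak_bw)

    bound = "memory-bound" if intensity < balance else "compute-bound"
    print(f"arithmetic intensity : {intensity:.1f} flop/byte")
    print(f"machine balance      : {balance:.1f} flop/byte")
    print(f"predicted time       : {t:.0f} s ({bound})")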

Jacqueline Chen confronted the audience with the following co-design questions: what is the instruction mix for the computational throughput? How many registers are needed to capture scalar variables and avoid cache spills? And how sensitive is the application to memory bandwidth?

Even though transcendentals and division ops might be low in count, they can dominate the CPU time, warned Jacqueline Chen. "Neither software optimizations alone nor hardware optimizations will get us to the exascale, we have to apply both", insisted the speaker.
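
A simple weighted instruction-mix estimate, with assumed operation fractions and per-operation cycle costs, illustrates how a handful of expensive operations can swamp the cheap ones:

    # Illustrative only: a small share of divisions and transcendentals can
    # dominate CPU time once per-operation cost is accounted for. The
    # fractions and cycle costs below are assumptions, not measurements.
    mix = {                    # operation class: (fraction of ops, cycles/op)
        "add/multiply": (0.90,  1),
        "divide":       (0.05, 25),
        "exp/log":      (0.05, 50),
    }

    total_cycles = sum(frac * cost for frac, cost in mix.values())
    for name, (frac, cost) in mix.items():
        share = frac * cost / total_cycles
        print(f"{name:12s} {frac:4.0%} of ops -> {share:4.0%} of cycles")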

The previous analysis assumes an ideal network behaviour. The researchers have to use SST/macro to model the network contention.

A domain-specific language is a language of reduced expressiveness, targeted at developers in a specific, focused problem domain.

The researchers are exposing locality and independence in the programming model and expressing the parallelism in S3D.

The programming model is based on logical regions. The tasks are coded in a familiar sequential style. The Legion runtime uses region information to automatically extract parallelism and to map tasks that use the same data close together, so as to benefit from locality, according to Jacqueline Chen.
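
The sketch below is not the Legion API but a minimal illustration of the underlying idea: tasks written in program order declare the data regions they read and write, and a small runtime derives which tasks may safely run concurrently. All task and region names are hypothetical:

    # Conceptual sketch of a region-based task model (not the Legion API).
    from concurrent.futures import ThreadPoolExecutor

    class Task:
        """A unit of work that declares the data regions it reads and writes."""
        def __init__(self, name, fn, reads=(), writes=()):
            self.name, self.fn = name, fn
            self.reads, self.writes = set(reads), set(writes)

    def conflicts(a, b):
        """Two tasks conflict if either writes a region the other touches."""
        return bool(a.writes & (b.reads | b.writes) or
                    b.writes & (a.reads | a.writes))

    def run(tasks):
        """Respect program order, but execute runs of mutually non-conflicting
        tasks concurrently (a greedy wavefront schedule)."""
        i = 0
        with ThreadPoolExecutor() as pool:
            while i < len(tasks):
                wave = [tasks[i]]
                j = i + 1
                while j < len(tasks) and not any(conflicts(t, tasks[j]) for t in wave):
                    wave.append(tasks[j])
                    j += 1
                list(pool.map(lambda t: t.fn(), wave))  # wait for the whole wave
                i = j

    # Hypothetical usage: chemistry on two halves of the domain overlaps;
    # the diffusion task that reads both halves waits for them.
    run([
        Task("chem_A", lambda: print("chemistry on region A"), reads={"A"}, writes={"A"}),
        Task("chem_B", lambda: print("chemistry on region B"), reads={"B"}, writes={"B"}),
        Task("diff", lambda: print("diffusion over A and B"), reads={"A", "B"}, writes={"flux"}),
    ])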

The widening gap between compute power and available I/O rates will make it infeasible to save all the necessary data for post-processing, Jacqueline Chen feared.

Leslie Versweyveld
