Back to Table of contents

Primeur live 2011-06-23

Blog

The K in K computer, the world's fastest supercomputer ...

Some insights in the TOP500 ...

Prospect Association presents report "High Performance Computing in Europe: a vision for 2020" to the European Commission ...

Hardware

Petaflop/s: The why, the how and the what for? ...

Panasas gives customers a treat with scalable ActiveStor 11 ...

Four important questions on energy consumption in HPC ...

Memory makers ready to help out but have no control over the infrastructure ...

Applications

What will it take to achieve the simulation of the human brain? ...

Russia accelerates scientific innovation with GPU supercomputers ...

hpc-ch at ISC'11 to present HPC activities in Switzerland ...

University of Hamburg to simulate large data applications with Distributed Simulation and Virtual Reality environment ...

German automotive industry heavily relies on HLRS simulation expertise ...

Massive simulation and sensing devices generate great challenges and opportunities ...

TOP500

Japan reclaims top ranking on latest TOP500 list of world's supercomputers ...

Supercomputer "K computer" takes first place in world ...

The new TOP500 in figures and percentages ...

AMD sees 15 percent growth in six months on newest list of TOP500 supercomputers ...

Tera 100 once again crowned as Europe's most powerful supercomputer ...

The Lomonosov Supercomputer Has Attained a New Level of Performance ...

SGI Altix supercomputer system at NASA achieves Petaflop scale ...

The 2010 HPC status and the 2011 HPC weather forecast in ten chapters ...

Around the world of HPC in 30 minutes ...

The Grid

RealityServer: A powerful platform for 3D visualization and rendering now accessible in the Cloud ...

You ask "What Cloud?" It's Enterprise IT! ...

Company news

Altair's PBS Professional celebrates 20th anniversary and exhibits at International Supercomputing Conference in Hamburg ...

Adapteva selects E.T. International's software for breakthrough high-performance, low-power multicore processor ...

T-Platforms signs strategic reseller agreement with AEON Computing ...

International Supercomputing Conference sessions to feature lively exchanges of ideas ...

SGI to unveil groundbreaking HPC solutions at 2011 International Supercomputing Conference ...

QLogic automates installation, verification, monitoring and management of InfiniBand fabrics ...

Panasas introduces ActiveStor 11, delivering cost-effective parallel storage for HPC and big data workloads ...

Appro nabs exclusive supercomputing deal with three US National Laboratories ...

QLogic wins major deployment in NNSA's Tri-Labs cluster ...

T-Platforms to participate in supercomputer industry's premier European event, ISC11 ...

NetApp bolsters management capabilities, provides common foundation for enterprise customers' private, public, and hybrid Clouds ...

Bull announces the creation of BUX, the Bull User group for eXtreme Computing ...

Bright Cluster Manager 5.2 shipping with support for CUDA 4, ScaleMP, SLURM, LSF and enhanced multi-cluster management ...

Experts from Russia and EU collaborate on HOPSA; a new project to boost the performance of supercomputers ...

Companies at ISC'11 exhibition show media the innovations they've got in store for users ...

Xyratex exhibiting at International Supercomputing Conference 2011 ...

Supermicro showcases portfolio of HPC optimized solutions at ISC'11 ...

Mellanox FDR InfiniBand selected to deliver PetaScale performance for Europe’s fastest supercomputer ...

NetApp E5400 meets extreme customer demands in high-performance environments ...

VMware introduces vFabric 5, an integrated application platform for virtual and Cloud environments ...

Adaptive Computing announces 10x scalability for high throughput computing and next-generation HPC systems ...

University of Birmingham uses Adaptive Computing technology to reduce HPC costs ...

Adaptive Computing HPC Technology advances research capabilities for Europe's leading cancer research centre ...

Penguin Computing's public HPC Cloud is now powered by Adaptive Computing's Moab Cluster Suite ...

Numascale announces support for Supermicro 1042G-LTF and IBM System x3755 M3 with multi socket 8/12 core AMD Opteron 6000 Series ...

Mellanox expands availability of InfiniBand and Ethernet solutions with NEC LX Series Supercomputer through NEC HPC Europe ...

Mellanox and Lawrence Livermore National Laboratory demonstrate leading performance and scalability for HPC applications ...

Mellanox announces complete end-to-end FDR 56Gb/s InfiniBand interconnect solutions for uncompromised clustering performance and scalability ...

SGI to deliver breakthrough exascale computing solution ...

Platform's collaboration with CERN recognized as 2011 Computerworld Honors Laureate ...

Bright Computing, NVIDIA and NextIO to drive panel discussion on use of GPUs in HPC at International Supercomputing Conference in Germany ...

GE Global Research gets a Cray supercomputer ...

Italy's largest computing centre selects DataDirect Networks (DDN) and IBM to accelerate international scientific collaboration and discovery ...

SGI launches InfiniteStorage 5500 for high performance data solutions ...

SGI announces fifth-generation Altix ICE high performance compute solution ...

IBM certifies 40 Gigabit Ethernet line card from Force10 Networks ...

Fusion-io technology advances bioinformatics research at San Diego Supercomputer Center ...

Four important questions on energy consumption in HPC

23 Jun 2011 Hamburg - Steve Hammond from the National Renewable Energy Laboratory (NREL) chaired the panel on "Energy Efficiency or Net Zero Carbon by 2020" in the Thursday morning session at ISC'11. The panelists were Jean-Pierre Panziera from Bull, Michael K. Patterson from Intel, Volker Lindenstruth from the University of Frankfurt, and Taisuke Boku from the University of Tsukuba. Steve Hammond ignited the debate by stating that servers and data centres represent one of the fastest growing sectors in energy consumption. In the USA alone, servers and data centres are estimated to consume a significant share of the available electricity. Taking a holistic view of computing and data centres reveals significant potential for improving their energy efficiency and overall sustainability.

Every watt saved on power-efficient servers, storage systems, and related equipment helps, according to Steve Hammond. The land footprint of photovoltaic (PV) generation is currently about 5 acres per MW, whereas a data centre may require 1 MW continuously per 1,000 square feet. Any metric for sustainability should account for carbon emissions, water use, and waste.
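The mismatch between those two densities can be made concrete with a quick back-of-the-envelope calculation, a sketch using only the two figures quoted above:

```python
# Rough arithmetic on the land-footprint mismatch Steve Hammond described:
# PV generation needs about 5 acres per MW, while a dense data centre can
# draw 1 MW continuously per 1,000 square feet of floor space.

ACRE_SQFT = 43_560                            # square feet in one acre

pv_acres_per_mw = 5
pv_sqft_per_mw = pv_acres_per_mw * ACRE_SQFT  # 217,800 sq ft of PV per MW

dc_sqft_per_mw = 1_000                        # data centre floor space per MW

ratio = pv_sqft_per_mw / dc_sqft_per_mw
print(f"PV needs ~{ratio:.0f}x the land area of the data centre it powers")
# → PV needs ~218x the land area of the data centre it powers
```

In other words, powering a data centre entirely from on-site solar would demand on the order of two hundred times the building's own footprint in panels, which is why location and climate dominate the renewables discussion below.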

NREL has a new, near net-zero-energy 222,000 square foot office building housing approximately 800 staff and a 200 kW enterprise data centre. The construction cost is consistent with similar buildings. The energy usage amounts to 35.1 kBtu/sf/year including the IT data centre, which is 39% of that of the average US office building. There is extensive use of passive heating and cooling, and of daylighting. The building will save about 3000 tons of carbon-equivalent emissions per year.

Every watt counts, Steve Hammond continued. The whole-building energy use amounts to 283 watts per occupant.

The new NREL ESIF HPC data centre will house a petascale HPC capability in 2012. There is a 20-year planning horizon for the energy data hub. The Insight Center will contain a scientific data visualization facility supporting collaboration and interaction.

The showcase facility will use 10 MW for 10,000 square feet and will leverage the favourable local climate, relying on evaporative cooling rather than compressor-based chilling. The data centre is designed to capture 100% of the heat load to liquid. One can treat this as the data centre equivalent of the visible man: tours will take in the pump room and the mechanical spaces, the colour-coded pipes, and the LCD monitors.

Up to 10% of the heat load is ejected to air, and the "chilled" water supply runs at 75°F (about 24°C). Steve Hammond quipped that staff will have summer when they have to and winter because they want to.

For the panelists he had four questions:

1. What is the impact of location on data centre energy efficiency and the use of renewable energy?

2. Given the energy density as well as the intermittent nature of renewables such as solar and wind, what are the opportunities or impediments for using renewable energy to power data centres?

3. What about return on investment: can improving the sustainability of the data centre also pay off financially?

4. What are the prospects for a net zero carbon data centre by 2020?

Panelist Taisuke Boku first talked about the situation in Japan after the earthquake of March 11, 2011. Energy efficiency and energy saving are now serious, practical problems in Japan. With the nuclear reactor accident of 3.11, Japan lost four power plants, and some other plants have been forced to stop or have not come back online after maintenance.

The real crisis may come this summer: the area served by TEPCO is required to cut daytime power use by 15% for everything, including supercomputer operation. A rapid solution, rather than merely a general understanding, is required now for this area. The situation could be a good opportunity to accelerate toward a net-zero society in Japan.

The human losses and damage are very serious after the triple disaster of the earthquake, the tsunamis, and the nuclear power plant problems. The Fukushima Dai-ichi nuclear power plant is located on the Pacific coast about 220 km north of Tokyo.

The government has requested a power consumption cut of more than 15% for all companies and people in Japan during the summer, from July to September. Some supercomputers must be stopped during the summer to meet the power cut-off request. Power restriction of supercomputers is now a major issue in designing future HPC systems. The K computer has a cogeneration system and keeps the cooling power relatively low. TSUBAME2.0 in Tokyo has a very good PUE.

Taisuke Boku answered Steve Hammond's questions as follows.

Even in Japan, which is a relatively small country, the impact of location selection is large. When the site for the K computer was being decided, more than 10 cities raised their hands. From the viewpoint of energy saving in summer, Hokkaido presented itself as the best region.

The use of natural power might be good, but the side effects and the stability of the power generation need to be considered, according to Taisuke Boku. What happens to the land used for big solar installations? We should compare against the situation without them. For stable power generation, a combined and mixed system drawing on different sources of electricity is required for a constant supply. In Japan, the 50 Hz/60 Hz grid split is a serious obstacle to sharing electricity.

Return on investment may well go hand in hand with improving the sustainability of data centres in Japan; many big companies are currently starting to invest significant effort.

On the prospects for a net-zero carbon data centre by 2020, Taisuke Boku could not give an answer. Cogeneration is one of the essential key technologies for bringing the PUE down towards 1.0.

Abandoning air cooling and shifting to liquid cooling might be a solution, since GPU computing runs hot, literally.

Volker Lindenstruth told the audience that the Main river is used to cool the system in Frankfurt. The Frankfurt system is ranked no. 8 on the Green500 list and no. 22 on the TOP500. The cost amounts to 200 euro per core, and the data centre has a PUE value of 1.07 and operates at a CO2-neutral level.

For the next generation data centre, the University of Frankfurt is planning two buildings including one for the staff.

Jean-Pierre Panziera from Bull told the audience that electricity constitutes a significant part of the HPC budget. Industrial electricity prices vary widely across Europe, and green energy is expensive. One MW of continuous load costs about 1 million euro per year, at an average price of around 11 euro cents/kWh.
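The two figures quoted are consistent with each other once the price is read as euro cents per kWh; a quick cross-check:

```python
# Cross-check of the Bull figures: 1 MW of continuous load at roughly
# 0.11 euro/kWh (11 euro cents/kWh) comes to about 1 million euro per year.

HOURS_PER_YEAR = 8_760      # 24 h * 365 days

load_kw = 1_000             # 1 MW expressed in kW
price_eur_per_kwh = 0.11    # ~11 euro cents/kWh average industrial price

annual_cost_eur = load_kw * HOURS_PER_YEAR * price_eur_per_kwh
print(f"Annual cost for 1 MW continuous: ~{annual_cost_eur / 1e6:.2f} million euro")
# → Annual cost for 1 MW continuous: ~0.96 million euro
```

This is also why a multi-MW petascale machine carries a multi-million-euro annual electricity bill before any cooling overhead (PUE > 1) is added on top.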

The PUE value can be brought down through better cooling and power usage: air cooling, water-cooled doors, direct liquid cooling, and cogeneration.

The life cycle behind an HPC system's carbon footprint consists of manufacture, use, and recycling. The variable part of the footprint depends on the local electricity mix: for one year of use, the kgCO2/kWh figure varies widely across Europe; in France it is 0.09.

Jean-Pierre Panziera presented Extreme Factory, Bull's HPC-as-a-Service offering, as a route to sustainable HPC. HPC systems are getting more powerful and are consuming more and more, non-renewable energy prices are rising fast, and the environmental awareness of society is growing. Companies like Bull have to design energy-efficient systems and manufacture environment-friendly systems. Data centres should be optimized by using green energy.

For this we have to add a high environmental performance (HEP) focus to HPC.

Michael K. Patterson from Intel had five topics in store: the energy-efficient data centre; metrics that will lead us into sustainability; energy-efficient computing; exascale; and the "98%" net zero and the carbon return on investment (ROI): HPC can drive carbon reduction through the 98% and be net-zero itself.

The PUE metric is simple and effective: PUE = total facility energy / IT equipment energy. We are getting down to very low levels of PUE, according to Michael K. Patterson.

The ERE value adds energy reuse to the PUE concept: ERE = (total energy - reused energy) / IT energy.
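The relationship between the two metrics can be written out as a small sketch; the example numbers below are illustrative, not figures from the session:

```python
# Sketch of the two metrics discussed in the panel:
#   PUE = total facility energy / IT equipment energy
#   ERE = (total facility energy - reused energy) / IT equipment energy

def pue(total_energy: float, it_energy: float) -> float:
    """Power Usage Effectiveness; 1.0 is the ideal (no overhead)."""
    return total_energy / it_energy

def ere(total_energy: float, reused_energy: float, it_energy: float) -> float:
    """Energy Reuse Effectiveness; heat reuse can push this below 1.0."""
    return (total_energy - reused_energy) / it_energy

# Illustrative facility: 1.2 MWh total, 1.0 MWh delivered to IT gear,
# 0.4 MWh of waste heat captured and reused (e.g. for building heating).
total, it, reused = 1.2, 1.0, 0.4
print(f"PUE = {pue(total, it):.2f}")         # → PUE = 1.20
print(f"ERE = {ere(total, reused, it):.2f}")  # → ERE = 0.80
```

The sketch shows why ERE, unlike PUE, can drop below 1: reused heat is subtracted from the facility's energy bill, which is exactly the effect cogeneration and liquid-cooled heat capture are aiming for.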

Water and carbon constitute two new metrics for data centres: the annual site water use and the annual source energy water usage. The ERE has to be brought below 1. Moore's Law and IA innovation promise a doubling of efficiency every 16 months.

Exascale systems will be part of the TOP500 by 2020, according to Michael K. Patterson, who asked the audience whether HPC has a carbon ROI and, if so, how we should exploit it.

We have to drive computing to be more energy efficient; here, we are talking about a 2% opportunity. We also have to use computing to improve energy savings outside information and communications technology; there, the 98% constitutes the big opportunity.

The energy panel concluded that by 2020, there will be enough energy for two exascale class HPC machines.

Leslie Versweyveld
