The festive inauguration of "SuperMUC Phase 2" was kicked off with a symbolic act: Dr. Ludwig Spaenle, Minister of Science of the Free State of Bavaria; Stefan Müller, Parliamentary State Secretary to the Federal Minister of Education and Research; Karl-Heinz Hoffmann, President of the Academy; Prof. Dr. Arndt Bode, Director of the LRZ; Martina Koederitz, General Manager of IBM Germany; and Christian Teismann, Vice President and General Manager, Global Account Business, Lenovo, jointly pressed the "Red Button" symbolizing the start-up of the expansion of the HPC system SuperMUC.
The extension of SuperMUC, an IBM System x iDataPlex that first became operational in mid-2012, followed the previously defined system roadmap. 86,016 processor cores in 6,144 Intel Xeon E5-2697 v3 processors, based on Intel's latest technology, were added to the previously available 155,656 processor cores, lifting the maximum theoretical computing power to 6.8 Petaflops. This performance boost comes with surprisingly modest space requirements: while more than doubling the overall system performance, Phase 2 requires only one fourth of the footprint of SuperMUC Phase 1.
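The headline figures can be cross-checked with a quick back-of-the-envelope calculation. The sketch below takes the core and processor counts from the text; the 2.6 GHz base clock and 16 double-precision FLOPs per core per cycle (two AVX2 FMA units) are assumed nominal specifications of the Xeon E5-2697 v3, not stated in the article.

```python
# Back-of-the-envelope check of the SuperMUC Phase 2 figures.
# Assumed (not from the article): 2.6 GHz base clock and
# 16 double-precision FLOPs per core per cycle (AVX2 FMA).

cores_phase2 = 86_016        # cores added in Phase 2 (from the article)
sockets_phase2 = 6_144       # Xeon E5-2697 v3 processors (from the article)
cores_phase1 = 155_656       # cores in Phase 1 (from the article)

cores_per_socket = cores_phase2 // sockets_phase2   # 14 cores per processor

clock_hz = 2.6e9             # assumed nominal base frequency
flops_per_cycle = 16         # assumed: 2 x 256-bit FMA units, double precision

peak_phase2 = cores_phase2 * clock_hz * flops_per_cycle

print(f"Cores per socket: {cores_per_socket}")
print(f"Total cores:      {cores_phase1 + cores_phase2:,}")
print(f"Phase 2 peak:     {peak_phase2 / 1e15:.2f} PFlop/s")
```

Under these assumptions, Phase 2 alone contributes roughly 3.6 PFlop/s; together with Phase 1's peak performance this is consistent with the 6.8 Petaflops total quoted above.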
Users will especially benefit from the fact that, as with SuperMUC Phase 1, the system expansion refrains from using so-called accelerators. Because of this, existing applications can continue to be used without any major adaptations to the software.
The LRZ supercomputing infrastructure now offers an additional 7.5 Petabytes of SAN/DAS storage on GPFS Storage Servers (GSS). By combining IBM's Spectrum Scale technology with Lenovo System x servers, 5 PB of data are managed with an aggregated bandwidth of 100 GB/s across the distributed environment. A total main memory of just under 500 Terabytes is now available.
SuperMUC will continue to be one of the most energy-efficient supercomputers in the world. The proven hot-water cooling technology implemented by IBM was also applied to Phase 2 of the installation: through a network of microchannels, the cooling system circulates water at 45 degrees Celsius over active system components, such as processors and memory, to dissipate heat. Thus, no additional chillers are needed. The use of the latest processors, which can adapt their frequency to the specific needs of the computations, adds to the efforts to reduce power usage. In combination with energy-optimizing operating software, these energy-saving measures result in an overall reduction of system power usage by approximately 40 percent.
"Energy efficiency is a key component of today's computing devices - from smartphones to supercomputers", explained Arndt Bode, Chairman of the LRZ. "With Phase 2 of SuperMUC, LRZ continues to act as a pioneer in this field as we deliver proof that it is possible to significantly lower the energy consumption in data centres and thus drastically reduce the operating costs. SuperMUC Phase 2 opens up new ways for exciting and novel research - from the study of asthma in child patients to the origin of the universe - all based on the use of one of the most powerful and energy-efficient supercomputers in the world, delivered by IBM and Lenovo."
Like SuperMUC Phase 1, the LRZ system expansion has been designed for exceptionally versatile deployment. The more than 150 different applications that run on SuperMUC in an average year range from problems in physics and fluid dynamics to a wealth of other scientific fields, such as aerospace and automotive engineering, medicine and bioinformatics, and astrophysics and geophysics, amongst others. Professor Bode is confident that the now available system expansion will also be of great benefit to scientists in their pursuit of breakthrough answers to the great questions of our time. The results of the first two years of research supported by SuperMUC are available as a report.
The Leibniz Supercomputing Centre in Garching near Munich is one of the three member centres of the Gauss Centre for Supercomputing (GCS). As with the HPC systems of the other two GCS member centres - Hornet of the High Performance Computing Center Stuttgart (HLRS) and JUQUEEN of the Jülich Supercomputing Centre (JSC) - computing time on SuperMUC is granted to researchers in Germany and Europe through a scientific peer-review process. Further information on computing time allocation is available at http://www.gauss-centre.eu/computing-time/ and http://www.prace-ri.eu/Call-Announcements?lang=en, respectively.

Financing of SuperMUC
As with the first installation phase, SuperMUC's system expansion, including service expenses and operating costs (a total of 49 million euros), has been funded through the project PetaGCS, with the Federal Ministry of Education and Research (BMBF) and the Bavarian State Ministry of Science, Research and the Arts covering the expenses in equal shares.