4 Apr 2012 Tokyo - The Research Institute for Information Technology at Kyushu University has placed orders for a new system, consisting of a Supercomputer System and a High-performance Computational Server System. The Supercomputer System will use a configuration of Fujitsu's PRIMEHPC FX10 nodes, and the High-performance Computational Server System will employ a cluster configuration of PRIMERGY CX400 x86 servers. Combined, the two systems will achieve a total theoretical peak performance of 691.7 teraflops, making the new supercomputer system the largest-scale system in the Kyushu region of Japan.
The new supercomputer system will begin operations in July 2012. It will be used to support the Research Institute for Information Technology's advanced research and educational activities in a variety of fields of science and technology. It is also expected to be used by corporations.
Kyushu University's activities focus on offering education and research in Japan's Kyushu region, where it is the largest national university. The Research Institute for Information Technology is a shared facility available for use by university faculty, graduate students and other researchers from across Japan in their academic research. Since 2007, the Research Institute for Information Technology has operated a supercomputer system employing Fujitsu's PRIMEQUEST and PRIMERGY servers. With the recent deployment of world-class massively parallel computers in Japan, however, the university has been planning to upgrade its systems and application development environments to support calculations of even greater scale.
The Research Institute for Information Technology chose Fujitsu's supercomputer system for its superior computing performance, energy efficiency, execution performance and availability. It can also be employed to develop and optimize applications for use with the K computer supercomputer.
The calculation nodes of the new system will use a configuration of 768 PRIMEHPC FX10 nodes and 1,476 PRIMERGY CX400 nodes, thereby achieving a total theoretical computational speed of 691.7 teraflops. As a result, the new supercomputer system is anticipated to be the largest-scale system in Kyushu, and one of only a handful of such systems in the country.
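The announced 691.7-teraflop figure is simply the sum of the per-node theoretical peaks across both subsystems. The sketch below illustrates that breakdown; note that the per-node peak figures are assumptions inferred from the hardware generation (the SPARC64 IXfx processor used in the PRIMEHPC FX10, and a presumed dual-socket 8-core Xeon configuration in the CX400), not values stated in this announcement.

```python
# Sketch: how a combined theoretical peak near 691.7 TFLOPS could break down
# across the two subsystems. Per-node peaks are ASSUMPTIONS based on the
# hardware generation, not figures from the announcement:
#   - PRIMEHPC FX10: SPARC64 IXfx, 16 cores x 1.848 GHz x 8 FLOP/cycle
#   - PRIMERGY CX400: assumed dual 8-core Xeon E5 at 2.7 GHz x 8 FLOP/cycle

FX10_NODES = 768
CX400_NODES = 1476

fx10_node_gflops = 16 * 1.848 * 8      # ~236.5 GFLOPS per FX10 node
cx400_node_gflops = 2 * 8 * 2.7 * 8    # ~345.6 GFLOPS per CX400 node (assumed)

fx10_tflops = FX10_NODES * fx10_node_gflops / 1000
cx400_tflops = CX400_NODES * cx400_node_gflops / 1000
total_tflops = fx10_tflops + cx400_tflops

print(f"FX10 subsystem:  {fx10_tflops:.1f} TFLOPS")
print(f"CX400 subsystem: {cx400_tflops:.1f} TFLOPS")
print(f"Combined peak:   {total_tflops:.1f} TFLOPS")  # close to the announced 691.7
```

Under these assumed per-node figures, the FX10 partition contributes roughly 181.7 teraflops and the CX400 cluster roughly 510.1 teraflops, which together match the announced total to within rounding.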
Combining high performance, scalability, and reliability with superior energy efficiency, PRIMEHPC FX10 further enhances Fujitsu's technology used in the K computer, which achieved the world's top-ranked performance. PRIMERGY CX400 is a high-density server that can support 84 nodes per rack, roughly twice the density of conventional 1U rack servers, making it an ideal x86 server for high performance computing.
For its HPC middleware, the system will deploy Technical Computing Suite for peta-scale systems, with 66 PRIMERGY series servers serving as login nodes. For storage, ETERNUS storage systems with a combined capacity of 4.6 petabytes will be deployed. The system's file system will be constructed using FEFS, a high-capacity, high-performance and highly reliable distributed file system.
Mutsumi Aoyagi, Director, Research Institute for Information Technology, stated: "Many of our center's users are Japan's top researchers, and they are also K computer users. As an organization providing resources for the High-Performance Computing Infrastructure (HPCI) initiative, which began full-fledged operations this fiscal year, we hope to contribute to the further development of Japan's computational science capabilities by deploying PRIMEHPC FX10, which is highly compatible with the K computer, and the highly energy-efficient and high-density PRIMERGY CX400."
"Later this year, we plan to equip the system with high-performance GPGPUs, which we anticipate will enable a dramatic improvement in the performance of applications in areas such as computational fluid dynamics and molecular science. Moreover, the High-performance Computational Server System will include a visualization server equipped with remote screen sharing functionality and high-capacity memory, as well as a variety of visualization tools. This will make it possible to perform pre/post-processing of computations for massive volumes of data."