SCELSE's research into understanding and controlling environmental microbial communities requires highly complex data analysis at large scale. Running and analyzing these massive datasets has traditionally been a time-intensive process for life science researchers, who frequently work against the clock to meet their deadlines, especially in clinical and public health-related projects. With these concerns in mind, system performance was a key consideration in the centre's search for a new High Performance Computing (HPC) solution, and the primary reason for choosing Cray.
"We're thrilled that SCELSE has placed its trust in Cray to power its scientific research," stated Nick Gorga, vice president of sales for the Asia-Pacific region at Cray. "Life science research is advancing rapidly, and Cray's technology is helping research centres across the globe manage the growing complexity and volume of their data sets, and accelerate time-to-value of data analysis. With the performance, capacity and scalability of the Cray CS500, SCELSE's groundbreaking research will no longer be constrained by the limits of its computational infrastructure."
NTU Prof Stephan Schuster, Research Director at SCELSE, stated: "The new Cray supercomputer at SCELSE will play an important role in enabling us to keep pace with the evolving needs and computational demands of interdisciplinary research. By rapidly speeding up the time it takes to process and analyze data, we will be able to save time and energy, and focus on more important scientific endeavours."
The CS500 was selected for its superior price-performance, scalability and functionality, surpassing all other solutions in an open tender. Cray's price/performance ratio appealed to SCELSE, and the system will contain more than 12,000 AMD EPYC processor cores.
Cray's CS500 cluster supercomputers are highly scalable, flexible and customizable systems designed to provide users with a wide selection of configurations. The CS500 is uniquely suited to handle the broadest range of simulations and run data-intensive workloads smoothly at capacity.
The first phase of the system was delivered and put into production in June 2019.