20 Jun 2016, Frankfurt - The Mellanox HPC-X Scalable Framework, targeted at data-centric and HPC applications, is extending its support for In-Network Computing, which will dramatically increase HPC application performance. HPC-X is a comprehensive software suite that includes Message Passing Interface (MPI), Shared Memory (SHMEM) and Partitioned Global Address Space (PGAS) communications libraries for high-performance computing environments, and provides enhancements that significantly increase the scalability and performance of message communications across the network.
Building on its track record of pioneering innovation, Mellanox continues to invest in the HPC software ecosystem, enabling it to take advantage of advanced data-processing elements that extend to a new "network class of co-processors", already introduced through co-design collaboration.
Recently, Mellanox introduced Switch-IB 2, the world's first EDR 100 Gb/s intelligent network device able to manage and execute MPI collective communication algorithms within the network fabric, allowing the fabric to process and aggregate data in flight. This capability is known as "SHArP", Mellanox's Scalable Hierarchical Aggregation Protocol. HPC-X now provides robust support for the powerful hardware features of Switch-IB 2 and integrates seamlessly with existing software applications.
"HPC data centres can easily reap the benefits of In-Network Computing capabilities today", stated Scot Schultz, director of HPC and technical computing, Mellanox. "The performance of collective communication operations can be enhanced at least tenfold by leveraging HPC-X with SHArP technology. Essentially, the performance benefits grow as the system size increases."
HPC-X also provides enhancements to support another evolutionary advancement - InfiniBand routing. The latest SB7780 InfiniBand Router from Mellanox increases resiliency by providing the ability to segregate the data centre's network into several subnets. This, in turn, enables the fabric to scale to a virtually unlimited number of nodes. HPC-X now enables applications to scale beyond a localized cluster resource and run across distinct subnets of HPC clusters. Additionally, support for InfiniBand routing with any system topology configuration across subnets allows system resources to be expanded into complex data workflows.
"As HPC organizations work to solve critical scientific, business and research problems, their competitive edge relies on highly dense systems that can process results quickly and without interruption", stated Scott Misage, vice president and general manager, High Performance Computing, Hewlett Packard Enterprise. "With Mellanox's EDR fabric, we have a robust set of building blocks that are critical in helping these customers achieve maximum system efficiency and eliminate bottlenecks to solve these problems even faster."
HPC-X not only provides complete communication libraries supporting the MPI, SHMEM and PGAS programming models, but also includes advanced performance accelerators that take advantage of Mellanox's scalable smart interconnect solutions, along with profiling and benchmarking tools and more. Moreover, HPC-X enables rapid deployment and delivery of maximum application performance without the complexity and cost of licensing third-party tools and libraries.