"We have been working with NASA for many years to provide the key interconnect solutions for NASA supercomputing platforms", stated Gilad Shainer, vice president of marketing at Mellanox Technologies. "By leveraging 100G EDR InfiniBand and the multiple In-Network Computing engines, such as MPI offloads, RDMA and more, NASA will be able to maximize their data centre return on investment."
"HPE is excited to have partnered with Mellanox to deploy NASA Ames's next-generation HPC cluster based on HPE's new SGI 8600 liquid-cooled platform with the Mellanox ConnectX-5 interconnect", stated Craig Yamasaki, director of product management, High Performance Computing and AI at HPE. "HPE has coupled its Hypercube topology with the new ConnectX-5 interface to enable system expansion without the use of external switches while delivering cost savings and performance improvements."
The intelligent In-Network Computing capabilities incorporated in Mellanox's ConnectX-5 InfiniBand adapters and Switch-IB 2 InfiniBand switches enable advanced data processing and real-time analytics, resulting in world-leading application performance and scalability.