"The innovations built into our 200G HDR InfiniBand solutions, the performance advantages and the in-network computing acceleration engines, make HDR InfiniBand the interconnect-of-choice for the world's leading compute and storage infrastructures. We began shipping it to multiple customers in 2018, and we continue to see strong momentum for HDR InfiniBand across all geographies", stated Amir Prescher, senior vice president, end-user sales and business development at Mellanox Technologies. "The world-wide strategic race to Exascale supercomputing, the exponential growth in data we collect and need to analyze, and the new performance levels needed to support new scientific investigations and innovative product designs, all require the fastest and most advanced HDR InfiniBand interconnect technology. HDR InfiniBand solutions enable breakthrough performance levels and deliver the highest return on investment, enabling the next generation of the worlds leading supercomputers, hyperscale, Artificial Intelligence, Cloud and enterprise data centres."
The HDR InfiniBand Quantum CS8500 modular switch system provides 800 ports of 200 gigabit per second throughput per direction (400 gigabit per second of full bi-directional throughput per port), or 1600 ports of 100 gigabit per second throughput per direction (200 gigabit per second of full bi-directional throughput per port), making it the world's fastest high-speed smart switch system. Quantum CS8500 delivers a total of 320 terabits per second of switching capacity, 1.4 to 2.5 times higher than alternative products, equivalent to sending the contents of 800 Ultra HD Blu-ray discs every second.
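The headline 320 Tb/s figure follows directly from the port counts and line rates quoted above; a minimal sketch of the arithmetic:

```python
# Switching-capacity arithmetic for the Quantum CS8500, using the
# figures quoted above: 800 HDR ports at 200 Gb/s per direction.
PORTS_HDR = 800        # number of 200 Gb/s HDR ports
RATE_HDR_GBPS = 200    # per-direction line rate, in Gb/s
DIRECTIONS = 2         # full duplex: transmit plus receive

capacity_tbps = PORTS_HDR * RATE_HDR_GBPS * DIRECTIONS / 1000
print(capacity_tbps)   # 320.0 Tb/s, matching the quoted capacity

# The alternative configuration of 1600 ports at 100 Gb/s each
# yields the identical aggregate switching capacity.
assert 1600 * 100 * 2 / 1000 == capacity_tbps
```

The two configurations are the same fabric viewed at different port granularities, which is why both work out to 320 terabits per second.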
The HDR 200G InfiniBand in-network computing acceleration engines, including the Mellanox Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) and other Message Passing Interface (MPI) offloads, provide the highest performance and scalability. For High-Performance Computing (HPC) and Artificial Intelligence (AI) applications, Mellanox SHARP improves data aggregation operations by 3 times compared to CPU-based implementations, accelerating scientific simulations, data analysis and AI deep-learning training applications.
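The idea behind hierarchical aggregation is that partial results are combined level by level inside the switch fabric rather than funnelled through host CPUs. The toy pairwise tree reduction below illustrates that concept only; it is our own sketch, not Mellanox's actual SHARP protocol:

```python
import operator

def tree_allreduce(values, op=operator.add):
    """Toy hierarchical reduction: combine contributions pairwise,
    level by level, the way a switch-resident aggregation tree would,
    instead of performing every combine step on one CPU."""
    level = list(values)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(op(level[i], level[i + 1]))
        if len(level) % 2:       # an odd element carries up unchanged
            nxt.append(level[-1])
        level = nxt
    return level[0]

# Eight "hosts" each contribute one partial sum; the tree needs only
# log2(8) = 3 aggregation levels rather than 8 serial additions.
print(tree_allreduce([1, 2, 3, 4, 5, 6, 7, 8]))  # 36
```

The logarithmic depth of the tree is where the scalability comes from: doubling the number of contributing hosts adds only one more aggregation level.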
The HDR InfiniBand adapters deliver the highest throughput and message rate in the industry, demonstrating the ability to inject 215 million messages per second into the network, 1.5 times higher than EDR InfiniBand.
The higher switch density of the HDR InfiniBand Quantum switch enables Mellanox users to optimize their use of space and power, and to maximize their data-centre return on investment. For departmental-scale deployments, a single Quantum QM8700 switch connects 80 servers, 1.7 times more than competitive products. At enterprise scale, a 2-layer Quantum switch topology connects 3,200 servers, 2.8 times more than competitive products. At hyperscale, a 3-layer Quantum switch topology connects 128,000 servers, 4.6 times more than competitive products.
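The three server counts above are consistent with a non-blocking fat tree built from an 80-port switching element; our assumption (not stated in the text) is that the QM8700's 40 HDR ports are each split into two 100 Gb/s server-facing ports, giving an effective radix of 80. A sketch of the arithmetic:

```python
# Fat-tree scaling for a switching element of radix r. r = 80 assumes
# each of the QM8700's 40 HDR ports is split into two HDR100 server
# ports - our reading of the figures above, not an official config.
r = 80

one_layer = r              # a single switch: every port faces a server
two_layer = r * r // 2     # 2-layer tree: half the leaf ports go up
three_layer = r ** 3 // 4  # classic 3-level non-blocking fat tree

print(one_layer, two_layer, three_layer)  # 80 3200 128000
```

Each figure matches the corresponding server count quoted in the press release, which supports the radix-80 reading.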
The Mellanox HDR InfiniBand end-to-end solution, including ConnectX-6 adapters, Quantum switches, the upcoming HDR BlueField system-on-a-chip (SoC), and LinkX cables and transceivers, delivers the best performance and scalability for HPC, cloud, artificial intelligence, storage and other applications, providing users with the capabilities to enhance their research, discoveries and product development. The HDR BlueField SoC will provide optimized NVMe storage performance, enhanced security capabilities and the ability to offload user-defined algorithms from the main CPU to the network, resulting in lower latency and higher data-centre efficiency. Mellanox InfiniBand solutions support all compute architectures, with guaranteed backward and forward compatibility.