In today's architectures, the CPU must handle memory copies between the GPU and the InfiniBand network. Mellanox was the lead partner in the development of NVIDIA GPUDirect, a technology that reduces CPU involvement and cuts latency for GPU-to-InfiniBand communication by up to 30 percent. This communication speedup can add up to a gain of over 40 percent in application productivity when a large number of jobs are run on a server cluster. NVIDIA GPUDirect technology with Mellanox scalable HPC solutions is in use today in multiple HPC centres around the world, accelerating leading engineering and scientific applications.
"As the popularity of GPU-based computing continues to increase, the importance of NVIDIA GPUDirect together with Mellanox's offloading-based InfiniBand technology is critical to our world-leading HPC systems", stated Dr. HUO Zhigang, The National Research Center for Intelligent Computing Systems (NCIC). "We have implemented NVIDIA GPUDirect technology with Mellanox ConnectX-2 InfiniBand adapters and Tesla GPUs and have seen the immediate performance advantages that it brings to our high-performance applications. Mellanox offloading technology is an essential component in this overall solution as it brings out the real capability to avoid the CPU for the GPU-to-CPU communications."
"The rapid increase in the performance of GPUs has made them a compelling platform for
computationally-demanding tasks in a wide variety of application domains", stated Michael Kagan, CTO at Mellanox Technologies. "To ensure high levels of performance, efficiency and scalability, data communication must be performed as fast as possible, and without creating extra load on the CPUs. NVIDIA GPUDirect technology enables NVIDIA GPUs, coupled with Mellanox ConnectX-2 40Gb/s InfiniBand adapters, to communicate faster, increasing overall system performance and efficiency."
GPU-based clusters are being used to perform compute-intensive tasks such as finite element computations, computational fluid dynamics and Monte Carlo simulations. Supercomputing centres are beginning to deploy GPUs in order to achieve new levels of performance. Since GPUs provide high core counts and strong floating-point capabilities, high-speed InfiniBand networking is required to connect the platforms in order to provide high throughput and the lowest latency for GPU-to-GPU communications. Mellanox ConnectX-2 adapters are the world's only InfiniBand solutions that provide the full offloading capabilities critical to avoiding CPU interrupts, data copies and system noise, while maintaining high efficiency for GPU-based clusters. Combined with the availability of NVIDIA GPUDirect and CORE-Direct technologies, Mellanox InfiniBand solutions are driving HPC to new performance levels.
"The combination of Mellanox 40Gb/s InfiniBand interconnects and GPU computing opens up a new world of possibilities to accelerate science and engineering research", stated Andy Keane, general manager, Tesla business at NVIDIA. Products coming to market employing the technologies of both NVIDIA and Mellanox help set the stage for next-generation, GPU-based clusters that are both high performance and highly efficient."