Full support is based on the message passing interface (MPI) MVAPICH2-2.0b release by The Ohio State University.
"Using enhanced MVAPICH2-2.0b with NVIDIA GPUDirect RDMA-based designs, end-users will now see a significant reduction in latency for small messages and an increase in bandwidth for large messages", stated Professor Dhableswar K. (DK) Panda of the Ohio State University. "The MVAPICH2-2.0b design with NVIDIA GPUDirect RDMA support is able to deliver excellent performance for K40 GPUs using Connect-IB FDR adapters."
"We see increased adoption of FDR InfiniBand and NVIDIA GPUDirect RDMA technology by leading commercial partners, government agencies, as well as academia and research institutions", stated Gilad Shainer, vice president of marketing at Mellanox Technologies. "Mellanox's FDR InfiniBand solutions with NVIDIA GPUDirect RDMA are providing the highest level of application performance, scalability and efficiency for GPU-based clusters."
"With 12GB of ultra-fast GDDR5 memory and support for PCIe Gen 3 interconnect technology, the new Tesla K40 accelerators are ideal for ultra-large scale scientific and commercial workloads", stated Ian Buck, vice president of Accelerated Computing at NVIDIA. "When coupled with NVIDIA GPUDirect RDMA technology, Mellanox InfiniBand solutions unlock new levels of performance for HPC customers by enabling direct memory access from the GPU across the InfiniBand fabric."
Beta-level support for NVIDIA GPUDirect RDMA and MVAPICH2-2.0b-GDR will be publicly available this quarter with the upcoming MLNX_OFED 2.1 release.