The event, held August 26-27 at OSC's BALE Theater, was organized by Dhabaleswar K. "DK" Panda, Ph.D., a professor of computer science at The Ohio State University. Dr. Panda is a longtime user of OSC HPC resources and a partner with the center on several research projects.
Dr. Panda's Network-Based Computing Research Group developed and continues to enhance MVAPICH2, the popular HPC system software package. The two-day event included talks from experts in the field, presentations from the MVAPICH2 team on tuning and optimization strategies for various components, troubleshooting guidelines, contributed presentations, an open-microphone session and an interactive, hands-on session with the MVAPICH2 developers.
Message Passing Interface (MPI), the lingua franca of scientific parallel computing, is a standard for the communications library that a parallel application uses to share data among its tasks; implementations are available on a wide variety of parallel computing platforms. On the hardware side, InfiniBand is a widely used processor interconnect favored for its open standards and high performance.
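To make the model concrete, here is a minimal, illustrative sketch of an MPI program in C: each task (rank) starts the MPI runtime, rank 0 broadcasts a value to all the others, and every task reports what it received. It uses only standard MPI calls and would be compiled with an MPI wrapper such as mpicc; the broadcast value of 42 is an arbitrary placeholder chosen for this example.

    /* Minimal MPI sketch: rank 0 shares one integer with every task.
       Illustrative only; build with an MPI compiler wrapper, e.g.:
           mpicc hello_mpi.c -o hello_mpi
           mpiexec -n 4 ./hello_mpi                                   */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        int value = 0;

        MPI_Init(&argc, &argv);               /* start the MPI runtime  */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this task's ID         */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of tasks  */

        if (rank == 0)
            value = 42;                       /* data originates on rank 0 */

        /* share the value with every task in the job */
        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

        printf("Rank %d of %d received value %d\n", rank, size, value);

        MPI_Finalize();                       /* shut down the MPI runtime */
        return 0;
    }

Run with four tasks, each rank prints the same value, showing how a single collective call moves data among all tasks regardless of the underlying interconnect.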
MVAPICH2 is a popular implementation of the MPI-3 standard prevalent on InfiniBand-based systems. In addition to OSC's HP-Intel Oakley Cluster and IBM 1350 Glenn Cluster, Dr. Panda's communications library powers several of the world's fastest supercomputers, including the Stampede system at the Texas Advanced Computing Center at The University of Texas at Austin; the Pleiades supercomputer at the NASA Advanced Supercomputing facility at Ames Research Center near Mountain View, California; and the Tsubame 2.0 cluster at the Global Scientific Information and Computing Center at the Tokyo Institute of Technology.
The MVAPICH Users Group event is sponsored by OSC, Mellanox Technologies, Advanced Clustering Technologies and The Ohio State University.