NVIDIA has also released performance data showing that NVIDIA Tesla GPUs have improved HPC application performance 3x over the Kepler architecture released two years ago - a gain well beyond what Moore's Law would have predicted, even before it began slowing in recent years.
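As a back-of-the-envelope check of that claim, the sketch below compares the quoted 3x gain against the common doubling-every-two-years formulation of Moore's Law (the doubling period is an assumption of this sketch, not a figure from the article):

```python
def moores_law_factor(years, doubling_period=2.0):
    """Predicted performance growth factor under the common Moore's Law
    formulation: performance doubles every `doubling_period` years."""
    return 2.0 ** (years / doubling_period)

# Figures from the text: a 3x HPC speedup over an architecture
# released two years earlier.
predicted = moores_law_factor(2)   # Moore's Law predicts ~2x in 2 years
observed = 3.0                     # claimed Tesla speedup over Kepler

print(f"Moore's Law prediction over 2 years: {predicted:.1f}x")
print(f"Observed speedup: {observed:.1f}x")
```

Over a two-year span the baseline prediction is 2x, so the quoted 3x does indeed outpace it.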
Additionally, NVIDIA's Tesla V100 GPU accelerators - which combine AI and traditional HPC applications on a single platform - are projected to provide the U.S. Department of Energy's (DOE's) Summit supercomputer with 200 petaflops of 64-bit floating point performance and over 3 exaflops of AI performance when it comes online later this year.
The Green500 list, released at the ISC High Performance conference in Frankfurt, is topped by the new TSUBAME 3.0 system at the Tokyo Institute of Technology, powered by NVIDIA Tesla P100 GPUs. It hit a record 14.1 gigaflops per watt - 50 percent higher efficiency than the previous top system, NVIDIA's own SATURNV, which ranks No. 10 on the latest list.
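The Green500 metric is simply sustained performance divided by power draw. A minimal sketch of that calculation, using only the figures quoted above (the previous record is back-solved from the stated 50 percent improvement, not taken from the list itself):

```python
def gflops_per_watt(performance_gflops, power_watts):
    """Green500 efficiency metric: sustained GFLOPS divided by power draw."""
    return performance_gflops / power_watts

def percent_improvement(new, old):
    """Relative gain of `new` over `old`, in percent."""
    return (new / old - 1.0) * 100.0

# Figures from the text: TSUBAME 3.0 at a record 14.1 GFLOPS/W,
# described as 50 percent more efficient than the previous leader.
tsubame = 14.1
previous = tsubame / 1.5  # implied prior record, ~9.4 GFLOPS/W (back-solved)

print(f"Implied previous record: {previous:.1f} GFLOPS/W")
print(f"Improvement: {percent_improvement(tsubame, previous):.0f}%")
```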
Spots two through six on the new list are clusters housed at Yahoo Japan, Japan's National Institute of Advanced Industrial Science and Technology, Japan's Center for Advanced Intelligence Project (RIKEN), the University of Cambridge and the Swiss National Supercomputing Centre (CSCS), home to the newly crowned fastest supercomputer in Europe, Piz Daint. Other key NVIDIA-powered systems among the top 13 include those at E4 Computer Engineering, the University of Oxford and the University of Tokyo.
Systems built on NVIDIA's DGX-1 AI supercomputer - which combines NVIDIA Tesla GPU accelerators with a fully optimized AI software package - include RAIDEN at RIKEN, JADE at the University of Oxford, a hybrid cluster at a major social media and technology services company and NVIDIA's own SATURNV.
"Researchers taking on the world's greatest challenges are seeking a powerful, unified computing architecture to take advantage of HPC and the latest advances in AI," stated Ian Buck, general manager of Accelerated Computing at NVIDIA. "Our AI supercomputing platform provides one architecture for computational and data science, giving the most brilliant minds a combination of capabilities to accelerate the rate of innovation and solve the unsolvable."
"With the TSUBAME 3.0 supercomputer, our goal was to deliver a single powerful platform for both HPC and AI with optimal energy efficiency, as one of Japan's flagship national supercomputers," stated Professor Satoshi Matsuoka of the Tokyo Institute of Technology. "The most important point is that we achieved this result with a top-tier, multi-petascale production machine. NVIDIA Tesla P100 GPUs allowed us to excel at both objectives, so we can provide this revolutionary AI supercomputing platform to accelerate scientific research and education in the country."
NVIDIA revealed progress toward achieving exascale levels of performance, with anticipated leaps in speed, efficiency and AI computing capability for the Summit supercomputer, scheduled for delivery later this year to the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility at Oak Ridge National Laboratory.
Featuring Tesla V100 GPU accelerators, Summit is projected to deliver 200 petaflops of performance - compared with 93 petaflops for the world's current fastest system, China's TaihuLight. Additionally, Summit is expected to have strong AI computing capabilities, achieving more than 3 exaflops of half-precision tensor operations.
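Putting those projections side by side (a sketch using only the figures quoted above; all values are projected or reported peak numbers, not measured results):

```python
PFLOPS_PER_EFLOPS = 1000  # 1 exaflop = 1,000 petaflops

summit_fp64_pflops = 200  # Summit's projected double-precision performance
taihulight_pflops = 93    # current fastest system, per the text
summit_ai_eflops = 3      # Summit's projected half-precision tensor ops

speedup = summit_fp64_pflops / taihulight_pflops
ai_pflops = summit_ai_eflops * PFLOPS_PER_EFLOPS

print(f"Projected FP64 speedup over TaihuLight: {speedup:.2f}x")
print(f"AI performance: {ai_pflops:,} petaflops at half precision")
```

In other words, Summit is projected to be a bit over twice as fast as TaihuLight in double precision, with its half-precision AI throughput another order of magnitude beyond that.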
"AI is extending HPC and together they are accelerating the pace of innovation to help solve some of the world's most important challenges," stated Jeff Nichols, associate laboratory director of the Computing and Computational Sciences Directorate at Oak Ridge National Laboratory. "Oak Ridge's pre-exascale supercomputer, Summit, is powered by NVIDIA Volta GPUs that provide a single unified architecture that excels at both AI and HPC. We believe AI supercomputing will unleash breakthrough results for researchers and scientists."
The extreme computing capabilities of the V100 GPU accelerators will be available later this year as a service through several of the world's leading cloud service providers. Companies that have stated their enthusiasm and planned support for Volta-based services include Amazon Web Services, Baidu, Google Cloud Platform, Microsoft Azure and Tencent.
To extend the reach of Volta, NVIDIA also announced it is making new Tesla V100 GPU accelerators available in a PCIe form factor for standard servers. With both PCIe-based systems and the previously announced systems using NVIDIA NVLink interconnect technology coming to market, Volta promises to revolutionize HPC and bring groundbreaking AI technology to supercomputers, enterprises and clouds.
Specifications of the PCIe form factor include:
NVIDIA Tesla V100 GPU accelerators for PCIe-based systems are expected to be available later this year from NVIDIA reseller partners and manufacturers, including Hewlett Packard Enterprise (HPE).
"HPE is excited to complement our purpose-built HPE Apollo systems innovation for deep learning and AI with the unique, industry-leading strengths of the NVIDIA Tesla V100 technology architecture to accelerate insights and intelligence for our customers," stated Bill Mannel, vice president and general manager of HPC and AI at Hewlett Packard Enterprise. "HPE will support NVIDIA Volta with PCIe interconnects in three different systems in our portfolio and provide early access to NVLink 2.0 systems to address emerging customer demand."