That efficiency is key to building machines capable of reaching exascale speeds - that's 1 quintillion, or 1 billion billion, floating-point operations per second. Such a machine could help design efficient new combustion engines, model clean-burning fusion reactors, and achieve new breakthroughs in medical research.
GPUs - with their massively parallel architecture - have long powered some of the world's fastest supercomputers. More recently, they've been key to an AI boom that's given us machines that perceive the world as we do, understand our language, and learn from examples in ways that exceed our own.
AI can give every company a competitive advantage. That's why NVIDIA has assembled the world's most efficient supercomputer - and one of its most powerful - to aid its own work.
Assembled by a team of a dozen engineers using 124 DGX-1s - the AI supercomputer in a box that NVIDIA unveiled in April - SaturnV helps the company build the autonomous driving software that's a key part of the NVIDIA DRIVE PX 2 self-driving vehicle platform.
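To put that scale in perspective, here's a quick back-of-the-envelope tally. It uses the node count above plus two figures that are assumptions on our part rather than numbers from this story: eight Tesla P100 GPUs per DGX-1 (described below) and DGX-1's widely published peak of roughly 170 teraflops at half precision.

```python
# Rough tally of SaturnV's scale: 124 DGX-1 nodes, each with eight
# Tesla P100 GPUs. The ~170 TFLOPS FP16 peak per node is NVIDIA's
# published DGX-1 figure, assumed here for illustration.
nodes = 124
gpus_per_node = 8
node_fp16_tflops = 170  # assumed per-node half-precision peak

print(f"{nodes * gpus_per_node} Tesla P100 GPUs in total")         # 992
print(f"~{nodes * node_fp16_tflops / 1000:.0f} PFLOPS FP16 peak")  # ~21
```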
NVIDIA is also training neural networks to understand chip design and very large-scale integration (VLSI), so its engineers can work more quickly and efficiently. In other words, NVIDIA is using GPUs to help design GPUs.
Most importantly, SaturnV's power will give the company the ability to design - and train - new deep learning networks quickly.
Such systems can unlock the power of AI for enterprises, research groups, and academia.
DGX-1 is an appliance that integrates deep learning software, development tools, and eight Tesla P100 GPUs - based on the new Pascal architecture - to pack computing power equal to that of 250 x86 servers into a device about the size of a stovetop.
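As a rough sanity check of that 250-server comparison, the arithmetic works out as sketched below. Both peak figures are assumptions for illustration: a published half-precision peak of about 21.2 teraflops per P100 and roughly 0.7 teraflops for a typical dual-socket x86 server of the era.

```python
# Sanity-checking the "250 x86 servers" claim. Per-GPU and per-server
# peaks are assumptions: ~21.2 TFLOPS FP16 per Tesla P100 (SXM2) and
# ~0.7 TFLOPS for a typical dual-socket x86 server of the era.
gpus = 8
p100_fp16_tflops = 21.2
server_tflops = 0.7

dgx1_tflops = gpus * p100_fp16_tflops   # ~170 TFLOPS FP16 aggregate
print(f"DGX-1 peak: ~{dgx1_tflops:.0f} TFLOPS FP16")
print(f"Equivalent to ~{dgx1_tflops / server_tflops:.0f} such servers")
```

Under those assumptions the ratio lands at roughly 240 servers, in the same ballpark as the figure NVIDIA cites.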
Since then, DGX-1 has been adopted by teams looking to harness AI in a wide variety of settings: