The new DRIVE PX 2 delivers 8 teraflops of processing power - the equivalent of 150 MacBook Pros. And it's the size of a lunchbox, in contrast to other autonomous-driving technology in use today, which can fill the entire trunk of a mid-sized sedan.
"Self-driving cars will revolutionize society," Jen-Hsun Huang stated at the beginning of his talk. "And NVIDIA's vision is to enable them."
Volvo - known worldwide for safety and reliability - will be the first automaker to deploy DRIVE PX 2. In the world's first public trial of autonomous driving, the Swedish automaker will next year lease 100 XC90 luxury SUVs outfitted with DRIVE PX 2 technology to customers. The technology will let the vehicles drive autonomously around Volvo's hometown of Gothenburg, and semi-autonomously elsewhere.
DRIVE PX 2 has the power to harness a host of sensors to get a 360-degree view of the environment around the car. "The rear-view mirror is history," Jen-Hsun Huang said.
Not so long ago, pundits had questioned the safety of technology in cars. Now, with Volvo incorporating autonomous vehicles into its plan to end traffic fatalities, that script has been flipped. Autonomous cars may be vastly safer than human-piloted vehicles.
Car crashes - an estimated 93 percent of them caused by human error - kill 1.3 million people each year. More American teenagers die from texting while driving than from any other cause, including drunk driving.
There's also a productivity issue. Americans waste some 5.5 billion hours each year sitting in traffic, costing the U.S. about $121 billion, according to an Urban Mobility Report from Texas A&M. And inefficient use of roads wastes even vaster sums spent on infrastructure.
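As a quick sanity check, the two figures from the report together imply an hourly cost of sitting in traffic. The division below is my own back-of-envelope arithmetic, not a number from the report itself:

```python
# Back-of-envelope check of the congestion figures cited above
# (5.5 billion hours and $121 billion, per the Texas A&M report).
hours_wasted = 5.5e9   # hours lost to U.S. traffic per year
total_cost = 121e9     # estimated annual cost in dollars

cost_per_hour = total_cost / hours_wasted
print(f"Implied cost per hour stuck in traffic: ${cost_per_hour:.2f}")
# → Implied cost per hour stuck in traffic: $22.00
```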
Self-driving solutions based on computer vision can provide some answers. But tackling the infinite permutations that a driver needs to react to - stray pets, swerving cars, slashing rain, road construction crews - is far too complex a programming challenge.
Deep learning enabled by NVIDIA technology can address these challenges. A highly trained deep neural network - residing on supercomputers in the cloud - captures the experience of many tens of thousands of hours of road time.
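The core idea can be caricatured in a few lines: a network adjusts its weights until its output matches recorded driving behavior. The toy sketch below trains a tiny network on synthetic "sensor" data with plain gradient descent - purely illustrative, and in no way NVIDIA's actual stack, which trains deep convolutional networks on vast fleets of road data:

```python
import numpy as np

# Toy sketch: a small network learns a driving response (a steering value)
# from stand-in "sensor" snapshots. All data here is synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))        # pretend sensor feature vectors
y = np.tanh(X @ rng.normal(size=8))   # pretend "expert driver" steering

# One hidden layer, trained with full-batch gradient descent.
W1 = rng.normal(scale=0.1, size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 1)); b2 = np.zeros(1)
lr = 0.05

def forward():
    h = np.tanh(X @ W1 + b1)
    return h, (h @ W2 + b2).ravel()

_, pred = forward()
mse_init = float(np.mean((pred - y) ** 2))

for _ in range(500):
    h, pred = forward()
    g_pred = 2 * (pred - y)[:, None] / len(X)   # dLoss/dpred (MSE)
    g_h = g_pred @ W2.T * (1 - h ** 2)          # backprop through tanh
    W2 -= lr * h.T @ g_pred; b2 -= lr * g_pred.sum(0)
    W1 -= lr * X.T @ g_h;    b1 -= lr * g_h.sum(0)

_, pred = forward()
mse_final = float(np.mean((pred - y) ** 2))
print(f"MSE before training: {mse_init:.3f}, after: {mse_final:.3f}")
```

The error falls as training proceeds - the same mechanism, scaled up enormously, is what lets a network "capture" road experience.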
Jen-Hsun Huang noted that a number of automotive companies are already using NVIDIA's deep learning technology to power their efforts, getting speed-ups of 30-40x in training their networks compared with other technology. BMW, Daimler and Ford are among them, along with innovative Japanese start-ups like Preferred Networks and ZMP. And Audi said it was able in four hours to do training that took it two years with a competing solution.
NVIDIA's end-to-end solution for deep learning starts with NVIDIA DIGITS, a platform for training deep neural networks by exposing them to data collected on the road. On the other end is DRIVE PX 2, which draws on this training to make the inferences that let the car progress safely down the road. In the middle is NVIDIA DriveWorks, a suite of software tools, libraries and modules that accelerates development and testing of autonomous vehicles.
DriveWorks handles sensor calibration, acquisition of surround data, synchronization and recording, then processes the streams of sensor data through a complex pipeline of algorithms running on all of the DRIVE PX 2's specialized and general-purpose processors.
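A hypothetical sketch of such a staged pipeline, loosely following the calibrate, synchronize, process flow described above. None of these function names are actual DriveWorks APIs:

```python
# Illustrative-only sketch of a staged sensor pipeline; the stage names
# are invented for this example, not DriveWorks functions.

def calibrate(frame):
    # apply per-sensor calibration (stubbed out here)
    frame["calibrated"] = True
    return frame

def synchronize(frames):
    # align frames from different sensors to a common time base
    ref = min(f["t"] for f in frames)
    for f in frames:
        f["t_offset"] = f["t"] - ref
    return frames

def detect_obstacles(frames):
    # stand-in for the heavy perception algorithms that would run on
    # the DRIVE PX 2's specialized and general-purpose processors
    return [f["source"] for f in frames if f.get("object")]

frames = [
    {"source": "camera", "t": 0.033, "object": "car"},
    {"source": "radar",  "t": 0.031, "object": "car"},
    {"source": "lidar",  "t": 0.030, "object": None},
]
frames = [calibrate(f) for f in frames]
frames = synchronize(frames)
print(detect_obstacles(frames))   # → ['camera', 'radar']
```

The real pipeline is of course vastly more complex, but the shape - acquire, calibrate, synchronize, then run perception - is the same.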
During the event, Jen-Hsun Huang reminded the audience that machines are already beating humans at tasks once considered impossible for computers, such as image recognition. Systems trained with deep learning can now correctly classify images more than 96 percent of the time, exceeding what humans can do on similar tasks.
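The metric behind that figure is straightforward top-1 classification accuracy - the fraction of images whose predicted label matches the true one. A toy illustration with made-up labels:

```python
# Toy illustration of top-1 accuracy; the labels are invented.
predicted = ["cat", "dog", "car", "car",   "dog"]
actual    = ["cat", "dog", "car", "truck", "dog"]

accuracy = sum(p == a for p, a in zip(predicted, actual)) / len(actual)
print(f"top-1 accuracy: {accuracy:.0%}")   # → top-1 accuracy: 80%
```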
He used the event to show what deep learning can do for autonomous vehicles.
A series of demos drove this home, showing in three steps how DRIVE PX 2 harnesses a host of sensors - lidar, radar, cameras and ultrasonic - to understand the world around it, in real time, and plan a safe and efficient path forward.
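The planning step at the end of that chain can also be caricatured: once the fused sensors produce an occupancy grid of obstacles, a search over free cells yields a collision-free path. Below is a toy breadth-first search sketch - real planners are far more sophisticated, and the grid here is invented:

```python
from collections import deque

# Minimal path-planning sketch: 1 marks a cell the fused sensors report
# as occupied, 0 marks free space. Purely illustrative.
grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]

def shortest_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None

path = shortest_path(grid, (0, 0), (2, 0))
print(path)
```

Breadth-first search guarantees the shortest route through free cells; a production planner must additionally reason about vehicle dynamics, moving obstacles and comfort.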
The highlight of the demos was what Jen-Hsun Huang called the world's largest car infotainment system - an elegant block the size of a medium-sized bedroom wall, mounted with a wide horizontal screen and a tall vertical one.
While a third screen showed the scene that a driver would take in, the wide demo screen showed how the car - using deep learning and sensor fusion - "viewed" the very same scene in real time, stitched together from its array of sensors. On its right, the huge portrait-oriented screen showed a highly precise map marking the car's progress.