Deep Learning Inference on P40 GPUs

22 Mar 2017 Round Rock - On a Dell EMC innovation blog, several authors describe their experiences with deep learning inference performance using the NVIDIA TensorRT library on P40 and M40 GPUs. They found that INT8 inference on the P40 is about 3x faster than FP32 inference on the same GPU, and about 4.4x faster than FP32 inference on the previous-generation M40. Because inference workloads require no inter-GPU communication or synchronization, performance scales linearly across multiple GPUs.
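For readers curious how the INT8 mode discussed above is selected, the following is a minimal C++ sketch of requesting reduced-precision inference through the TensorRT builder, not the Dell EMC authors' actual benchmark code. It is written against a later TensorRT release than the 2017 version benchmarked (the era-appropriate call was builder->setInt8Mode(true)); the Logger class and fallback message are illustrative assumptions, and a complete build would also attach a network definition and an INT8 calibrator.

    #include <iostream>
    #include "NvInfer.h"

    // Minimal logger; the TensorRT builder requires an ILogger implementation.
    class Logger : public nvinfer1::ILogger
    {
        void log(Severity severity, const char* msg) noexcept override
        {
            if (severity <= Severity::kWARNING)
                std::cout << msg << std::endl;
        }
    };

    int main()
    {
        Logger logger;
        nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(logger);
        nvinfer1::IBuilderConfig* config = builder->createBuilderConfig();

        // INT8 kernels need hardware support (the DP4A instruction,
        // which the P40 provides).
        if (builder->platformHasFastInt8())
        {
            config->setFlag(nvinfer1::BuilderFlag::kINT8);
            // A complete build would also supply an IInt8Calibrator so
            // TensorRT can derive the scale factors that map FP32
            // activations into 8-bit integer range.
        }
        else
        {
            std::cout << "No fast INT8 support; building in FP32 instead."
                      << std::endl;
        }

        // ... define the network, then build and deploy the engine ...

        delete config;
        delete builder;
        return 0;
    }

The precision flag only permits INT8 kernels; TensorRT still chooses the fastest available implementation per layer, which is why the observed speedup depends on the GPU generation.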

Ad Emmen