Patrick O'Neill, HPC Systems Engineer, Boston Labs stated: "Using an NVIDIA DGX-1 as a client during testing we were able to achieve a throughput of 44GB/s for sequential reads and 840K IOPS for random reads. Both figures demonstrate the impressive performance that can be realised when using Flash-IO Talyn under varying workloads, even with only a single-client set-up."
The solution under test consisted of four servers, each containing four NVMe drives (16 in total).
Yaniv Romem, CTO for Excelero, added: "Modern GPUs used in AI and ML have an amazing appetite for data - up to 16GB/s per GPU. Starving that appetite with slow storage, or wasting time copying data back and forth wastes the most precious (expensive) resource you've purchased. Talyn is amazing because it gives you a building block to feed virtually any size NVIDIA GPU farm with scalable simplicity. The combination of NVMesh performance on the optimized Talyn hardware platform gives you the affordability of commodity hardware but with the ease of deployment of proprietary solutions."
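The figures quoted above lend themselves to a quick back-of-envelope check. The short sketch below (illustrative only, using the article's own numbers) compares the peak aggregate appetite of a DGX-1's eight GPUs against the measured 44GB/s sequential-read throughput of the 16-drive configuration:

```python
# Back-of-envelope arithmetic using the figures quoted in this article.
gpus_per_dgx1 = 8        # NVIDIA DGX-1 hosts eight Tesla V100 GPUs
gbps_per_gpu = 16        # "up to 16GB/s per GPU" quoted appetite
measured_seq_read = 44   # GB/s sequential read measured with one DGX-1 client
nvme_drives = 16         # four servers x four NVMe drives

peak_gpu_demand = gpus_per_dgx1 * gbps_per_gpu          # worst-case demand
per_drive_throughput = measured_seq_read / nvme_drives  # average contribution

print(f"Peak GPU demand: {peak_gpu_demand} GB/s")              # 128 GB/s
print(f"Per-drive sequential read: {per_drive_throughput} GB/s")  # 2.75 GB/s
```

In other words, a single four-server building block already delivers roughly a third of the theoretical worst-case demand of one fully loaded DGX-1, which is the scaling argument behind adding further Talyn nodes.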
The Boston Flash-IO Talyn was demonstrated with Excelero alongside an NVIDIA DGX-1 at ISC 2018.
This solution, based on Supermicro hardware and powered by Micron and Mellanox components, represents a significant leap forward for NVMe over Fabrics, extending the promise of software-defined storage to low-latency workloads. By leveraging server-side NVMe-based flash storage, it delivers a scalable converged infrastructure for next-level performance. In a world where data is growing rapidly and the way it is captured, processed and transformed must be reconsidered, the system's main aim is to accelerate the data feed to NVIDIA GPU servers, ensuring each GPU can operate at its full potential.
Designed as a dynamic building block, the system offers a cost-effective way of testing the features and performance of NVMe over Fabrics using Excelero NVMesh. The timing could not be more perfect: deep learning solutions now process data faster than typical storage bandwidth can supply it, making this a fitting exhibit for a show focused on the advancement of HPC solutions.
The NVIDIA DGX-1 is an AI supercomputing system powered by eight of the world's most advanced data centre GPUs: the NVIDIA Tesla V100 with Tensor Core architecture, incorporating next-generation NVIDIA NVLink. DGX-1 is purpose-built for the unique demands of AI and deep learning, and leverages the NVIDIA GPU Cloud Deep Learning Software Stack to deliver maximized GPU-accelerated performance.