11 Apr 2017 Richardson - Nimbix, a public Cloud provider for supercomputing-class workloads, deep learning and AI, has made high-performance NVIDIA Pascal GPUs available through the NVIDIA DGX-1 AI supercomputer in the Nimbix Cloud. For an on-demand rate, customers gain access to the industry-leading native bandwidth of eight NVLink-interconnected NVIDIA Tesla P100 GPUs to develop and launch state-of-the-art machine learning workflows, accelerated analytics and a host of other GPU-powered applications.
The Nimbix Cloud offers the most diverse set of GPU-powered machines available from a public Cloud provider, spanning the NVIDIA portfolio and supporting configurations for both Intel x86 and IBM POWER8 processors to deliver the best performance and economics available for both enterprises and developers. Nimbix Cloud machines are interconnected with industry-leading 56Gbps FDR and 100Gbps EDR InfiniBand for optimal GPU cluster performance.
"Nimbix has tremendous experience in GPU Cloud computing, going all the way back to NVIDIA's Fermi architecture," stated Steve Hebert, CEO of Nimbix. "We look forward to accelerating deep learning and analytics applications for customers seeking the latest-generation GPU technology available in a public Cloud."
"Combining the optimized performance of NVIDIA DGX-1 with Nimbix's Cloud platform gives customers a flexible option to run their most challenging deep learning and AI workloads in an easy-to-use Cloud system," stated Charlie Doyle, senior director for DGX-1, NVIDIA.
In addition to the rich catalogue of DGX turn-key workflows for deep learning, developers can use the PushToCompute feature of the JARVICE platform to import the latest versions of their custom applications into the Nimbix Cloud and make them available for consumption at scale immediately. Each application, along with its dependencies, executes in JARVICE's container runtime environment, which provides superior performance and scale. This includes sub-second launch times, faster execution, seamless access to supercomputing GPUs, automated heterogeneous data management, and rapid workflow deployment across multiple compute nodes in either a parallel or distributed paradigm. PushToCompute also facilitates continuous integration and continuous deployment (CI/CD) across the entire life cycle of containerized applications.
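A container-based workflow of the kind described above typically starts from a standard container image that bundles the application with its dependencies. As an illustration only (the base image, file names, and entry point here are hypothetical examples, not Nimbix or JARVICE specifics), a GPU application container might be defined like this:

```dockerfile
# Hypothetical sketch of a containerized GPU application.
# Era-appropriate CUDA base image providing the GPU runtime libraries.
FROM nvidia/cuda:8.0-cudnn5-runtime

# Install the application's dependencies inside the image
# so it is self-contained and portable.
RUN apt-get update && apt-get install -y python-pip && \
    pip install tensorflow-gpu

# Bundle the custom application code (train.py is illustrative).
COPY train.py /app/train.py

# Entry point invoked when the workflow launches.
CMD ["python", "/app/train.py"]
```

Once built and pushed to a container registry with the standard Docker tooling (`docker build`, `docker push`), an image like this is the kind of artifact a registry-driven deployment feature can pick up and run at scale, which is what makes the CI/CD loop for containerized applications straightforward.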