vComputeServer gives data centre administrators the option to run AI workloads on GPU servers in virtualized environments for improved security, utilization and manageability. IT administrators can use hypervisor virtualization tools like VMware vSphere, including vCenter and vMotion, to manage all their data centre applications, including AI applications running on NVIDIA GPUs.
Many companies deploy GPUs in the data centre, but GPU-accelerated workloads such as AI training and inference typically run on bare metal. These GPU servers are often siloed and must be managed separately, which limits utilization and flexibility.
With vComputeServer, IT admins can streamline management of GPU-accelerated virtualized servers while retaining existing workflows and lowering overall operational costs. A vComputeServer deployment with four NVIDIA V100 GPUs runs deep learning workloads up to 50x faster than CPU-only servers, delivering performance close to bare metal.
Today's announcement brings support to VMware vSphere along with existing support for KVM-based hypervisors including Red Hat and Nutanix. This allows admins to use the same management tools for their GPU clusters as they do for the rest of their data centre.
By expanding the vGPU portfolio with NVIDIA vComputeServer, NVIDIA is adding support for data analytics, machine learning, AI, deep learning, HPC and other server workloads. The vGPU portfolio also includes virtual desktop offerings: NVIDIA GRID Virtual PC and GRID Virtual Apps for knowledge workers, and Quadro Virtual Data Center Workstation for professional graphics.
To maximize utilization and affordability, features of NVIDIA vComputeServer include:

- GPU sharing, so multiple virtual machines can be powered by a single GPU.
- GPU aggregation, so a single virtual machine can be powered by one or more GPUs.
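As a rough sketch of how GPU sharing surfaces to an administrator, in VMware vSphere a vGPU profile is attached to a VM as a shared PCI device, which appears in the VM's .vmx configuration along these lines (the profile name grid_v100-8q is illustrative; available profiles depend on the GPU and license):

```
pciPassthru0.present = "TRUE"
# vGPU profile assigned to this VM; an "-8q" profile carves an
# 8 GB slice of framebuffer, letting several VMs share one V100
pciPassthru0.vgpu = "grid_v100-8q"
```

Choosing a smaller profile (e.g. a 4 GB slice) increases the number of VMs a single GPU can serve, at the cost of per-VM framebuffer.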
NVIDIA NGC, the hub for GPU-optimized software for deep learning, machine learning and HPC, offers over 150 containers, pre-trained models, training scripts and workflows to accelerate AI from concept to production, including RAPIDS, our CUDA-accelerated data science software.
RAPIDS offers a range of open-source libraries to accelerate the entire data science pipeline, including data loading, ETL, model training and inference. This enables data scientists to get their work done more quickly and significantly expands the types of models they're able to create.
All NGC software can be deployed on virtualized environments like VMware vSphere with vComputeServer.
IT administrators can use hypervisor virtualization tools like VMware vSphere to manage all their NGC containers in VMs running on NVIDIA GPUs.
In addition, NVIDIA helps IT roll out GPU servers faster in production with validated NGC-Ready servers. And enterprise-grade support provides users and administrators with direct access to NVIDIA's experts for NGC software, minimizing risk and improving productivity.
Leading industry partners have shown support for NVIDIA vComputeServer, including Dell, Cisco and VMware, among others.
NVIDIA vComputeServer is available starting in August.