Delivering 2.5 petaflops of AI performance, DGX Station A100 is the only workgroup server with four of the latest NVIDIA A100 Tensor Core GPUs fully interconnected with NVIDIA NVLink, providing up to 320GB of GPU memory to speed breakthroughs in enterprise data science and AI.
DGX Station A100 is also the only workgroup server that supports NVIDIA Multi-Instance GPU (MIG) technology. With MIG, a single DGX Station A100 provides up to 28 separate GPU instances to run parallel jobs and support multiple users without impacting system performance.
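The 28-instance figure follows directly from the A100's documented MIG limit of seven instances per GPU. A minimal sketch of that arithmetic (the function name is illustrative, not an NVIDIA API):

```python
# Illustrative arithmetic only: MIG on the NVIDIA A100 can partition each
# GPU into up to seven independent instances, so a four-GPU DGX Station
# A100 can expose at most 4 x 7 = 28 instances.
GPUS_PER_STATION = 4
MAX_MIG_INSTANCES_PER_A100 = 7

def max_mig_instances(num_gpus: int,
                      per_gpu: int = MAX_MIG_INSTANCES_PER_A100) -> int:
    """Upper bound on concurrent MIG instances across all GPUs."""
    return num_gpus * per_gpu

print(max_mig_instances(GPUS_PER_STATION))  # → 28
```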
"DGX Station A100 brings AI out of the data center with a server-class system that can plug in anywhere", stated Charlie Boyle, vice president and general manager of DGX systems at NVIDIA. "Teams of data science and AI researchers can accelerate their work using the same software stack as NVIDIA DGX A100 systems, enabling them to easily scale from development to deployment."
Organisations around the world have adopted DGX Station to power AI and data science across industries such as education, financial services, government, healthcare and retail.
While DGX Station A100 does not require data-centre-grade power or cooling, it is a server-class system that features the same remote management capabilities as NVIDIA DGX A100 data centre systems. System administrators can easily perform any management tasks over a remote connection when data scientists and researchers are working at home or in labs.
DGX Station A100 is available with four 80GB or 40GB NVIDIA A100 Tensor Core GPUs, providing options for data science and AI research teams to select a system according to their unique workloads and budgets.
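One rough way to reason about the 80GB-versus-40GB choice is whether a model's weights alone fit in a single GPU's memory. The helper below is a hypothetical back-of-the-envelope sketch, not an NVIDIA sizing tool; real capacity planning must also budget for activations, optimizer state and framework overhead:

```python
# Hypothetical sizing sketch: check whether a model's raw weight tensor
# fits in a single GPU's memory. Activations, optimizer state and
# framework overhead are deliberately ignored here.
def weights_fit(num_params: float, bytes_per_param: int, gpu_mem_gb: float) -> bool:
    """True if num_params weights at bytes_per_param each fit in gpu_mem_gb GB."""
    required_gb = num_params * bytes_per_param / 1e9
    return required_gb <= gpu_mem_gb

# A hypothetical 30-billion-parameter model in FP16 (2 bytes/param)
# needs roughly 60 GB for weights alone:
print(weights_fit(30e9, 2, 40))  # → False: exceeds a 40GB A100
print(weights_fit(30e9, 2, 80))  # → True: fits in an 80GB A100
```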
On complex conversational AI workloads such as BERT Large inference, DGX Station A100 is more than 4x faster than the previous-generation DGX Station, and it delivers nearly a 3x performance boost on BERT Large training.
For advanced data centre workloads, DGX A100 systems will be available with the new NVIDIA A100 80GB GPUs, doubling GPU memory capacity to 640GB per system to enable AI teams to boost accuracy with larger datasets and models.
The new NVIDIA DGX A100 640GB systems can also be integrated into the NVIDIA DGX SuperPOD Solution for Enterprise, allowing organisations to build, train and deploy massive AI models on turnkey AI supercomputers available in units of 20 DGX A100 systems.
The first instalments of NVIDIA DGX SuperPOD systems with DGX A100 640GB will include the Cambridge-1 supercomputer being installed in the U.K. to supercharge healthcare research, as well as the new University of Florida HiPerGator AI supercomputer that will power AI-infused discovery across the Sunshine State.
NVIDIA DGX Station A100 and NVIDIA DGX A100 640GB systems will be available this quarter through NVIDIA Partner Network resellers worldwide. An upgrade option is available for NVIDIA DGX A100 320GB customers.
You can register for a DGX Station webinar on December 2.