Primeur weekly 2020-05-18

Focus

SiPearl's chip will be a platform open to other start-ups to develop accelerators for - An interview with Philippe Notton from SiPearl ...

Exascale supercomputing

Atos launches first supercomputer equipped with NVIDIA A100 GPU ...

Quantum computing

Start of the Fraunhofer Competence Center "Quantum Computing Baden-Württemberg" ...

ADVA brings post-quantum security to packet networks ...

Atos delivers its Quantum Learning Machine to Japan ...

Machine learning cracks quantum chemistry conundrum ...

Making quantum 'waves' in ultrathin materials ...

Seeqc UK awarded GBP 1,8 million in grants to advance quantum computing initiatives ...

Rising investments in quantum technology to boost market growth ...

Total is exploring quantum algorithms to improve CO2 capture ...

Middleware

Mentor Graphics and the University of Basel join the OpenMP effort ...

Hardware

NVIDIA's new Ampere data centre GPU in full production ...

Karlsruhe Institute of Technology procures new supercomputer ...

DDN announces A3I support for NVIDIA DGX A100 ...

Yvonne Walker recognized as one of CRN's 2020 Women of the Channel ...

Supermicro expands portfolio with fully integrated NVIDIA A100 GPU-powered systems delivering 5 Petaflop/s of AI performance in a single 4U server ...

Artificial synapses on design ...

Light, fantastic: the path ahead for faster, smaller computer processors ...

OpenFabrics Alliance (OFA) and Gen-Z Consortium announce MoU agreement ...

Tachyum achieves 90 percent of silicon laid for its Prodigy Universal Processor ...

Intel Capital invests $132 million in 11 disruptive technology start-ups ...

Inspur releases five new AI servers powered by NVIDIA A100 Tensor Core GPUs ...

Research and Markets issues Global High-Performance Computing as a Service Market Analysis and Industry Forecasts to 2026 ...

Asperitas and maincubes partner to deliver immersion cooling solutions in dedicated colocation suites ...

Delivery of supercomputer Fugaku completed ...

Applications

Supercomputer simulations help advance electrochemical reaction research ...

3D-printed nuclear reactor promises faster, more economical path to nuclear energy ...

GCS centres support research to mitigate impact of COVID-19 pandemic ...

NIST scientists create new recipe for single-atom transistors ...

ACM service awards recognize leaders who strengthen the computing community ...

New research launched on airborne virus transmission in buildings ...

Seeing the universe through new lenses ...

Liqid delivers industry's fastest single-socket server ...

National Science Foundation grant backs funcX: A smart, automated delegator for computational research ...

Educating the next generation of supercomputer users with Blue Waters ...

The Cloud

Innovium introduces TERALYNX 8, world's highest performance programmable switch for data centre networks with 25,6 Tbps throughput and support for 112 Gbps SerDes I/O ...

Wiwynn unveils standalone rack-level liquid cooling solution for OCP ORV3 at 2020 OCP Virtual Summit ...

Crowd Machine Platform is powered by Oracle Cloud and now available in the Oracle Cloud Marketplace ...

NVIDIA's new Ampere data centre GPU in full production


14 May 2020 Santa Clara - The first GPU based on the NVIDIA Ampere architecture, the NVIDIA A100, is in full production and shipping to customers worldwide. The A100 draws on design breakthroughs in the NVIDIA Ampere architecture - offering the company's largest leap in performance to date within its eight generations of GPUs - to unify AI training and inference and boost performance by up to 20x over its predecessors. A universal workload accelerator, the A100 is also built for data analytics, scientific computing and Cloud graphics.

"The powerful trends of Cloud computing and AI are driving a tectonic shift in data centre designs so that what was once a sea of CPU-only servers is now GPU-accelerated computing", stated Jensen Huang, founder and CEO of NVIDIA. "NVIDIA A100 GPU is a 20x AI performance leap and an end-to-end machine learning accelerator - from data analytics to training to inference. For the first time, scale-up and scale-out workloads can be accelerated on one platform. NVIDIA A100 will simultaneously boost throughput and drive down the cost of data centres."

New elastic computing technologies built into A100 make it possible to bring right-sized computing power to every job. A multi-instance GPU capability allows each A100 GPU to be partitioned into as many as seven independent instances for inferencing tasks, while third-generation NVIDIA NVLink interconnect technology allows multiple A100 GPUs to operate as one giant GPU for ever larger training tasks.
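As a rough illustration, and assuming an A100 system with a MIG-capable driver, the seven-way partitioning described above would be configured with NVIDIA's nvidia-smi tool. This is a configuration sketch only - it requires A100 hardware, and profile 19 here refers to the smallest (1g.5gb) instance type on a 40 GB A100:

```shell
# Enable MIG mode on GPU 0 (takes effect after a GPU reset)
sudo nvidia-smi -i 0 -mig 1

# Create seven of the smallest GPU instances (profile 19 = 1g.5gb)
# and a default compute instance on each (-C)
sudo nvidia-smi mig -i 0 -cgi 19,19,19,19,19,19,19 -C

# List the resulting GPU instances to verify the partitioning
sudo nvidia-smi mig -i 0 -lgi
```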

The world's leading Cloud service providers and system builders expected to incorporate A100 GPUs into their offerings include: Alibaba Cloud, Amazon Web Services (AWS), Atos, Baidu Cloud, Cisco, Dell Technologies, Fujitsu, GIGABYTE, Google Cloud, H3C, Hewlett Packard Enterprise (HPE), Inspur, Lenovo, Microsoft Azure, Oracle, Quanta/QCT, Supermicro and Tencent Cloud.

Among the first to tap into the power of NVIDIA A100 GPUs is Microsoft, which will take advantage of their performance and scalability.

"Microsoft trained Turing NLG, the largest language model in the world, using the current generation of NVIDIA GPUs", stated Mikhail Parakhin, corporate vice president at Microsoft. "We will train dramatically bigger AI models using thousands of NVIDIA's new generation of A100 GPUs in Azure at scale to push the state of the art on language, speech, vision and multi-modality."

DoorDash, an on-demand food platform serving as a lifeline to restaurants during the pandemic, notes the importance of having a flexible AI infrastructure.

"Modern and complex AI training and inference workloads that require a large amount of data can benefit from state-of-the-art technology like NVIDIA A100 GPUs, which help reduce model training time and speed up the machine learning development process", stated Gary Ren, machine learning engineer at DoorDash. "In addition, using Cloud-based GPU clusters gives us newfound flexibility to scale up or down as needed, helping to improve efficiency, simplify our operations and save costs."

Other early adopters include national laboratories and some of the world's leading higher education and research institutions, each using A100 to power their next-generation supercomputers. They include:

  • Indiana University, in the U.S., whose Big Red 200 supercomputer is based on HPE's Cray Shasta system, will support scientific and medical research, and advanced research in AI, machine learning and data analytics.
  • Jülich Supercomputing Centre, in Germany, whose JUWELS booster system being built by Atos is designed for extreme computing power and AI tasks.
  • Karlsruhe Institute of Technology, in Germany, which is building its HoreKa supercomputer with Lenovo, will be able to carry out significantly larger multi-scale simulations in the field of materials sciences, earth system sciences, engineering for energy and mobility research, and particle and astroparticle physics.
  • Max Planck Computing and Data Facility, in Germany, with its next-generation supercomputer Raven built by Lenovo, provides high-level support for the development, optimization, analysis and visualization of high-performance-computing applications to Max Planck Institutes.
  • The U.S. Department of Energy's National Energy Research Scientific Computing Center, located at Lawrence Berkeley National Laboratory, which is building its next-generation supercomputer Perlmutter based on HPE's Cray Shasta system to support extreme-scale science and develop new energy sources, improve energy efficiency and discover new materials.

The NVIDIA A100 GPU is a technical design breakthrough fueled by five key innovations:

  • NVIDIA Ampere architecture: At the heart of A100 is the NVIDIA Ampere GPU architecture, which contains more than 54 billion transistors, making it the world's largest 7-nanometer processor.
  • Third-generation Tensor Cores with TF32: NVIDIA's widely adopted Tensor Cores are now more flexible, faster and easier to use. Their expanded capabilities include new TF32 for AI, which allows for up to 20x the AI performance of FP32 precision, without any code changes. In addition, Tensor Cores now support FP64, delivering up to 2.5x more compute than the previous generation for HPC applications.
  • Multi-instance GPU: MIG, a new technical feature, enables a single A100 GPU to be partitioned into as many as seven separate GPUs so it can deliver varying degrees of compute for jobs of different sizes, providing optimal utilization and maximizing return on investment.
  • Third-generation NVIDIA NVLink: Doubles the high-speed connectivity between GPUs to provide efficient performance scaling in a server.
  • Structural sparsity: This new efficiency technique harnesses the inherently sparse nature of AI math to double performance.
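The precision trade-off behind TF32 can be illustrated in plain Python. TF32 keeps FP32's 8-bit exponent but only 10 explicit mantissa bits; the sketch below emulates this by rounding away the 13 low mantissa bits of an FP32 encoding (a simplification that ignores NaN and infinity handling):

```python
import struct

def round_to_tf32(x: float) -> float:
    """Emulate rounding an FP32 value to TF32 precision.

    TF32 keeps FP32's 8-bit exponent but only 10 explicit mantissa
    bits, so the 13 low mantissa bits of the FP32 encoding are
    rounded away (round-to-nearest via add-half-then-truncate).
    """
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits = (bits + (1 << 12)) & ~((1 << 13) - 1) & 0xFFFFFFFF
    return struct.unpack("<f", struct.pack("<I", bits))[0]

# Pi survives only to about three significant decimal digits in TF32
print(round_to_tf32(3.141592653589793))  # 3.140625
```

The reduced mantissa is why TF32 suits AI training, where models tolerate lower precision, while the retained FP32 exponent range avoids the overflow issues of FP16.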

Together, these new features make the NVIDIA A100 ideal for diverse, demanding workloads, including AI training and inference as well as scientific simulation, conversational AI, recommender systems, genomics, high-performance data analytics, seismic modelling and financial forecasting.
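The structural sparsity mentioned above relies on a fixed 2:4 pattern: in every group of four consecutive weights, at most two are non-zero, which lets the hardware skip the zeros. A minimal magnitude-pruning sketch (the function name is illustrative, not NVIDIA's API):

```python
def prune_2_to_4(weights):
    """Prune a flat weight list to a 2:4 structured-sparsity pattern:
    in each group of four consecutive values, zero out the two with
    the smallest magnitude (length is assumed a multiple of four)."""
    out = list(weights)
    for g in range(0, len(out), 4):
        # Indices of this group, ordered by increasing magnitude
        idx = sorted(range(g, g + 4), key=lambda i: abs(out[i]))
        for i in idx[:2]:  # zero the two smallest-magnitude entries
            out[i] = 0.0
    return out

print(prune_2_to_4([1.0, -3.0, 0.5, 2.0]))  # [0.0, -3.0, 0.0, 2.0]
```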

The NVIDIA DGX A100 system, also announced today, features eight NVIDIA A100 GPUs interconnected with NVIDIA NVLink. It is available immediately from NVIDIA and approved partners.

Alibaba Cloud, AWS, Baidu Cloud, Google Cloud, Microsoft Azure, Oracle and Tencent Cloud are planning to offer A100-based services.

Additionally, a wide range of A100-based servers are expected from the world's leading systems manufacturers, including Atos, Cisco, Dell Technologies, Fujitsu, GIGABYTE, H3C, HPE, Inspur, Lenovo, Quanta/QCT and Supermicro.

To help accelerate development of servers from its partners, NVIDIA has created HGX A100 - a server building block in the form of integrated baseboards in multiple GPU configurations.

The four-GPU HGX A100 offers full interconnection between GPUs with NVLink, while the eight-GPU configuration offers full GPU-to-GPU bandwidth through NVIDIA NVSwitch. HGX A100, with the new MIG technology, can be configured as 56 small GPUs, each faster than NVIDIA T4, all the way up to a giant eight-GPU server with 10 petaflops of AI performance.

NVIDIA also announced several updates to its software stack enabling application developers to take advantage of A100 GPU's innovations. They include new versions of more than 50 CUDA-X libraries used to accelerate graphics, simulation and AI; CUDA 11; NVIDIA Jarvis, a multimodal, conversational AI services framework; NVIDIA Merlin, a deep recommender application framework; and the NVIDIA HPC SDK, which includes compilers, libraries and tools that help HPC developers debug and optimize their code for A100.
Source: NVIDIA
