Back to Table of contents

Primeur weekly 2020-05-25

Focus

InsideHPC's Rich Brueckner passes away at 58 ...

Only half of the EuroHPC countries confirmed they will participate in co-funding EuroHPC projects ...

Quantum computing

Atos and CSC empower the Finnish quantum research community with Atos Quantum Learning Machine ...

Quantum Hall effect 'reincarnated' in 3D topological materials ...

University of California Los Angeles physicists develop world's best quantum bits ...

Quantum leap: Photon discovery is a major step toward at-scale quantum technologies ...

Focus on Europe

HPC-AI Advisory Council and ISC Group continue Annual Student Cluster Competition as an online event ...

ELASTIC software architecture advances urban mobility in Florence ...

Middleware

Waste energy of LUMI supercomputer produces 20 percent of the district heat of Kajaani ...

Italy-based Do IT Systems becomes Bright Services Partner ...

Hardware

Microsoft claims to have built new supercomputer for OpenAI ...

New York University and IBM research takes electrons for a spin in moving toward more efficient, higher density data ...

GRC wins Phase I of AFWERX Programme ...

Samsung Electronics expands its foundry capacity with a new production line in Pyeongtaek, Korea ...

HPE reports Q2 results ...

Intersect360 Research invites your input in an all-new survey ...

Applications

Scientists seeking therapies for deadly bacterial disease ...

2020 OLCF User Meeting will happen remote together ...

Summit supercomputer is mining for COVID-19 connections ...

A new method for unraveling complex gene interactions ...

ACM honours computing innovators for advances in research, education and industry ...

Discovery about the edge of fusion plasma could help realize fusion power ...

Superconductors with 'zeitgeist' - When materials differentiate between past and future ...

NIST team builds hybrid quantum system by entangling molecule with atom ...

BIOS IT partners with Panasas to deliver ActiveStor Ultra turnkey HPC storage appliances ...

Supercomputer model simulations reveal cause of Neanderthal extinction ...

Pittsburgh Supercomputing Center is supporting COVID-19 research ...

The Cloud

New technique separates industrial noise from natural seismic signals ...

Lyell Immunopharma goes all-in on AWS as its Cloud provider ...

WekaIO furthers Weka AI by integrating with deep learning pipeline management solution from Valohai ...

Microsoft claims to have built new supercomputer for OpenAI


At its Build developers conference, Microsoft Chief Technical Officer Kevin Scott is announcing that the company has built one of the top five publicly disclosed supercomputers in the world. Art by Craighton Berman.
19 May 2020 Redmond - In a recent blog post, Jennifer Langston, who writes about Microsoft research and innovation, reports that Microsoft announced at its Build developers conference that it has built one of the top five publicly disclosed supercomputers in the world, making new infrastructure available in Azure to train extremely large artificial intelligence models. Built in collaboration with and exclusively for OpenAI, the supercomputer hosted in Azure was designed specifically to train OpenAI's models. It represents a key milestone in a partnership announced last year to jointly create new supercomputing technologies in Azure.

This is also a first step toward making the next generation of very large AI models and the infrastructure needed to train them available as a platform for other organisations and developers to build upon.

"The exciting thing about these models is the breadth of things they're going to enable", stated Microsoft Chief Technical Officer Kevin Scott, who said the potential benefits extend far beyond narrow advances in one type of AI model.

"This is about being able to do a hundred exciting things in natural language processing at once and a hundred exciting things in computer vision, and when you start to see combinations of these perceptual domains, you're going to have new applications that are hard to even imagine right now", he stated.

Machine learning experts have historically built separate, smaller AI models that use many labeled examples to learn a single task such as translating between languages, recognizing objects, reading text to identify key points in an e-mail or recognizing speech well enough to deliver today's weather report when asked.

A new class of models developed by the AI research community has proven that some of those tasks can be performed better by a single massive model - one that learns from examining billions of pages of publicly available text, for example. This type of model can so deeply absorb the nuances of language, grammar, knowledge, concepts and context that it can excel at multiple tasks: summarizing a lengthy speech, moderating content in live gaming chats, finding relevant passages across thousands of legal files or even generating code from scouring GitHub, Jennifer Langston explains in her blog.

As part of a company-wide AI at Scale initiative, Microsoft has developed its own family of large AI models, the Microsoft Turing models, which the company has used to improve many different language understanding tasks across Bing, Office, Dynamics and other productivity products. Earlier this year, it also released to researchers the largest publicly available AI language model in the world, the Microsoft Turing model for natural language generation.

The goal, according to Microsoft, is to make its large AI models, training optimization tools and supercomputing resources available through Azure AI services and GitHub so developers, data scientists and business customers can easily leverage the power of AI at Scale.

"By now most people intuitively understand how personal computers are a platform - you buy one and it's not like everything the computer is ever going to do is built into the device when you pull it out of the box", Kevin Scott stated.

"That's exactly what we mean when we say AI is becoming a platform", he stated. "This is about taking a very broad set of data and training a model that learns to do a general set of things and making that model available for millions of developers to go figure out how to do interesting and creative things with."

Training massive AI models requires advanced supercomputing infrastructure, or clusters of state-of-the-art hardware connected by high-bandwidth networks. It also needs tools to train the models across these interconnected computers, Jennifer Langston writes.

The supercomputer developed for OpenAI is a single system with more than 285,000 CPU cores, 10,000 GPUs and 400 gigabits per second of network connectivity for each GPU server. Compared with the machines on the TOP500 list of the world's supercomputers, it would rank in the top five, according to Microsoft. Hosted in Azure, the supercomputer also benefits from all the capabilities of a robust modern Cloud infrastructure, including rapid deployment, sustainable data centres and access to Azure services.
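The quoted specifications allow a quick back-of-envelope calculation. Note that the number of GPUs per server is an assumption for illustration only (8 is a common density for GPU servers of that era); Microsoft did not disclose it.

```python
# Back-of-envelope figures derived from the specifications quoted above.
# ASSUMED_GPUS_PER_SERVER is hypothetical, not a disclosed number.
CPU_CORES = 285_000
GPUS = 10_000
GBPS_PER_GPU_SERVER = 400
ASSUMED_GPUS_PER_SERVER = 8  # assumption for illustration

cores_per_gpu = CPU_CORES / GPUS                       # CPU cores per GPU
servers = GPUS // ASSUMED_GPUS_PER_SERVER              # server count (assumed)
aggregate_tbps = servers * GBPS_PER_GPU_SERVER / 1000  # aggregate network bandwidth (assumed)

print(cores_per_gpu)    # 28.5 CPU cores per GPU
print(servers)          # 1250 GPU servers (under the assumption)
print(aggregate_tbps)   # 500.0 terabits/s aggregate (under the assumption)
```

Even under these rough assumptions, the aggregate network bandwidth alone illustrates why Microsoft describes this as supercomputing-class rather than ordinary Cloud infrastructure.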

"As we've learned more and more about what we need and the different limits of all the components that make up a supercomputer, we were really able to say, 'If we could design our dream system, what would it look like?'" statedd OpenAI CEO Sam Altman. "And then Microsoft was able to build it."

OpenAI's goal is not just to pursue research breakthroughs but also to engineer and develop powerful AI technologies that other people can use, Sam Altman said. The supercomputer developed in partnership with Microsoft was designed to accelerate that cycle.

"We are seeing that larger-scale systems are an important component in training more powerful models", Sam Altman stated.

For customers who want to push their AI ambitions but who don't require a dedicated supercomputer, Azure AI provides access to powerful compute with the same set of AI accelerators and networks that also power the supercomputer. Microsoft is also making available the tools to train large AI models on these clusters in a distributed and optimized way, Jennifer Langston writes.

At its Build conference, Microsoft announced that it would soon begin open sourcing its Microsoft Turing models, as well as recipes for training them in Azure Machine Learning. This will give developers access to the same family of powerful language models that the company has used to improve language understanding across its products.

It also unveiled a new version of DeepSpeed, an open source deep learning library for PyTorch that reduces the amount of computing power needed for large distributed model training. The update is significantly more efficient than the version released just three months ago and now allows people to train models more than 15 times larger and 10 times faster than they could without DeepSpeed on the same infrastructure.
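DeepSpeed is driven by a JSON configuration file passed to the training script. A minimal sketch of such a configuration is shown below; the field values are illustrative only and are not the settings Microsoft or OpenAI used.

```json
{
  "train_batch_size": 512,
  "fp16": {
    "enabled": true
  },
  "zero_optimization": {
    "stage": 1
  }
}
```

The `zero_optimization` section enables DeepSpeed's ZeRO memory optimizations, which partition training state across GPUs and are central to the memory savings that allow much larger models to fit on the same hardware.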

Along with the DeepSpeed announcement, Microsoft announced it has added support for distributed training to the ONNX Runtime. The ONNX Runtime is an open source library designed to enable models to be portable across hardware and operating systems. To date, the ONNX Runtime has focused on high-performance inferencing; today's update adds support for model training, as well as adding the optimizations from the DeepSpeed library, which enable performance improvements of up to 17 times over the current ONNX Runtime.

"We want to be able to build these very advanced AI technologies that ultimately can be easily used by people to help them get their work done and accomplish their goals more quickly", stated Microsoft principal programme manager Phil Waymouth. "These large models are going to be an enormous accelerant."

More information is available in Jennifer Langston's blog.

Source: Microsoft
