Primeur weekly 2018-02-26

Focus

The European Processor Initiative (EPI) to develop the processor that will be at the heart of the European exascale supercomputer effort ...

Quantum computing

Unconventional superconductor may be used to create quantum computers of the future ...

Developing reliable quantum computers: International research team makes important step on the path to solving certification problems ...

Programming on a silicon quantum chip ...

D-Wave locks in $20 million funding and completes prototype of next-gen quantum processor ...

Focus on Europe

Luxembourg joins the European supercomputer network PRACE ...

ICEI Public Information Event on the realisation of a federated HPC and data analytics infrastructure ...

Dutch eScience and Lorentz Centers launch Call to host workshop on digitally enhanced research ...

Middleware

Adaptive Computing announces release of Moab HPC Suite 9.1.2 ...

NCSA Assistant Director Dan Katz named BSSw Fellow ...

University of Nevada, Las Vegas' supercomputing boosted through new collaboration with Altair ...

Hardware

C-DAC focusing on cancer treatment using supercomputers as a tool ...

South-Korean Ministry of Science and Technology to announce Second National High-Performance Computing Fundamental Plan ...

IBM reveals novel energy-saving optical receiver with a new record of rapid power-on/off time ...

Computers learn to learn: Intel and researchers from Heidelberg and Dresden present three new neuromorphic chips ...

HPE helps U.S. Department of Defense to advance national defense capabilities ...

OCF deploys petascale Lenovo supercomputer at University of Southampton ...

HPE reports fiscal 2018 first quarter results ...

Ranovus announces general availability of its on-board optics and CFP2 direct detect transceiver products for 5G mobility and data centre interconnect applications ...

Mellanox appoints Steve Sanghi and Umesh Padval to Board of Directors ...

Applications

Supercomputers aid discovery of new, inexpensive material to make LEDs with high colour quality ...

AI companies to reuse crypto mining farms for deep learning in health care ...

Computer scientists and materials researchers collaborate to optimize steel classification ...

Boris Kaus receives ERC Consolidator Grant for his research in magmatic processes ...

Metabolic modelling becomes three-dimensional ...

Powerful supercomputer unlocks possibilities for tinier devices and affordable DNA sequencing ...

New Berkeley Lab algorithms create "Minimalist Machine Learning" that analyzes images from very little information ...

The Cloud

Adaptive Computing makes HPC Cloud strategies more accessible with the Moab/NODUS Cloud Bursting 1.1.0 release ...

USFlash

Gen-Z Consortium announces the public release of its Core Specification 1.0 ...

New Berkeley Lab algorithms create "Minimalist Machine Learning" that analyzes images from very little information


Slice of mouse lymphoblastoid cells. Raw data (a), corresponding manual segmentation (b), and output of an MS-D network with 100 layers. Data from A. Ekman, C. Larabell, National Center for X-ray Tomography.
21 Feb 2018 Berkeley - Mathematicians at the Department of Energy's Lawrence Berkeley National Laboratory have developed a new approach to machine learning aimed at experimental imaging data. Rather than relying on the tens or hundreds of thousands of images used by typical machine learning methods, this new approach "learns" much more quickly and requires far fewer images.

Daniël Pelt and James Sethian of Berkeley Lab's Center for Advanced Mathematics for Energy Research Applications (CAMERA) turned the usual machine learning perspective on its head by developing what they call a "Mixed-Scale Dense Convolution Neural Network" (MS-D) that requires far fewer parameters than traditional methods, converges quickly, and has the ability to "learn" from a remarkably small training set. Their approach is already being used to extract biological structure from cell images, and is poised to provide a major new computational tool to analyze data across a wide range of research areas.

As experimental facilities generate higher resolution images at higher speeds, scientists can struggle to manage and analyze the resulting data, which is often done painstakingly by hand. In 2014, James Sethian established CAMERA at Berkeley Lab as an integrated, cross-disciplinary centre to develop and deliver fundamental new mathematics required to capitalize on experimental investigations at DOE Office of Science user facilities. CAMERA is part of the lab's Computational Research Division.

"In many scientific applications, tremendous manual labour is required to annotate and tag images - it can take weeks to produce a handful of carefully delineated images", stated James Sethian, who is also a mathematics professor at the University of California, Berkeley. "Our goal was to develop a technique that learns from a very small data set."

Details of the algorithm were published on December 26, 2017, in a paper in the Proceedings of the National Academy of Sciences.

"The breakthrough resulted from realizing that the usual downscaling and upscaling that capture features at various image scales could be replaced by mathematical convolutions handling multiple scales within a single layer", stated Daniël Pelt, who is also a member of the Computational Imaging Group at the Centrum Wiskunde & Informatica, the national research institute for mathematics and computer science in the Netherlands.
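The idea of a dilated convolution can be illustrated with a toy NumPy sketch (not the authors' implementation, just the basic mechanism): the same small kernel is applied with its taps spaced `dilation` samples apart, so one layer can respond to fine or coarse structure without downscaling or upscaling the signal.

```python
import numpy as np

def dilated_conv1d(signal, kernel, dilation):
    """Convolve a 1-D signal with a small kernel whose taps are spaced
    `dilation` samples apart, so the same 3-tap kernel can capture
    either fine or coarse structure without rescaling the signal."""
    half = len(kernel) // 2
    out = np.zeros(len(signal))
    for i in range(len(signal)):
        for k, w in enumerate(kernel):
            j = i + (k - half) * dilation
            if 0 <= j < len(signal):   # zero-pad outside the signal
                out[i] += w * signal[j]
    return out

signal = np.arange(8, dtype=float)
kernel = [1.0, 1.0, 1.0]
narrow = dilated_conv1d(signal, kernel, dilation=1)  # sums neighbours 1 apart
wide = dilated_conv1d(signal, kernel, dilation=2)    # sums neighbours 2 apart
```

With `dilation=2` the kernel spans twice the range of the input while keeping the same three weights, which is how a single layer can handle multiple scales.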

To make the algorithm accessible to a wide set of researchers, a Berkeley team led by Olivia Jain and Simon Mo built a web portal, "Segmenting Labeled Image Data Engine" (SlideCAM), as part of the CAMERA suite of tools for DOE experimental facilities.

One promising application is in understanding the internal structure of biological cells: in one project, Daniël Pelt's and James Sethian's MS-D method needed data from only seven cells to determine the cell structure.

"In our laboratory, we are working to understand how cell structure and morphology influences or controls cell behaviour. We spend countless hours hand-segmenting cells in order to extract structure, and identify, for example, differences between healthy vs. diseased cells", stated Carolyn Larabell, Director of the National Center for X-ray Tomography and Professor at the University of California San Francisco School of Medicine. "This new approach has the potential to radically transform our ability to understand disease, and is a key tool in our new Chan-Zuckerberg-sponsored project to establish a Human Cell Atlas, a global collaboration to map and characterize all cells in a healthy human body."

Images are everywhere. Smartphones and sensors have produced a treasure trove of pictures, many tagged with pertinent information identifying content. Using this vast database of cross-referenced images, convolutional neural networks and other machine learning methods have revolutionized our ability to quickly identify natural images that look like ones previously seen and catalogued.

These methods "learn" by tuning a stunningly large set of hidden internal parameters, guided by millions of tagged images, and requiring large amounts of supercomputer time. But what if you don't have so many tagged images? In many fields, such a database is an unachievable luxury. Biologists record cell images and painstakingly outline the borders and structure by hand: it's not unusual for one person to spend weeks coming up with a single fully three-dimensional image. Materials scientists use tomographic reconstruction to peer inside rocks and materials, and then roll up their sleeves to label different regions, identifying cracks, fractures, and voids by hand. Contrasts between different yet important structures are often very small and "noise" in the data can mask features and confuse the best of algorithms (and humans).

These precious hand-curated images are nowhere near enough for traditional machine learning methods. To meet this challenge, mathematicians at CAMERA attacked the problem of machine learning from very limited amounts of data. Trying to do "more with less", their goal was to figure out how to build an efficient set of mathematical "operators" that could greatly reduce the number of parameters. These mathematical operators might naturally incorporate key constraints to help in identification, such as by including requirements on scientifically plausible shapes and patterns.

Many applications of machine learning to imaging problems use deep convolutional neural networks (DCNNs), in which the input image and intermediate images are convolved in a large number of successive layers, allowing the network to learn highly nonlinear features. To achieve accurate results for difficult image processing problems, DCNNs typically rely on combinations of additional operations and connections including, for example, downscaling and upscaling operations to capture features at various image scales. To train deeper and more powerful networks, additional layer types and connections are often required. Finally, DCNNs typically use a large number of intermediate images and trainable parameters, often more than 100 million, to achieve results for difficult problems.

Instead, the new "Mixed-Scale Dense" network architecture avoids many of these complications: it calculates dilated convolutions as a substitute for scaling operations to capture features at various spatial ranges, employs multiple scales within a single layer, and densely connects all intermediate images. The new algorithm achieves accurate results with few intermediate images and parameters, eliminating both the need to tune hyperparameters and the need for additional layers or connections to enable training.
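The dense-connection idea can be sketched schematically (a NumPy toy with a fixed averaging filter standing in for trainable convolutions; layer count and dilation schedule are illustrative, not the paper's): every layer receives all previous feature maps, and each layer uses a different dilation, so multiple scales coexist without down- or upscaling.

```python
import numpy as np

def dilated_blur(img, dilation):
    """Toy stand-in for a trainable dilated convolution: average each
    pixel with its horizontal neighbours `dilation` pixels away
    (wrapping at the edges for brevity)."""
    left = np.roll(img, dilation, axis=1)
    right = np.roll(img, -dilation, axis=1)
    return (left + img + right) / 3.0

def msd_forward(x, n_layers):
    """Schematic Mixed-Scale Dense forward pass: every layer sees ALL
    previous feature maps, and each layer uses a different dilation,
    so the network mixes scales within single layers instead of
    rescaling the image."""
    maps = [x]
    for layer in range(n_layers):
        dilation = layer % 4 + 1  # cycle through dilations 1..4
        # densely connected: combine contributions from every earlier map
        new_map = sum(dilated_blur(m, dilation) for m in maps) / len(maps)
        maps.append(new_map)
    # the final output mixes all intermediate maps
    return sum(maps) / len(maps)

x = np.random.rand(4, 16)      # a small hypothetical feature map
y = msd_forward(x, n_layers=3)
```

Because each layer reuses every earlier map rather than generating many fresh ones, far fewer intermediate images and parameters are needed than in a conventional DCNN.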

A different challenge is to produce high-resolution images from low-resolution input. As anyone who has tried to enlarge a small photo knows, the image only gets worse as it gets bigger, so this sounds close to impossible. But a small set of training images processed with a Mixed-Scale Dense network can provide real headway. As an example, imagine trying to denoise tomographic reconstructions of a fiber-reinforced mini-composite material. In an experiment described in the paper, images were reconstructed using 1,024 acquired X-ray projections to obtain images with relatively low amounts of noise. Noisy images of the same object were then obtained by reconstructing using only 128 projections. Training inputs were the noisy images, with the corresponding low-noise images used as target output during training. The trained network was then able to effectively remove the noise from new input data.
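The training setup described above can be sketched as follows (synthetic data and a plain mean-squared-error loss standing in for the real tomographic reconstructions and training code, which the article does not detail): noisy reconstructions serve as inputs, the corresponding low-noise reconstructions as targets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training pairs: low-noise reconstructions (e.g. from many
# X-ray projections) as targets, noisy reconstructions (from few
# projections) as inputs.
clean = rng.random((7, 32, 32))   # a small training set, as in the article
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

def mse(pred, target):
    """Mean squared error, the usual training loss for denoising:
    the network is tuned to minimize this over the training pairs."""
    return float(np.mean((pred - target) ** 2))

# Before training, passing the noisy input straight through leaves all
# the noise in place; any network that removes part of it lowers the loss.
baseline = mse(noisy, clean)
```

A trained denoiser succeeds exactly when its output scores a lower MSE against the clean targets than this identity baseline.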

Daniël Pelt and James Sethian are taking their approach to a host of new areas, such as fast real-time analysis of images coming out of synchrotron light sources and reconstruction problems in biological imaging, for instance cell and brain mapping.

"These new approaches are really exciting, since they will enable the application of machine learning to a much greater variety of imaging problems than currently possible", Daniël Pelt stated. "By reducing the amount of required training images and increasing the size of images that can be processed, the new architecture can be used to answer important questions in many research fields."

CAMERA is supported by the offices of Advanced Scientific Computing Research and Basic Energy Sciences in the Department of Energy's Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.

Source: Lawrence Berkeley National Laboratory
