
Primeur weekly 2013-06-24

Special

Human Brain Project to seek support in neuromorphic computing and non-volatile memory approach ...

Deploying new and more energy-efficient combustion technologies with exascale power ...

Parallelism, hybrid architectures, fault tolerance and power major challenges for extreme computing ...

The Cloud

Dell launches secure and flexible Cloud solution for U.S. Governments ...

Eurotech launches new release of Everyware Cloud to simplify device management in the Internet of Things ...

Thermax Ltd chooses IBM PureSystems and SmartCloud over Cisco and Dell ...

Cloud computing user privacy in serious need of reform, scholars say ...

VTI brings Internet of Things (IOT) and Cloud computing to test and measurement ...

Desktop Grids

Seeking testers for BOINC on Android ...

SETIspirit Windows GUI for SETI@home released ...

Using IBM's crowdsourced supercomputer, Harvard rates solar energy potential of 2.3 million new compounds ...

EuroFlash

projectiondesign ships ProNet.precision, camera-assisted warp and blend software ...

Remote Cluster Administration offers a unique solution to the HPC skills gap ...

New Cluster Installation Further Strengthens Regional HPC Infrastructure ...

Altair Engineering announces 8th UK Altair Technology Conference; to be held at the Heritage Motor Centre, Gaydon, Warwickshire ...

GENOA, MCQ-Composites to join Altair Partner Alliance Composites line-up ...

Altair broadens relationship with Siemens PLM Software to enhance data exchange for its CAE software users ...

Neuroscience to benefit from hybrid supercomputer memory ...

ISC'13 caps 28th Conference with new attendance, awards and more ...

USFlash

CANARIE upgrades 100G research & education network with Ciena ...

Linguists, computer scientists use supercomputers to improve natural language processing ...

UC San Diego launches new research computing programme ...

Which qubit my dear? New method to distinguish between neighbouring quantum bits ...

Making memories: Practical quantum computing moves closer to reality ...

Intel introducing new Lustre solution during Lustre event, addressing new Lustre markets ...

NetApp unveils clustered data ONTAP innovations that pave the way for software-defined storage ...

HP expands Converged Storage portfolio ...

UC San Diego researchers get access to Open Science Grid ...

HP extends support for OpenVMS through year 2020 ...

IBM expands support for Linux on Power Systems servers ...

Human Brain Project to seek support in neuromorphic computing and non-volatile memory approach


A 3D model of a neuron: reconstructed from lab data. The “sprouting” protuberances are “pre-synaptic terminals” – the points where the neuron will form connections (“synapses”) with other neurons ©EPFL/Blue Brain Project
18 Jun 2013 Leipzig - In the satellite event on Supercomputing and the Human Brain Project at ISC'13 in Leipzig, Thomas Schulthess from the Swiss National Supercomputing Centre (CSCS) and Karlheinz Meier from the University of Heidelberg described the current and future opportunities for new developments in HPC within the Human Brain Project (HBP), which will run for the next ten years. Thomas Schulthess expanded on the HBP's requirement for large amounts of non-volatile memory, while Karlheinz Meier informed the audience about two worldwide unique general-purpose neuromorphic computing systems with complementary approaches: the UK SpiNNaker project and the EU BrainScaleS project.

Karlheinz Meier distinguished between three types of platforms for future computing within HBP. High Performance Computing in the project can provide interactive, visual exascale supercomputing in the years to come, tackling massive distributed volumes of heterogeneous data in convergence with neuromorphic technology.

Neuromorphic computing is a whole new concept, and the first generic, large-scale neuromorphic systems have yet to be built. The machines will go beyond Turing, without any algorithmic operation, and beyond von Neumann, providing computation immersed in memory. Many different technologies will have to be integrated.

In addition, a neurorobotic platform will be needed, according to Karlheinz Meier, consisting of virtual robots with two-way, closed loop interfaces. They will have a link to brain models and neuromorphic systems. Developers will have to start building physical prototypes, followed by applications.

In simulations, mathematical abstraction is required in order to generate a synthesis of the physical model.

According to Karlheinz Meier, the arguments for neuromorphic computing are low power consumption, fault tolerance, plasticity for learning and development, speed, and scalability.

The challenges ahead consist of acquiring the necessary neuroscience knowledge and the flexibility to do so. Then there is the configurability problem when using technologies for distributed memory. A third challenge lies in the 3D integration density of nano-components. Developers also have to look into circuit re-use, using CAE tools. Last but not least, user access has to be made friendly by building a unified software toolset.

Karlheinz Meier explained to the attendees how energy scales when used for a synaptic transmission, with 14 orders of magnitude difference for the same operation. The physical models are typically 10 million times more energy efficient than state-of-the-art computing. Temporal dynamics is key to understanding and using the computational paradigms of the brain.

Europe has an impressive past in this field with lots of projects, Karlheinz Meier stated.

The UK SpiNNaker project uses 18 ARM968 cores per chip for integer operations, with a 200 MHz processor clock, shared system RAM on die, and 128 MByte of DRAM stacked on the die. Each chip has six bi-directional links, each carrying 6 million spikes per second.
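As a quick sanity check, the per-chip figures above imply the following aggregate spike bandwidth. This is a minimal back-of-the-envelope sketch; the constant names are mine, not SpiNNaker terminology, and per-core spike handling rates are not given in the talk.

```python
# Back-of-the-envelope figures for a single SpiNNaker chip, using only the
# numbers quoted above (18 ARM968 cores, 6 links, 6 million spikes/s/link).
CORES_PER_CHIP = 18
LINKS_PER_CHIP = 6
SPIKES_PER_SECOND_PER_LINK = 6_000_000

# Aggregate spike traffic a chip can exchange over its six
# bi-directional links.
spikes_per_chip_per_second = LINKS_PER_CHIP * SPIKES_PER_SECOND_PER_LINK
print(f"{spikes_per_chip_per_second:,} spikes/s per chip")
```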

The EU BrainScaleS project provides a neural processing unit with up to 200,000 neurons and 50 million synapses. The system runs at an acceleration factor of 10,000 compared to real time.

For the HBP project the developers also need control and communication FPGAs, and a control and communication board with digital communication ASICs, as well as a Neural Network Wafer for the wafer-scale integration of analogue neural networks.

The neuromorphic systems roadmap includes the NM-PM-1 physical model system as well as many-core systems.

Experiments will be set up in four categories of fields:

1. fundamental dynamical properties of isolated circuits

2. implementing and testing fundamental, generic concepts and theories

3. biologically realistic, reverse engineered circuits in closed loops

4. generic neuromorphic computing outside neuroscience

Within HBP, the scientists want to reduce the complexity. The question, concluded Karlheinz Meier, is how far they can go.

Thomas Schulthess from CSCS informed the audience which HPC sites are involved within HBP. They include CINECA, KIT, CSCS, BSC, and Juelich.

The focus is on cellular brain simulations, but molecular dynamics codes are equally important. The developers will be scaling the neuron-based simulations of the HBP from a single-cell model to the cellular neocortical column, to the cellular mesocircuit, on to the cellular rodent brain, and finally the cellular human brain.

What is this cellular brain model made of? Thomas Schulthess went on. We have the individual neuron with dendrites, soma, and axon. To simulate this, 1 MB is the minimal memory requirement. These elements are connected to other neurons via synapses. The neocortical column of a rat has more than 30 thousand neurons and requires 30 GB of memory for simulation. The mouse brain has more than 70 million neurons and needs 70 TB of memory, whereas the human brain has more than 90 billion neurons; here, 90 PB of memory are required.
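The memory figures above follow directly from the stated 1 MB-per-neuron requirement and can be reproduced in a few lines. This is a small sketch; the dictionary keys and the decimal-unit convention are assumptions on my part.

```python
# Reproduces the memory-requirement scaling quoted above, assuming, as
# stated, roughly 1 MB of simulation state per neuron (decimal units).
MB = 1_000_000  # bytes

models = {
    "rat neocortical column": 30_000,       # > 30 thousand neurons
    "mouse brain": 70_000_000,              # > 70 million neurons
    "human brain": 90_000_000_000,          # > 90 billion neurons
}

def memory_bytes(neurons, bytes_per_neuron=MB):
    """Minimal memory needed to simulate a network of `neurons` cells."""
    return neurons * bytes_per_neuron

for name, n in models.items():
    print(f"{name}: {memory_bytes(n) / 1e12:.3g} TB")
```

The human-brain row comes out at 90,000 TB, i.e. the 90 PB quoted in the talk.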

The researchers will be abstracting the electrical properties of a neuron in a multi-compartment model by connecting multi-compartment neurons into a network. This requires a gigantic set of coupled ODEs and the use of an implicit method for time integration. The researchers need to solve a gigantic sparse linear system of equations. The time step is 0.025 ms; the simulation time ranges from seconds to minutes, to hours, to years, and so on, Thomas Schulthess explained.
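The implicit time-integration step described above can be illustrated on a toy scale: a chain of passive, diffusively coupled compartments yields a tridiagonal (sparse) linear system to solve per 0.025 ms time step. This is a minimal sketch, not the actual NEURON algorithm; the conductance values and the pure-Python Thomas solver are illustrative assumptions.

```python
# One backward-Euler step for a chain of passive, diffusively coupled
# compartments. The tridiagonal structure stands in for the sparse system
# described above; coefficients are illustrative, not taken from NEURON.

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system: sub-diagonal a, diagonal b,
    super-diagonal c, right-hand side d. Requires c[-1] == 0."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def implicit_euler_step(v, dt=0.025, g_leak=0.1, g_c=1.0):
    """Advance dV_i/dt = -g_leak*V_i + g_c*(V_{i-1} - 2*V_i + V_{i+1})
    by one backward-Euler step of dt (the 0.025 ms quoted above)."""
    n = len(v)
    a = [-dt * g_c] * n                  # coupling to the left neighbour
    c = [-dt * g_c] * (n - 1) + [0.0]    # coupling to the right neighbour
    b = [1.0 + dt * g_leak + dt * g_c * ((i > 0) + (i < n - 1))
         for i in range(n)]
    return thomas_solve(a, b, c, v)

v = [0.0] * 10
v[5] = -70.0                             # perturb one compartment (mV)
for _ in range(100):
    v = implicit_euler_step(v)           # voltage spreads and decays
```

Backward Euler keeps such stiff, dissipative systems stable at a fixed step size, which is why an implicit method (and hence a sparse solve per step) is needed at all.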

The researchers will opt for a parallelization strategy that takes advantage of the known topology and physics of signal propagation, in order to distribute the sparse matrix over nodes and break the large linear system into smaller chunks. In this way, they are able to parallelize over neurons and over compartments.

Thomas Schulthess explained that the challenge lies in the arithmetic density of simulations with the NEURON code in the HBP. There are dense linear algebra problems and sparse linear problems to overcome. The neuron simulations currently perform about 1/4 Flops per load-store, but Flops is not a useful metric for the HBP.

Low arithmetic density means we are bound by memory bandwidth, Thomas Schulthess stated. Since the memory footprint drives the hardware roadmap, the researchers have to consider an alternative approach with active storage. They have to run a compute-intensive simulation with a minimal memory footprint on a bandwidth-optimized, tightly-coupled parallel supercomputer.
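A simple roofline-style calculation shows why such a low arithmetic density makes the simulation bandwidth-bound. The node's peak figures below are hypothetical placeholders; only the arithmetic intensity (about 1/4 Flops per load-store) is the figure from the talk.

```python
# Roofline-style check of the memory-bandwidth claim above. Node figures
# are hypothetical; the intensity (~1/4 Flops per 8-byte word) is quoted.
FLOPS_PER_WORD = 0.25        # ~1/4 Flops per load-store
BYTES_PER_WORD = 8           # double-precision word
intensity = FLOPS_PER_WORD / BYTES_PER_WORD   # Flops per byte

peak_flops = 1.0e12          # hypothetical 1 Tflop/s node
peak_bandwidth = 200.0e9     # hypothetical 200 GB/s memory bandwidth

# Attainable performance is capped by whichever resource saturates first.
attainable = min(peak_flops, intensity * peak_bandwidth)
print(f"attainable: {attainable / 1e9:.2f} Gflop/s")
```

At that intensity the hypothetical node reaches only 6.25 Gflop/s of its 1 Tflop/s peak, so memory bandwidth, not floating-point rate, is the limiting resource.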

The researchers need active storage where large amounts of data are held in a non-volatile solid state memory. This storage system contains compute nodes for introspection.

The project will also use regular, disk-based data centre storage. The active storage concept combines scalable, solid-state storage with the BlueGene/Q supercomputer. IBM's BlueGene system looks like a perfect match for satisfying the large data needs of neuron simulations, according to Thomas Schulthess.

An alternative possibility consists in adding NV-memory to hybrid-multicore (HMC) nodes. Given the low arithmetic density and large concurrency, the researchers need lots of throughput optimized cores and appropriate amounts of bandwidth optimized memory. In fact, they will need lots of non-volatile memory.

Thomas Schulthess concluded that cellular neuron-model based simulations at the scale of the entire mammalian brain seem within range. The simulations solve a sparse linear system of equations. The algorithms take advantage of the known topology and physical properties of the neuron network. Scaling is not an issue. The arithmetic density is below one Flops per load-store, so the simulations are limited by memory footprint, not by Flops. The large memory requirement is driven by introspection.

The researchers will need to explore active storage technologies to reduce RAM but there is room for other memory and storage technologies.

More information is available at the Human Brain Project website.

Leslie Versweyveld
