Primeur weekly 2013-07-15

Special

Supercomputer acting as a discovery machine for neuroscience ...

Exascale supercomputing

Jack Dongarra helps design software for the next generation of supercomputers receiving $1 million grant ...

The Cloud

e-Infrastructures for e-Sciences 2013 ...

EastWest Bank chooses HP to build private Cloud ...

IBM accelerates Cloud computing on System z with acquisition of CSL International ...

IBM closes acquisition of SoftLayer Technologies ...

L'Oréal Uruguay consolidates IT infrastructure into the private Cloud and reduces power consumption with IBM PureSystems ...

Oracle announces availability of Cloud Application Foundation, the #1 application foundation across conventional and Cloud environments ...

Oracle expands support for mobile and Cloud technologies with the latest Java development tools and framework ...

US Ignite recognizes RENCI and NC State for innovative app for monitoring power grids ...

EuroFlash

MINES ParisTech selects Bright Cluster Manager for materials science research cluster ...

European Commission to launch survey about Future Internet Assembly ...

Bull launches StoreWay Optima 4600: reaffirming its presence in the new-generation data handling market ...

DANTE celebrates 20 years of networking excellence ...

Oracle Enterprise Manager 12c deployed by CERN to manage its Oracle infrastructure ...

PRACE Winter School 2014 to be held in Tel Aviv, Israel ...

USFlash

Professor Jack Dongarra announces new supercomputer benchmark ...

CASL milestone validates reactor model using TVA data ...

DOST to strengthen weather forecasting to benefit farmers ...

Fujitsu M10 achieves world-record result on two-tier SAP SD standard application benchmark ...

Fujitsu integrates internal database platform using SPARC M10 servers ...

1Wealth Trading Company opines on IBM's foray into global banking ...

Golf Channel selects Oracle's Pillar Axiom Storage System to support rapid growth and new reality programming ...

Stephen F. Austin State University readies for growth; speeds registration, adds 8X capacity and unlocks IT for strategic projects ...

TACC supercomputers help microfluidics researchers make waves at the microscopic level ...

Supercomputer acting as a discovery machine for neuroscience


19 Jun 2013 Leipzig - In the session on "Better Understanding Brains, Genomes and Life Using HPC Systems" at the ISC'13 event in Leipzig, Markus Diesmann from the Institute of Neuroscience and Medicine (INM) and the Institute for Advanced Simulation (IAS) in Jülich and RWTH Aachen talked about the simulation technology he and his colleagues are using for brain-scale neuronal networks. He presented a model of a local cortical network to explain its basic dynamical properties. The model is severely underconstrained, since only 50% of the connections are local; the functional loops are only closed at brain scale. Production code for networks of 10^8 neurons is already available, while the code for 10^9 neurons is still under development. The research team made no compromise on generality, but memory is the limiting factor. With short run times, supercomputers have revealed themselves as a discovery machine for neuroscience. Finally, Markus Diesmann also presented some concepts for exascale computing.

Markus Diesmann started off with the fundamental interactions. A current injection into a pre-synaptic neuron causes excursions of the membrane potential. A supra-threshold excursion causes a spike to be transmitted to the post-synaptic neuron, which responds with a small excursion of its own potential after a delay. Inhibitory neurons (20% of the population) cause a negative excursion. Each neuron receives input from about 10,000 other neurons, causing large fluctuations of the membrane potential. The resulting emission rate is 1 to 10 spikes per second, as Markus Diesmann showed the audience.
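
As a rough illustration of these dynamics, the sketch below simulates a single leaky integrate-and-fire neuron driven by a fluctuating input current; all parameter values are generic textbook numbers, not those of the presented model.

```python
import numpy as np

# Illustrative leaky integrate-and-fire neuron (forward-Euler integration).
tau_m   = 10.0    # membrane time constant (ms)
V_rest  = -70.0   # resting potential (mV)
V_th    = -55.0   # spike threshold (mV)
V_reset = -70.0   # reset potential after a spike (mV)
R_m     = 10.0    # membrane resistance (MOhm)
dt      = 0.1     # integration time step (ms)

rng = np.random.default_rng(1)
V, spikes = V_rest, []
for step in range(int(1000.0 / dt)):        # 1 s of simulated time
    # Fluctuating input current (nA): a crude stand-in for the summed
    # effect of ~10,000 excitatory and inhibitory presynaptic neurons.
    I = rng.normal(1.2, 2.0)
    V += dt / tau_m * (-(V - V_rest) + R_m * I)   # membrane update
    if V >= V_th:                                 # supra-threshold: spike
        spikes.append(step * dt)
        V = V_reset

print(f"{len(spikes)} spikes in 1 s of simulated time")
```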

The speaker elaborated on the feasibility and the structural constraints. He showed a minimal layered cortical network model of 1 mm³ with 1 billion synapses and 100,000 neurons. There are two populations of neurons per layer, with a laterally homogeneous connectivity. There is consistency in the connection probabilities, explained Markus Diesmann. A correction for the sampling radius is applied, using a Gaussian model of the distance dependence.
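
The sampling-radius correction can be sketched as follows: if the connection probability falls off with lateral distance as a Gaussian, then the probability an experiment reports depends on the radius within which neuron pairs were sampled. The function names and parameter values below are illustrative assumptions, not those of the model.

```python
import numpy as np

def conn_prob(r, c0, sigma):
    """Gaussian distance dependence: connection probability of a neuron
    pair at lateral distance r; c0 is the zero-distance probability."""
    return c0 * np.exp(-r**2 / (2.0 * sigma**2))

def measured_prob(R, c0, sigma, n=100000):
    """Monte Carlo estimate of the mean probability an experiment would
    report when one neuron sits at the centre of a disc of radius R and
    its partners are sampled uniformly within that disc."""
    r = R * np.sqrt(np.random.rand(n))   # uniform in the disc: p(r) ~ r
    return conn_prob(r, c0, sigma).mean()

c0, sigma = 0.14, 0.15                   # illustrative values (prob., mm)
for R in (0.1, 0.2, 0.3):
    print(f"sampling radius {R} mm -> reported probability "
          f"{measured_prob(R, c0, sigma):.3f}")
```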

The Diesmann team has set up a collaboration, called the NEST Initiative. Its major goals are to systematically publish new simulation technology and to produce public releases under the GPL. It is a collaboration of several labs, running since 2001, with a registered society since 2012. The partners teach in international advanced courses, and the core simulation technology is used in the Human Brain Project.
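
For readers unfamiliar with NEST, a minimal sketch of its Python workflow is shown below: a toy population of integrate-and-fire neurons with Poisson drive and sparse recurrence. It assumes the NEST 2.x-era interface (for instance, the recorder was called spike_detector; later versions renamed it spike_recorder), and all parameter values are illustrative.

```python
import nest                     # Python interface of the NEST simulator

nest.ResetKernel()

# Toy stand-in for the published microcircuit, just to show the workflow.
neurons = nest.Create("iaf_psc_alpha", 100)    # integrate-and-fire neurons
noise = nest.Create("poisson_generator", params={"rate": 15000.0})
rec = nest.Create("spike_detector")            # "spike_recorder" in NEST 3

nest.Connect(noise, neurons, syn_spec={"weight": 10.0})    # external drive
nest.Connect(neurons, neurons,                             # sparse recurrence
             conn_spec={"rule": "fixed_indegree", "indegree": 10},
             syn_spec={"weight": 2.0, "delay": 1.5})
nest.Connect(neurons, rec)

nest.Simulate(1000.0)                          # 1 s of biological time
print(nest.GetStatus(rec, "n_events")[0], "spikes recorded")
```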

Markus Diesmann showed the activity of the local cortical microcircuit. Taking into account the layer- and neuron-type specific connectivity is sufficient to reproduce the experimentally observed asynchronous-irregular spiking of neurons, the higher spike rate of inhibitory neurons, and the correct distribution of spike rates across layers.

The speaker also showed the response to transient inputs. The researchers have built a hypothesis on the cortical flow of activity: there is a handshaking between layers. This constitutes a building block for functional studies, as well as for mesoscopic studies.

Markus Diesmann presented a few pictures of the brain-scale connectivity, showing that a major part of the synapses is missing in the local cortical network and that many synapses are also missing in the cortical area network. He criticized the model for these constraints.

The speaker presented the architecture of the human cortex as a network of networks with at least three levels of organisation, namely the connectivity of the local microcircuit; the within-area connectivity with space constant; and the long-range connections between areas.

The brain-scale networks provide a substrate for mesoscopic measures such as the local field potential and voltage-sensitive dyes, and for macroscopic measures such as EEG, MEG, and fMRI resting-state networks. The researchers are now connecting the microscopic models to the imaging data. The next steps consist of developing efficient wiring routines for spatially structured networks and of constructing the mesoscopic measures, explained Markus Diesmann.

The researchers have tried to scale up to networks of 10^9 neurons. The scale-up on the K computer was guided by three milestones: porting the NEST software to K; reaching a scale of 10^8 neurons; and an attempt towards brain scale.

The scale of 10^8 neurons is relevant because it matches the size of the largest cortical area, and, thanks to the co-development with the K computer, it enabled the researchers to visualize the cortex model respecting the relative sizes, Markus Diesmann told the audience.

The speaker explained the characteristics of brain simulations. The memory overhead increases with the number of cores. It is the memory, not the simulation time, that limits the network size. The intention is to use the full memory resources, in a maximum-filling scaling. The analysis is based on a mathematical model of memory consumption: at different scales, different components of the software dominate the memory consumption.
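
The flavour of such a memory model can be sketched as follows: per-core memory decomposes into a fixed kernel base, a per-neuron infrastructure term that every core pays, and a per-connection term that shrinks with the number of cores. The coefficients below are placeholders, not the published values.

```python
def memory_per_core(N, K, M, m0=1e9, m_n=8.0, m_c=50.0):
    """Estimated bytes per core for N neurons with K synapses each,
    distributed over M cores.
    m0  -- fixed base memory of the simulation kernel (bytes)
    m_n -- per-neuron infrastructure held on every core (bytes)
    m_c -- per-connection cost, paid only on the core owning the target
    """
    return m0 + m_n * N + m_c * (N * K) / M

# At small M the connection term dominates; at very large M the
# per-neuron infrastructure term, which does not shrink with M, takes over.
for M in (1e3, 1e4, 1e5):
    gb = memory_per_core(N=1e8, K=1e4, M=M) / 1e9
    print(f"{int(M):>6} cores: {gb:5.1f} GB per core")
```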

Markus Diesmann showed the memory layout of the 3G and 4G kernels. The 3G memory layout accounts for sparseness in the neuronal and connection data structures. In the 4G memory layout, the data structures also account for the heterogeneity of synaptic dynamics. On more than 10,000 cores, neurons with few local targets cause a severe overhead; a novel adaptive data structure copes with such short target lists. The researchers do not want to compromise on generality, the speaker stated.
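
The idea behind such an adaptive structure can be sketched as follows: on very many cores, most source neurons have zero or one local target per core, so allocating a full container per neuron wastes memory. The sketch stores nothing for zero targets, a bare entry for one, and promotes to a list only in the rare longer cases; the actual NEST kernel is C++ and considerably more involved.

```python
class TargetTable:
    """Adaptive per-source target storage (illustrative only)."""

    def __init__(self):
        self._table = {}          # source id -> single target or list

    def add(self, source, target):
        entry = self._table.get(source)
        if entry is None:
            self._table[source] = target           # common case: one target
        elif isinstance(entry, list):
            entry.append(target)                   # rare case: long list
        else:
            self._table[source] = [entry, target]  # promote to a list

    def targets(self, source):
        entry = self._table.get(source)
        if entry is None:
            return []
        return entry if isinstance(entry, list) else [entry]

tt = TargetTable()
tt.add(7, 42); tt.add(7, 43); tt.add(9, 11)
print(tt.targets(7), tt.targets(9), tt.targets(5))   # [42, 43] [11] []
```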

Markus Diesmann also showed how to measure scalability. A faster element update leads to worse scaling, as communication then dominates the runtime already at a lower number of cores. The other way round, better scaling can be achieved by using an algorithm with a slower element update.

The researchers are confronted with limited memory resources: the network just fits on M cores. When the researchers improve the memory consumption, they can fit a larger network on the same M cores. Communication then only dominates at a larger number of cores, which yields better scaling. In the extreme case, the same network can be simulated faster on fewer cores, as the speaker showed.
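
A toy runtime model makes this trade-off concrete: total time per simulated interval is a perfectly parallel update term plus a communication term that grows with the number of cores M. All coefficients are illustrative assumptions.

```python
def runtime(M, t_update, N=1e8, t_comm=1e-6):
    """Seconds per simulated interval: parallel update plus communication."""
    return t_update * N / M + t_comm * M

for label, t_up in (("slow update", 2e-4), ("fast update", 5e-5)):
    row = ", ".join(f"{int(M)} cores: {runtime(M, t_up):8.2f} s"
                    for M in (1e3, 1e4, 1e5))
    print(f"{label}: {row}")

# The fast update is quicker everywhere in absolute terms, but its curve
# flattens earlier: communication starts to dominate at fewer cores, so
# its scaling looks worse even though it is never slower.
```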

The aim is to generate full-scale models at cellular and synaptic resolution, with maximum-filling benchmarks. One percent of the human brain can already be simulated on petascale computers. Supercomputers are required to aggregate the memory for the synapses and to organize the interaction, Markus Diesmann stated.
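
The back-of-the-envelope arithmetic behind that statement, taking the figures from the talk (about 10^9 neurons for one percent of the brain, each with about 10,000 synapses) and an assumed per-synapse storage cost:

```python
neurons = 1e9              # ~1% of the human brain, as in the talk
syn_per_neuron = 1e4       # inputs per neuron, as in the talk
bytes_per_synapse = 24     # weight, delay, target pointer -- an assumption

total_synapses = neurons * syn_per_neuron
total_bytes = total_synapses * bytes_per_synapse
print(f"{total_synapses:.0e} synapses -> {total_bytes / 2**40:.0f} TiB "
      f"of synapse storage, feasible only in aggregated petascale memory")
```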

On the visualization side, such complex and massively parallel data require new visualization and analysis tools, concluded Markus Diesmann.

Leslie Versweyveld
