Markus Diesmann started off with the fundamental interactions. Current injection into a pre-synaptic neuron causes excursions of the membrane potential. When the potential exceeds the threshold, a spike is transmitted to the post-synaptic neuron, which responds with a small excursion of its potential after a delay. Inhibitory neurons (20% of the population) cause a negative excursion. Each neuron receives input from 10,000 other neurons, causing large fluctuations of the membrane potential, and emits 1 to 10 spikes per second, as Markus Diesmann showed the audience.
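The single-neuron picture described here can be sketched in a few lines. The sketch below treats the summed input from the many pre-synaptic neurons as a Gaussian current (a diffusion approximation, chosen here for speed); all parameter values are illustrative assumptions, not those of the presented model.

```python
import math
import random

def simulate_lif(n_inputs=10_000, rate_hz=5.0, t_sim_ms=1_000.0, dt_ms=0.1):
    """Leaky integrate-and-fire neuron driven by many inputs.

    The 10,000 inputs (80% excitatory, 20% inhibitory) are lumped into a
    mean drive plus Gaussian fluctuations; parameters are illustrative.
    Returns the output rate in spikes per second.
    """
    tau_m, v_reset, v_thresh = 10.0, 0.0, 15.0   # ms, mV, mV (assumed)
    w_exc, w_inh = 0.1, -0.35                    # mV per input spike (assumed)
    n_exc, n_inh = int(0.8 * n_inputs), int(0.2 * n_inputs)
    nu = rate_hz / 1000.0                        # input spikes per ms per source
    mu = (n_exc * w_exc + n_inh * w_inh) * nu            # mean input per ms
    var = (n_exc * w_exc**2 + n_inh * w_inh**2) * nu     # input variance per ms
    v, n_spikes = v_reset, 0
    for _ in range(int(t_sim_ms / dt_ms)):
        noise = random.gauss(0.0, math.sqrt(var * dt_ms))
        v += dt_ms * (-v / tau_m + mu) + noise   # Euler-Maruyama step
        if v >= v_thresh:                        # threshold crossing -> spike
            n_spikes += 1
            v = v_reset                          # reset after the spike
    return n_spikes / (t_sim_ms / 1000.0)
```

Because the inhibitory weights nearly balance the excitatory drive, the membrane potential shows the large fluctuations mentioned in the talk, and spikes are fluctuation-driven rather than regular.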
The speaker elaborated on the feasibility and the structural constraints. He showed a minimal layered cortical network model of 1 mm³ with 1 billion synapses and 100,000 neurons. There are two populations of neurons per layer, and the connectivity is laterally homogeneous. The connection probabilities are consistent across studies, explained Markus Diesmann; a correction for the sampling radius is applied using a Gaussian model of distance dependence.
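The sampling-radius correction can be illustrated with a small sketch: if the connection probability falls off with lateral distance following a Gaussian profile, the probability measured within a sampling disc depends on the disc's radius. The profile shape and all parameter values below are illustrative assumptions.

```python
import math
import random

def conn_prob(d_um, p0=0.14, sigma_um=300.0):
    """Gaussian distance dependence of the connection probability
    (peak value p0 and space constant sigma_um are assumed, not measured)."""
    return p0 * math.exp(-d_um**2 / (2.0 * sigma_um**2))

def mean_prob_in_disc(radius_um, p0=0.14, sigma_um=300.0, n=100_000):
    """Average connection probability over a sampling disc of the given
    radius, estimated by Monte Carlo over uniformly drawn pair distances."""
    total = 0.0
    for _ in range(n):
        d = radius_um * math.sqrt(random.random())  # area-weighted radius
        total += conn_prob(d, p0, sigma_um)
    return total / n
```

A small sampling disc yields a much higher average probability than a large one, which is why reported probabilities must be corrected for the sampling radius before they can be combined into one consistent model.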
The Diesmann team has set up a collaboration called the NEST Initiative. Its major goals are to systematically publish new simulation technology and to produce public releases under the GPL. It has been a collaboration of several labs since 2001, with a registered society since 2012. The partners teach in international advanced courses, and the core simulation technology is used in the Human Brain Project.
Markus Diesmann showed the activity of the local cortical microcircuit. Taking the layer- and neuron-type-specific connectivity into account is sufficient to reproduce the experimentally observed asynchronous-irregular spiking of neurons, the higher spike rate of inhibitory neurons, and the correct distribution of spike rates across layers.
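Two standard measures behind statements like "asynchronous-irregular spiking" can be computed as follows; the helper below is a generic sketch, not the analysis pipeline used by the researchers.

```python
import math

def rate_and_cv(spike_times_ms, t_sim_ms):
    """Firing rate (spikes/s) and coefficient of variation (CV) of the
    inter-spike intervals. A CV near 1 indicates irregular, Poisson-like
    spiking; a CV near 0 indicates a regular, clock-like spike train."""
    rate = len(spike_times_ms) / (t_sim_ms / 1000.0)
    isis = [b - a for a, b in zip(spike_times_ms, spike_times_ms[1:])]
    if len(isis) < 2:
        return rate, float("nan")
    mean = sum(isis) / len(isis)
    var = sum((x - mean) ** 2 for x in isis) / len(isis)
    return rate, math.sqrt(var) / mean
```

Applied per neuron and per layer, such measures allow a direct comparison between the simulated microcircuit and the experimentally recorded spike statistics.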
The speaker also showed the response to transient inputs. The researchers have built a hypothesis on the cortical flow of activity: there is a handshaking between layers. This constitutes a building block for functional studies, as well as for mesoscopic studies.
Markus Diesmann presented a few pictures of the brain-scale connectivity, showing that a major part of the synapses is missing in the local cortical network and that many synapses are missing in the cortical-area network. He pointed out these limitations of the model.
The speaker presented the architecture of the human cortex as a network of networks with at least three levels of organisation, namely the connectivity of the local microcircuit, the within-area connectivity with a space constant, and the long-range connections between areas.
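The three organisational levels can be encoded as distance-dependent connection regimes; the function below is an illustrative sketch, and the distance thresholds are assumptions, not values from the talk.

```python
def connection_rule(distance_mm):
    """Pick the organisational level governing a connection, based on the
    lateral distance between source and target neuron (thresholds assumed)."""
    if distance_mm < 0.5:
        return "local_microcircuit"   # layer- and type-specific probabilities
    if distance_mm < 5.0:
        return "within_area"          # Gaussian decay with a space constant
    return "between_areas"            # long-range connections between areas
```

A wiring routine for a spatially structured network would dispatch on such a rule to decide which connectivity model applies to each neuron pair.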
The brain-scale networks provide a substrate for mesoscopic measures such as local field potential and voltage sensitive dyes, and for macroscopic measures such as EEG, MEG, and fMRI resting state networks. The researchers are now connecting the microscopic models to the imaging data. The next steps consist in developing efficient wiring routines for spatially structured networks and in constructing mesoscopic measures, explained Markus Diesmann.
The researchers have tried to scale up to networks of 10^9 neurons. The scale-up on the K computer was guided by three milestones: porting the NEST software to K; reaching a scale of 10^8 neurons; and attempting brain scale.
The scale of 10^8 neurons was relevant for the size of the largest area and, thanks to the co-development with the K computer, enabled the researchers to visualize the cortex model respecting the relative sizes, Markus Diesmann told the audience.
The speaker explained the characteristics of brain simulations. The memory overhead increases with the number of cores. It is the memory, not the simulation time, that limits the network size. The intention is to use the full memory resources for maximum-filling scaling. The analysis is based on a mathematical model of memory consumption. At different scales, different components of the software dominate the memory consumption.
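A memory-consumption model of the kind described can be sketched as a sum of components that scale differently with the total network size N and the number of cores M. The decomposition below is an illustrative assumption; all byte counts are made up for the example.

```python
def memory_per_core_bytes(n_neurons, n_cores, k_syn=10_000,
                          b_base=2**30, b_node=16,
                          b_neuron=1_000, b_syn=48):
    """Per-core memory as a sum of differently scaling components.

    infra:    O(N)    bookkeeping replicated on every core
    neurons:  O(N/M)  state of the locally simulated neurons
    synapses: O(NK/M) locally stored synapses (K inputs per neuron)
    All byte counts are illustrative assumptions.
    """
    infra = b_base + n_neurons * b_node
    neurons = (n_neurons / n_cores) * b_neuron
    synapses = (n_neurons * k_syn / n_cores) * b_syn
    return infra + neurons + synapses
```

At small core counts the synapse term dominates; at very large core counts the O(N) infrastructure term takes over, which illustrates why different software components dominate the memory consumption at different scales.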
Markus Diesmann showed the memory layout of the 3G and 4G kernel. The 3G memory layout accounts for sparseness in the neuronal and connection data structures. In the 4G memory layout, data structures account for heterogeneity of synaptic dynamics. For more than 10,000 cores, neurons with few local targets cause a severe overhead. A novel adaptive data structure copes with short target lists. The researchers do not want to compromise on generality, the speaker stated.
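The idea of an adaptive data structure for short target lists can be illustrated with a toy container: while the list is short, entries are kept in a compact form, and only past a cutoff does the structure grow into a general-purpose list. This layout is a stand-in for the purpose of illustration, not the actual NEST data structure.

```python
class AdaptiveTargets:
    """Toy adaptive target list: compact tuple while short, list when long.
    The cutoff value is an illustrative assumption."""

    CUTOFF = 3

    def __init__(self):
        self._targets = ()  # compact immutable storage for short lists

    def add(self, target):
        if isinstance(self._targets, tuple):
            if len(self._targets) < self.CUTOFF:
                self._targets = self._targets + (target,)
            else:
                # grow into a general-purpose list past the cutoff
                self._targets = list(self._targets) + [target]
        else:
            self._targets.append(target)

    def targets(self):
        return list(self._targets)
```

On more than 10,000 cores most neurons have only a few local targets, so avoiding a heavyweight container per neuron in exactly this way removes the severe overhead without compromising on generality.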
Markus Diesmann also showed how to measure scalability. A faster element update leads to worse scaling, because communication dominates the runtime already at fewer cores. Conversely, better scaling can be achieved by using an algorithm with a slower element update.
The researchers are confronted with limited memory resources: the network just fits on M cores. When they reduce the memory consumption, a larger network fits on M cores. Communication then only dominates at a larger number of cores, which provides better scaling. In the extreme case, the same network can be simulated faster on fewer cores, as the speaker showed.
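The trade-off described in the last two paragraphs can be made concrete with a simple runtime model: update work parallelizes perfectly, while communication cost grows with the number of cores M. The cost model and its constants are illustrative assumptions.

```python
def runtime(n_cores, t_update_total=1000.0, t_comm_per_core=0.05):
    """Total runtime = parallel update work + communication that grows
    with the number of cores (illustrative cost model)."""
    return t_update_total / n_cores + t_comm_per_core * n_cores

def best_core_count(candidates, **kw):
    """Core count with the smallest modeled runtime."""
    return min(candidates, key=lambda m: runtime(m, **kw))
```

In this model, speeding up the element update (smaller `t_update_total`) moves the optimum to fewer cores, because communication starts to dominate earlier; this is the sense in which a faster update yields worse scaling, and a slower update better scaling.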
The aim is to generate full-scale models at cellular and synaptic resolution, with maximum-filling benchmarks. One percent of the human brain can be simulated on petascale computers. Supercomputers are required to aggregate the memory for the synapses and to organize the interaction, Markus Diesmann stated.
Regarding visualization, such complex and massively parallel data require new visualization and analysis tools, concluded Markus Diesmann.