Katrin Amunts started by explaining that brain research and computing have been closely related for a long time, going back to Leonardo da Vinci, who drew the human brain and also drafted a calculating machine. The first version of the Perceptron, one of the first artificial neural networks, dates from the 1950s. Today we have, on the one hand, very high-resolution reconstructions of the human brain and, on the other, large European initiatives driving the development of new computing technologies, such as the DEEP computers.
The Human Brain Project flagship aims to bring all of this together for a better understanding of the human brain.
Katrin Amunts sees the human brain as one of the most challenging research topics, not only of the 21st century. She thinks it is important to understand the human brain because it helps us to understand what makes us human. But it is very practical from an economic point of view as well, as it helps to drive the development of new technologies and to develop better therapies for patients.
To understand how all the different levels of the brain, down to the Angstrom level, are organised and work together is a real challenge that needs a very large project like the Human Brain Project flagship.
Jeannette Hellgren Kotaleski from KTH in Sweden continued the session by trying to exemplify what it would mean to simulate the brain. She said that brain injuries and brain diseases account for a third of the healthcare costs in the Western world, and that is why we need to understand the brain.
The brain is an electro-chemical organ with about 100 billion neurons, and each neuron connects to several thousand other neurons. Still, our brain runs on very little energy: it needs just about 30 W, approximately the same as a traditional light bulb.
Data in the field of brain research is fragmented over a lot of different disciplines. The real challenge is to integrate these and to create knowledge and understanding about the brain from the data. One way to do that is to use modelling and simulation. The brain runs many dynamic, complex and interactive processes in parallel, and at some point we need to simulate it on various scales. No single lab can simulate the brain on its own, so it has to be a truly collaborative effort. This means we have to be able to collaborate, and we have to agree on how to model and simulate the brain and on how to validate the model results.
If we want to simulate the brain at the cellular level, we have to represent all these 100 billion neurons and their connections. But we cannot measure all of them. A very important part of building models of the brain is therefore to find the principles and algorithms that can live with sparse data and predict what is missing.
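To make cellular-level modelling concrete, here is a hedged toy sketch in Python: a tiny leaky integrate-and-fire network with random sparse connectivity. All names and parameter values are illustrative choices, not values from the project or from biology.

```python
import numpy as np

# Toy leaky integrate-and-fire (LIF) network: a miniature illustration
# of cellular-level simulation. All parameters are illustrative, not
# biological measurements.
rng = np.random.default_rng(0)
n_neurons = 100            # the real brain has ~100 billion
dt, t_steps = 1e-3, 500    # 1 ms step, 0.5 s of simulated time
tau, v_thresh, v_reset = 20e-3, 1.0, 0.0

# Sparse connectivity: each neuron contacts roughly 10% of the others.
weights = (rng.random((n_neurons, n_neurons)) < 0.1) * 0.02

v = np.zeros(n_neurons)    # membrane potentials
spike_count = 0
for _ in range(t_steps):
    spikes = v >= v_thresh                 # neurons that fired
    v[spikes] = v_reset                    # reset after a spike
    drive = rng.random(n_neurons) * 0.12   # noisy external input
    # Leaky integration plus recurrent input from spiking neighbours.
    v += (dt / tau) * (-v) + drive + weights @ spikes
    spike_count += int(spikes.sum())

print("total spikes:", spike_count)
```

Even this miniature shows the scaling problem: the connectivity matrix alone grows quadratically with the number of neurons, which is why full-brain simulation points towards exascale machines.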
With a data-driven approach we might simulate receptor-induced cascades to predict the behaviour of synaptic target molecules, which in turn can help predict how network activity leads to synaptic learning. In a hypothesis-driven approach we might instead pick our favourite phenomenological plasticity rule and apply it in simulations to see if one can predict, for instance, receptive field formation.
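The hypothesis-driven route can be sketched in a few lines of Python. The sketch below applies Oja's Hebbian plasticity rule (a stand-in chosen purely for illustration, not necessarily a rule used at KTH) to inputs with two correlated channels; the synaptic weights grow to pick out exactly that correlated structure.

```python
import numpy as np

# Hypothesis-driven sketch: pick a phenomenological plasticity rule
# (here Oja's Hebbian rule, chosen for illustration) and simulate it
# to see what input structure the synapses learn to represent.
rng = np.random.default_rng(1)
n_inputs, eta = 10, 0.01
w = rng.normal(scale=0.1, size=n_inputs)   # initial synaptic weights

for _ in range(5000):
    x = rng.normal(size=n_inputs) * 0.1    # weak independent noise
    shared = rng.normal()                  # signal shared by channels 0 and 1
    x[0] += shared
    x[1] += shared
    y = w @ x                              # linear "neuron" output
    w += eta * y * (x - y * w)             # Oja's rule: Hebb + normalisation

# The weights for the two correlated channels come to dominate the rest.
print(np.round(np.abs(w), 2))
```

This is the pattern of the approach: postulate a rule, simulate it, and check whether structure like receptive fields emerges from the input statistics.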
At KTH in Stockholm there are several experimental and modelling groups interested in the basal ganglia as a brain structure.
The basal ganglia are involved in decision making and action selection, form the biological substrate for reward learning, and are of interest for many neurological and neuropsychiatric disorders.
Next, Jeannette Hellgren Kotaleski reminded the audience that we can indeed almost simulate the brain; we are close. With exascale computing we could actually model the human brain at the cellular level. Of course, we do not yet have all the data that is needed for that, but there are many initiatives around the world producing real data at the cellular level for the whole brain.
How much will we eventually understand of the human brain? Jeannette Hellgren Kotaleski said this is a key question. When we model the brain at the cellular level, we can understand things that relate to electrical activity. But to really understand the brain we also have to add the subcellular level; we need to understand how the brain adjusts to its inputs. She cited Emerson Pugh: "If the human brain were so simple that we could understand it, we would be so simple that we couldn't." But for sure we already understand a lot, and we are progressing fast.
Next, Viktor Jirsa from Marseille University explained how to get from HPC brain models to clinical applications. He gave an impression of how high performance computing is used to implement a workflow that takes us from individualized patient images, through mathematics, all the way back to the patient's bedside in clinical applications: personalized medicine. He did this with the example of epilepsy. Epilepsy is a key problem: 1% of the human population is affected by it. One third of these patients are pharmaco-resistant, which means the only hope that remains for them is invasive intervention such as resective surgery. Hence Viktor Jirsa's focus on drug-resistant epilepsy.
Epilepsy expresses itself as fast oscillatory discharges that can be measured with so-called stereotactic EEG, for which electrodes are inserted into the brain. The electrodes, localized using CT images, are about 10 centimetres long and carry 10 to 15 contact points spaced about 2-3 millimetres apart, which allows us to measure the neural electric activity in the brain.
Right from the start the question arises: where do we have to place them? In a three-dimensional representation of the patient's brain we reconstruct the electrodes, and at each contact point we are able to measure electric activity. The key question for the surgeon is: what is the epileptogenic zone, that is, which tissue is epileptic? This is the target for surgery, where the tissue will be resected.
What we want to do in the Human Brain Project is to establish a workflow that gets us all the way from the patient images directly into the surgery room. This workflow starts with the neuroimaging of the patient to construct a structural, geometric framework: an avatar. Once we have the avatar, we need to equip it with mathematical equations that model the localized activity within each individual brain region. This gives us a functional brain. It is still a generic brain: even though some personalization comes directly from these images, we still need to refine it with the patient's network pathology. We do this by what we call personalizing the brain with patient-specific information, using the latest machine learning techniques running on our HPC clusters.
We can use this personalized brain as an independent, autonomous in silico platform to explore different intervention possibilities. All the patient's properties express themselves in the mathematical parameters, so we can change the parameters to mimic interventions on the brain model and trace out parameter spaces. We perform parameter sweeps over our computational structures in order to find the best ways to intervene, and then advise the surgeon on the procedure to be performed in the clinical routine.
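A parameter sweep of this kind can be sketched as follows. This is a hedged toy in Python, not the actual HBP model equations: a single bistable node whose "excitability" parameter is swept to find where a small perturbation latches into a self-sustained, seizure-like active state.

```python
import numpy as np

def settles_high(excitability, steps=2000, dt=0.01):
    """Integrate a toy bistable node, dx/dt = x - x**3 + excitability,
    from a small kick, and report whether it latches into the active
    state. An illustrative stand-in, not the project's model equations."""
    x = 0.1   # small perturbation away from rest
    for _ in range(steps):
        x += dt * (x - x**3 + excitability)
    return x > 0.5

# Sweep the parameter, as one would do on an HPC cluster (there in
# parallel and at far finer resolution) for a personalised patient model.
for e in np.linspace(-0.6, 0.6, 13):
    print(f"excitability={e:+.2f} -> seizure-like state: {settles_high(e)}")
```

The sweep reveals a threshold in the parameter: below it the node returns to rest, above it the perturbation self-sustains. In the personalized setting, such thresholds are what make one intervention strategy preferable to another.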
Personalization occurs here through the structural images and through the machine learning that is fingerprinting the patient.
For example, using MRI we can extract the cortical and subcortical surfaces of the brain and decompose them into individual brain areas. Using diffusion tensor imaging, we can also extract the connectivity. This allows us to put everything together into the avatar.
We do not have the computational capacity to run complete models, so we use mathematical tricks borrowed from statistical physics and non-linear dynamics. These reduced representations work quite well. That allows us to put the patient-specific structure together with the mathematics and develop a self-contained, autonomous in silico platform representing a patient-specific brain model.
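The idea can be sketched in miniature in Python: reduced two-variable node models (a generic excitable system here, not the actual equations used in the project) coupled through a connectivity matrix that in practice would come from the patient's diffusion imaging; a random matrix stands in for it below.

```python
import numpy as np

# Reduced node models coupled through a connectome: a hedged miniature
# of the in silico brain platform. The random matrix stands in for the
# patient-derived connectivity; the two-variable node is a generic
# FitzHugh-Nagumo-like system, not the project's actual equations.
rng = np.random.default_rng(2)
n_regions = 8
connectivity = rng.random((n_regions, n_regions))
connectivity *= rng.random((n_regions, n_regions)) < 0.3   # sparsify
np.fill_diagonal(connectivity, 0.0)   # no self-connections

x = rng.normal(scale=0.1, size=n_regions)   # fast activity variable
z = np.zeros(n_regions)                     # slow recovery variable
dt, coupling = 0.01, 0.1
for _ in range(5000):
    # Each region: local excitable dynamics + input from connected regions.
    dx = x - x**3 / 3 - z + coupling * (connectivity @ x)
    dz = 0.05 * (x - z)
    x, z = x + dt * dx, z + dt * dz

print("regional activity:", np.round(x, 2))
```

Each region costs only two state variables instead of millions of cells, which is exactly what makes whole-brain, patient-specific simulation tractable on today's machines.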
This is a concept, a framework, and the concepts are validated in animal models, where we can reconstruct the real connectivity. The advantage of animal studies is that we can also go into the brain of the mouse and extract the real existing fibers.
In retrospective studies we have now applied this avatar modelling to 15 patients, so we can make statements about the patient specificity of our predictions. This is a first proof of concept for personalized medicine based on this type of HPC model-based approach.
Viktor Jirsa said researchers will run the world's first clinical trial using autonomous brain network modelling, starting on the 1st of January 2019. Eleven hospitals are participating and will enrol 400 prospective patients in this trial, which means these patients will be virtualized. The researchers will provide a clinical report with an estimation of the epileptogenic zone and feed it back to the clinicians, who will use it as an additional tool in their decision making; it will go directly into the surgery. Half of the patients will benefit from the virtual brain prediction and the other half will not, because correct statistical validation is needed in order to clearly demonstrate the predictive value.
Dassault is a partner in this project and will, during the five years, develop a first prototype of the entire workflow and the modelling framework in order to explore the possibilities and translate them into applications.