
Primeur weekly 2016-07-25

Special

SpiNNaker and BrainScaleS neuromorphic systems ready for non-expert use in stochastic inference computing ...

Beyond Moore's Law panelists address business maturity of new technologies, future developments, neuromorphic chip training, and memristors ...

Focus

D-Wave works with its customers Lockheed, Google and Los Alamos to design better quantum software ...

Altair's Bill Nitzberg to present PBS Pro open source license version, PBS Simulator and PBS Cloud Manager ...

Quantum computing

Russian physicists discover a new approach for building quantum computers ...

RMIT researchers make leap in measuring quantum states ...

Focus on Europe

Guiding EU researchers along the last mile to Open Digital Science ...

Digital Humanities and Urban Climate proposals win NLeSC-Lorentz Workshop competition ...

European Horizon 2020 Work Programme update supports competitiveness through open science ...

6th Irish Supercomputer List shows Irish HPC capacity doubles with new no. 1 and three new TOP500-class machines ...

PRACE to look for Peer-Review Officer ...

7th International HPC Summer School took place in Ljubljana, Slovenia ...

Middleware

Inria joins the OpenMP ARB ...

Hardware

Gigabyte announces official release of production-ready Cavium ThunderX-based servers ...

The University of Tokyo selects Mellanox EDR InfiniBand to accelerate its newest supercomputer ...

Smallest hard disk to date writes information atom by atom ...

A mini-antenna for the data processing of tomorrow ...

Hangzhou C-SKY Microsystems joins EEMBC Executive Board ...

Electron spin control: Levitated nanodiamond is research gem ...

The Scripps Research Institute leverages powerful end-to-end DDN storage to help reveal secrets to new medical treatments ...

Applications

MSC Software partners with Italian Campania Region Technological Aerospace District for the development of aeronautical programmes ...

Deloitte Advisory Cyber Risk Services and Cray offer advanced Cyber Reconnaissance and Analytics services ...

Strathclyde mathematician wins prize for research into speeding up stroke diagnosis ...

Underlying molecular networks suggest new targets to combat brain cancer ...

An accelerated pipeline to open materials research ...

Rice wins interdisciplinary Big Data grant ...

Study uses text-mining to improve market intelligence on startups ...

The Cloud

The HNSciCloud Pre-Commercial Procurement tender is out: you can bid now ...

CSC and IBM expand strategic alliance with collaboration utilizing IBM Cloud for z to enable clients' move to Cloud ...

Beyond Moore's Law panelists address business maturity of new technologies, future developments, neuromorphic chip training, and memristors

22 Jun 2016 Frankfurt - The ISC 2016 session titled "Scaling Beyond the End of Moore's Law" ended with a panel discussion featuring Josh Fryman from Intel, Damian Steiger from ETH Zurich and Karlheinz Meier from the Ruprecht-Karls University of Heidelberg. The three session speakers answered questions from the audience.

The first question was a business question. At this point, we have possibly reached saturation in the PC market. Without a business case, how can Intel go forward and make the 10-billion investments that are needed to push the current limits of CMOS? It is a costly proposition, and Josh Fryman had not talked about a business model. For Damian Steiger and Karlheinz Meier there was a related question: how do you see the technology you have developed in the lab move forward and become a computing business, if at all?

Josh Fryman: Let me give you a broad view of something that is currently going on, because it repeats a pattern. Computers first came out as centralized machines, with lots of edge processing using pencil, paper and punch cards around the machine. Then PCs came out, decentralizing computing and creating growth in the industry outside. Data centres have started to bring it back in. We have cell phones, but cell phones are actually starting to look like dumb terminals to the real mainframes; we have this big tower in the middle. We are now moving into another pattern, which is the disaggregation of the data centre, because we are going to the fully connected model. There is no system you can design that gives you the bandwidth to do all the processing in the data centre. You see more and more compute pushed out to the edges, which will look different. This is where machine learning comes in. How do you do signal attenuation and selective attention at the edges, and pass this attenuated, small-signal information to the data centre aggregators for higher-level processing, where we can make our decisions? The question is really good. Instead of the research engineer saying: "That is not exactly my focus", I would say that there is this cyclic pattern in where you spend your resources. I don't see the industry as stagnant; I see this as a shift to a very different design point.

Damian Steiger: Let me answer the question for quantum computing. Most of the work still happens in academia, and that needs to change. University life is typically built up like this: there is a professor with graduate students, and every now and then a student graduates and leaves. That is probably not the way to get fast research and development of large-scale quantum computers. That is why we are trying to identify killer applications for specific cases and quantify the economic benefit, like the fertilizer production example that I showed. That is something where you can afterwards define the scale you need and how much money you can make out of it. If we have a few more of these killer applications, I really believe that business is willing to invest in development. It probably does not take 10 billion; I guess 1 billion would be sufficient to build a small quantum computer. If you have a portfolio of 10 algorithms you want to run by that time, which can give you economic benefit, then we're good. Currently, part of the research is obviously finding new algorithms, because right now we only have a handful. As a research community, we are focusing on finding specific applications.

Karlheinz Meier: For neuromorphic computing, how do we move from the lab to the computer industry, or to applications that are really worthwhile to implement on a larger scale? Clearly, the machines we build at the moment are research devices. They have one important feature: they are extremely configurable. You can read out the activity of the network because you want to understand what is going on. These are devices built for understanding. Our idea for a long-term development, coming up with products that are more compact and really exploit the energy advantage, which is definitely there, is to give up configurability. A neuromorphic chip then just has the task of detecting certain spatial and temporal patterns. As a user, you don't want to look into your cell phone, you don't want to read the membrane potentials and look at all the spikes it is producing or the correlations; you just want it to work. This can be trained. The idea is that you use machines like that to solve interesting problems. You make a special dedicated chip that is optimized to solve a particular problem, at the price of configurability. In a way, you can see our machines as huge neural FPGAs with a lot of configurability. You could implement any network architecture in your cell phone application, but you don't want that. You want a dedicated thing that is mass-produced and does the one thing that is your goal.

John Shalf also had a question for the panel. The Exascale Programme in the US aims for a machine by 2023 and for applications delivering science by 2025. This is also the approximate end of the roadmap for silicon lithography. Given that we have eight years, where will your respective technologies, which are at various levels of maturity, be by 2020-2025?

Josh Fryman: Any major company is looking at a lot of different techniques. Quantum computing is a heavy one. Neuromorphic computing is really interesting because you can build those circuits on top of the CMOS substrate. You can simulate and do analysis. It doesn't work very well in terms of time and energy, but I can do quantum simulations and neuromorphic simulations on classic architectures. All the major companies are looking at this, and Intel is certainly doing the same, using different techniques to go forward. When we will eventually stop being able to turn the knob on transistors as we have been doing is highly debatable. It is actually being looked into. In the grand scheme of things, as to where we all will be in ten years: we will hopefully be standing on the crest of something really interesting. I don't know what that is today, because we have this horizon in maturity. Whatever the next technology is, it takes 10 to 15 years of engineering to get it to mass-scale application. We don't know what that is yet.

Damian Steiger: Where quantum computing will be in the next 10 years really depends on the next few years. I have shown the different types of qubits we have. If the Microsoft approach works and finds the Majorana qubit, that will probably be four or five orders of magnitude less susceptible to errors than what we have currently. That would be a huge leap forward. If that qubit technology doesn't work, we have to resign ourselves to what we have. There are things we haven't yet figured out how to do. For example, when you start scaling your qubits, you have to control them. Superconducting qubits are controlled by microwave pulses, and you cannot just lay millions of control lines into a fridge and try to control them. Superconducting classical logic that can control your quantum computer needs to be developed alongside. That is a very difficult task and I'm frankly not sure it will work out. Within the next 10 years, we should have a machine which is better; quantum supremacy will come sooner. At that point you can at least do a calculation which you cannot do on a supercomputer. The question is whether we will already reach this stage in the next few years, albeit that the calculation will be useless because it doesn't give us any benefit. But maybe in 10 years, we can already solve small quantum mechanical systems where we actually care about the solution.

Karlheinz Meier: In neuromorphic computing I am quite optimistic, because you can see that industry, such as Google, is already doing neuromorphic computing and running experiments. These work extremely well, for example Google's AlphaGo. But look at the time scales: the system takes a couple of minutes to react, yet it takes a year to train it on a big cluster consuming 37 kilowatts of power. Once the systems are configured, they are ready for application. The problem is that it takes forever to train the system; it costs a huge amount of time and energy, which is not very practical. Here I see that in spike-based accelerated systems, spikes contribute not only to energy efficiency but also to learning speed. The acceleration has to be done locally on the system. Currently, we have all kinds of loop experiments where one system does the training and another evaluates the result, but we have to implement local learning on the substrate. Once you can accelerate learning, there will be a breakthrough for this technology, and this may change the way we do computing fundamentally. I am not sure whether the acceleration of learning can be done on a 10-year scale. This requires ideas and intellectual input.
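The "loop experiments" Meier mentions keep the learning intelligence on a host computer that repeatedly evaluates the neuromorphic substrate. A minimal sketch of that pattern follows, with a plain NumPy stand-in for the chip; the names, the toy task and the perturbation-based update are illustrative assumptions, not the actual BrainScaleS tooling:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the neuromorphic substrate: here just a
# linear readout; in reality a spiking network evaluated on hardware.
def run_on_chip(weights, inputs):
    return inputs @ weights

def accuracy(weights, inputs, targets):
    predictions = np.sign(run_on_chip(weights, inputs))
    return np.mean(predictions == targets)

# Toy task: linearly separable binary data.
inputs = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)
targets = np.sign(inputs @ true_w)

# Hardware in the loop: the host proposes weight changes, the "chip"
# evaluates each proposal, and the host keeps whatever improves the score.
weights = rng.normal(size=8)
best = accuracy(weights, inputs, targets)
for step in range(500):
    candidate = weights + 0.1 * rng.normal(size=8)  # host-side proposal
    score = accuracy(candidate, inputs, targets)    # round trip to the chip
    if score >= best:                               # host-side decision
        weights, best = candidate, score

print(f"accuracy after hardware-in-the-loop training: {best:.2f}")
```

Every update requires a full round trip between host and device, which is exactly the bottleneck that on-substrate local learning would remove.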

The following question also addressed learning. You put a neural network on a chip and train it off-line to solve a specific task. However, learning itself is very crucial, especially for devices like sensors, to keep the system running and adaptable to new problems. Don't you think that learning is also crucial for future applications that today we can only fantasize about?

Karlheinz Meier: It would be wonderful to have the learning capability in the system that we actually use, so that the system adapts to a changing environment. People are asking for short-term, straightforward applications where you pre-train the network architecture and then run the application on a cell phone or something like that. But once we know how to implement local learning on chip, a whole range of new applications will come up. A car engine, for example, changes all the time: it ages, its performance changes, and you always want to optimize its efficiency. You can use neuromorphic computing for that, but it is a little more difficult than the examples I showed earlier on. It would be really nice to have these local learning capabilities on chip, on the system, which is difficult. But it can be done, because there are no technology barriers. This will definitely grow into a huge technology. You don't have to understand qubits or transistors; you just have to be intellectually clever to implement it. That is all that is needed. You need human brain power, not so much new technology.
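The engine example is essentially on-line adaptation to slow drift. A toy Python sketch of that idea, using a local LMS-style update rather than any neuromorphic mechanism; all constants and the drift model are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# A slowly drifting "engine" parameter that the adaptive element must track.
steps = 2000
true_param = 1.0 + np.cumsum(0.002 * rng.normal(size=steps))

estimate, lr = 0.0, 0.05
errors = []
for t in range(steps):
    x = rng.normal()                             # sensor reading
    y = true_param[t] * x + 0.1 * rng.normal()   # observed engine response
    err = y - estimate * x                       # prediction error
    estimate += lr * err * x                     # local update, no off-line retraining
    errors.append(abs(true_param[t] - estimate))

print(f"mean tracking error, first 10% of run: {np.mean(errors[:200]):.3f}")
print(f"mean tracking error, last 10% of run:  {np.mean(errors[-200:]):.3f}")
```

The point is that the update uses only locally available quantities, which is what on-chip learning would require.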

Josh Fryman: You have to change the oil every 3,000 miles in neuromorphic systems too. To use the car analogy: the brain has fluids like everything else. There is decay; things burn out and have to regrow. Some of the devices in the quantum space are burning out as well. If you are looking at the long term and you build a neuromorphic chip, you will have to deal with the fact that these things are going to lose functions.

Karlheinz Meier: Burning out is one thing, but variability is important too; it is an interesting topic. There are variabilities that you imprint on the system in the production process, and those are static: fixed-pattern noise on the system. You can learn how to handle it. Then there is temperature noise, of course, which is a nuisance for some applications; for stochastic computing, it may actually be useful. And then there is, of course, burning out: elements that simply stop working. What do you do? What we have shown is that you can actually live with a certain degree of burning out, what people call graceful degradation. Take, for example, the classification capability in the insect circuits I have shown. We have done experiments where we killed circuits randomly in the system: the performance goes down, but it does not drop to zero. You can live with it for a while. At some point it becomes unbearable, if you lose 50% or so, but clearly there is resilience and there is graceful degradation, which is an important feature.
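A toy illustration of graceful degradation, assuming nothing about the actual insect circuits: a redundant population of noisy detector units votes on a binary signal, and randomly killing units lowers accuracy gradually rather than abruptly. All numbers are made up for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "network": 200 noisy detector units voting on a binary signal;
# the redundancy is what makes degradation graceful.
n_units, n_trials = 200, 1000
signal = rng.choice([-1.0, 1.0], size=n_trials)
# Each unit sees the signal plus its own private noise.
unit_outputs = signal[None, :] + 1.5 * rng.normal(size=(n_units, n_trials))

def population_accuracy(alive_mask):
    vote = np.sign(unit_outputs[alive_mask].sum(axis=0))
    return np.mean(vote == signal)

for killed_fraction in [0.0, 0.1, 0.25, 0.5, 0.75, 0.9]:
    alive = np.ones(n_units, dtype=bool)
    dead = rng.choice(n_units, size=int(killed_fraction * n_units), replace=False)
    alive[dead] = False
    print(f"{killed_fraction:4.0%} of units killed -> "
          f"accuracy {population_accuracy(alive):.2f}")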

Somebody in the audience asked: So far, with learning on neuromorphic chips, there was a technology barrier, which was actually how to change the weights. Now we have the memristor discussion. Do you see that there is still a barrier in learning?

Karlheinz Meier: That is a very good question, let me comment on that. How to change the weights? We do that all the time. I showed the supervised case where we change the weights with hardware in the loop: a conventional computer measures the success and changes the weights according to some learning algorithm like backpropagation. That is the most boring approach, but it works very well. Then you can change the weights by unsupervised, local mechanisms like spike-timing-dependent plasticity, which people see in biology and which you could see put to use in the bird example. But if you talk about local learning, it means that the intelligence of the learning algorithm cannot be off-chip; it has to be on-chip. In a recently published paper, we proposed and implemented what I call a plasticity processor: on the neuromorphic chip, next to the neurons and synapses, there is a little processor which you can use to implement different learning algorithms in a flexible way, for example changing synaptic weights, but also changing neuron parameters and changing the connectivity on the fly.
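For reference, the pair-based form of spike-timing-dependent plasticity is easy to sketch: a pre-synaptic spike shortly before a post-synaptic spike strengthens the synapse, while the reverse order weakens it. The constants below are typical textbook values, not parameters of Meier's hardware:

```python
import numpy as np

# Pair-based STDP: potentiation for pre-before-post, depression otherwise.
A_PLUS, A_MINUS = 0.01, 0.012     # learning amplitudes (assumed values)
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants in ms (assumed values)

def stdp_dw(t_pre, t_post):
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post: strengthen
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    elif dt < 0:  # post fires before pre: weaken
        return -A_MINUS * np.exp(dt / TAU_MINUS)
    return 0.0

# Apply the rule to all spike pairs seen by one synapse.
pre_spikes = [10.0, 45.0, 80.0]    # spike times in ms (toy data)
post_spikes = [12.0, 40.0, 95.0]   # spike times in ms (toy data)
w = 0.5
for t_pre in pre_spikes:
    for t_post in post_spikes:
        w += stdp_dw(t_pre, t_post)
w = min(max(w, 0.0), 1.0)  # clip to the allowed weight range
print(f"weight after STDP updates: {w:.3f}")
```

The rule uses only the relative timing of two local spikes, which is why it qualifies as local, on-chip learning.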

As far as memristors are concerned, they are praised very much for neuromorphic computing. I am a bit reluctant there, because the studies that people have done with memristors all follow the same pattern: you produce memristors, which are really cool devices, you characterize them, and you take that computational model and put it into a large network. People have so far totally ignored the aspect of variability, which in the case of memristors is much bigger than for CMOS. I am sure there are ways to handle it, but at the moment, to my knowledge, this is not being done. I don't see how you can calibrate, for example, a memristor in a large circuit. You can calibrate a synapse or a neuron because there are parameters: if it doesn't work very well, you can fix it with a different calibration parameter. How do you do that with a memristor? You cannot change them. They come from the factory with this huge variability. How do you handle that? Variability is not bad per se; it can be good, but then you have to put it in the right place through a learning and self-organisation process. I cannot see that people have even thought about this for memristors.
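To make the variability argument concrete, here is a toy NumPy sketch: target synaptic weights are realized as memristor conductances with a fixed, per-device deviation, and the output error of a crossbar-style dot product grows with the device spread. The log-normal variability model and all numbers are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(3)

# Ideal synaptic weights we would like a memristor crossbar to realize.
n_in, n_out = 64, 10
w_ideal = rng.uniform(0.0, 1.0, size=(n_in, n_out))

# Device-to-device variability: each memristor's conductance deviates from
# its target by a fixed, per-device factor (assumed log-normal, "fixed
# pattern" in the sense that it cannot be recalibrated after fabrication).
def fabricate(w_target, sigma):
    return w_target * rng.lognormal(mean=0.0, sigma=sigma, size=w_target.shape)

x = rng.uniform(0.0, 1.0, size=n_in)  # input activations (toy values)
y_ideal = x @ w_ideal

for sigma in [0.05, 0.2, 0.5]:
    w_real = fabricate(w_ideal, sigma)
    err = np.linalg.norm(x @ w_real - y_ideal) / np.linalg.norm(y_ideal)
    print(f"device variability sigma={sigma:.2f} -> relative output error {err:.1%}")
```

Unlike a calibratable silicon neuron, the per-device factor here is baked in at "fabrication", which is exactly the calibration problem Meier describes.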

John Shalf thanked the distinguished panelists for their very compelling talks.

The workshop is covered in full in five articles:

  1. Moore's Law is all about economics but there are alternative technologies on the way
  2. CMOS is still here to stay but we need to think out of the box to reclaim efficiency
  3. Why the hunt for killer applications to run on quantum computers is challenging
  4. SpiNNaker and BrainScaleS neuromorphic systems ready for non-expert use in stochastic inference computing
  5. Beyond Moore's Law panelists address business maturity of new technologies, future developments, neuromorphic chip training, and memristors

Leslie Versweyveld
