5 Oct 2017 Frankfurt - Each year, now for the eighth consecutive year, Primeur Magazine asks supercomputer experts Satoshi Matsuoka and Thomas Sterling how we are doing on the Road to Exascale. What has not changed is the prediction of when we will enter the exascale era - somewhere in the early 2020s - but there is now more clarity on what the systems will look like. And this year we saw the rise of Artificial Intelligence (AI) and machine learning to levels unseen before. But does that mean we can forget about HPC and traditional exascale computing? And what will come after exascale? Could neuromorphic computing and quantum computing be an answer? Or are we just talking about those because we do not really have an answer for the post-Moore era yet? Let us hear what Thomas Sterling and Satoshi Matsuoka have to say about it. We publish this interview as a series of four articles.
Primeur Magazine: Let us move on to the future and to post-Moore.
Thomas Sterling: A paraphrase is that seven years into the future has already happened, because of the time it takes to go from the point of concept to the point of operation for computers. That brings up an interesting trend as well. A characteristic we do not talk about that much is: how do we reduce that time to deployment? If we were under wartime conditions - and I hope that never happens - we would find we were doing in 18 months what we now do in six to seven years in peace time. I will take peace time any time, but there is a point. The other thing is cost. We have not discussed cost, and yet these machines are extraordinarily expensive - hundreds of millions of dollars. That in itself limits accessibility and ultimately limits future progress of science, engineering and societal needs. We have experienced this before, by bringing the highest-cost machines down to the cost of a laptop within our lifetime. Jack Dongarra loves to show where his laptop would have hit number one on the TOP500. So it may be that a different goal than trying to hit the next triad of decades in the floating-point regime is how to reach a certain threshold capability, but at one-hundredth of the cost. Early experience, back when I was much, much younger, in which we produced a machine delivering a 20-50 times performance-to-cost advantage, in spite of the horribleness of the solution, demonstrated a corner-turning for the field. I believe that in the future we may finally say: "What are we doing?" We are not building - I love sailing - we are not building a six-meter race yacht; we want something that people can go out and party on. And maybe - I do not like the usual reaction: then we use mobile processors, or ARM - maybe cost is the factor for the future which will make supercomputing accessible to a much greater community than the highest end is today.
Satoshi Matsuoka: When you consider the future, obviously we are running out of Moore's law growth, and the post-Moore horizon is becoming increasingly clear. As such, we have seen over the past year a number of announcements of specialized architectures - FPGAs, tensor engines, etc. - which are more specialized units that better counteract such trends. We have already used up the move to general-purpose many-core, so we cannot do that twice, but we can have specialized architectures. There are new memory devices, photonics, and also new computing devices based on totally new computing paradigms like neuromorphic and quantum. The significance of these announcements is that progress has been made; it is probably more visible than ever before. Quantum computing, some people say, will happen a lot earlier than anticipated. (By the way, the D-Wave machine is a quantum annealer and is different from quantum gate-logic computing.) Neuromorphic computing can be promising as well, and certainly there are prototypes, such as those being built inside the Human Brain Project, and similar efforts in other countries too. It has yet to be proven whether they have superiority over conventional neural networks. One of the analogies people make, including me, is as follows: in terms of mimicking the human brain, deep learning networks are very much like building airplanes by studying birds: you see a bird flying, you figure out what the essential components are, and you build a piece of mechanical engineering that actually flies. It has some of the same principles, but you optimise it in the engineering sense. Neuromorphic computing is more like trying to build an artificial bird, so it will flap its wings while airplanes do not. Neuromorphic computing is much closer to nature, and nature discovers through evolution, which resulted in low energy consumption - our brain is very energy efficient. However, it remains to be seen whether this will be useful enough to replace conventional neural networks.
Again, these new explorations are inevitable, because Moore's Law will expire at some point. So both kinds of efforts exist: on one hand trying to extend Moore's Law, and on the other revolutionizing computing. It is hard to say which will be better - in fact, I think it is not right to ask which is better, because they are synergetic. But we will see more of these efforts on both fronts as we move on to the future. Supercomputing will probably lead the way, because there is no single panacea to continue its performance evolution, which is fundamental to HPC. Even quantum computers are not very general purpose, as they can expect quantum speed-up on only certain problems. And only when coupled with advances in conventional computing can quantum computing see real benefits. The same goes for neuromorphic and other efforts. Again, I think supercomputing will lead the way as we move to the new realm of computing.
Primeur Magazine: That sounds like a closing statement. What about your last statement, Thomas?
Thomas Sterling: I think an implication of Satoshi's comments is that Moore's Law, which is asymptotic, suggests the headroom no longer necessarily lies in device density, but rather in the actual architecture. The end of Moore's Law may be the end for von Neumann derivative architectures, but it may leave space - at least a certain gap in the opportunity - for alternative or innovative architectures that are not von Neumann. Satoshi already alluded to a couple of these, even before we benefit significantly from quantum computing, which I would say is probably about to happen - but, as Satoshi said, it is going to be a special-purpose machine, perhaps merged with conventional processing. To summarize my point: non-von Neumann derivative architectures are the space in which we may be able to extend delivered performance on real-world applications by one to two orders of magnitude. It will not give you the exponential growth that we have enjoyed for the last thirty years, but it will make a quantum step.
Satoshi Matsuoka: In that respect, what does exascale mean? For these new computing models, double-precision flop/s may not mean anything. So as we move on - again, there is high focus on exascale, and on one hand we are on the verge of achieving it - but in the long scheme of things it may not be significant compared to this revolutionary development towards post-Moore. That is something history will probably judge. So again, this is a very vibrant field.
So, see you next year in Frankfurt to find out what is going to happen, whether any of our crystal ball speculations are correct, or whether it is time to retire.
Thomas Sterling: In all likelihood they will be both wrong and right at the same time. See you next year in Frankfurt.
Primeur Magazine: Thanks for this interview. See you next year.