Primeur weekly 2019-02-11

Focus

2018 - Another year on the Road to Exascale - Part II - Memory, interconnects and other technologies for exascale ...

German Research Data Infrastructure GeRDI offers service to manage data according to FAIR principles ...

Do not get lost in the Horizon 2020 labyrinth, use the Knowledge Base ...

Exascale supercomputing

Exascale computing is coming soon, but will scientific communities be ready to make use of it? ...

Quantum computing

Atos delivers one of the most powerful quantum simulators in the world to Hartree Centre in the UK ...

Argonne researchers develop new method to reduce quantum noise ...

Focus on Europe

State concept for high-performance computing: German universities are intensifying their cooperation ...

Barcelona Supercomputing Center coordinates an international project to share and reuse cancer genomic data at a global level ...

Middleware

ThinkParQ announces its Platinum Partnership with Pacific Teck ...

Software stack in a snapshot ...

Kalray launches ES3CAP as lead partner: an ambitious industrial project with a 22.2 million euro budget for the development of the future computing platform for intelligent systems ...

Hardware

DDN names RAID Inc. a preferred Lustre reseller ...

Tremend signs agreement with Mellanox ...

WekaIO places first on the Virtual Institute's IO-500 10 Node Challenge ...

Applications

Barcelona Supercomputing Center's ground-breaking collaboration with Global Parametrics supercharges its predictive climate-risk modelling ...

New geometric model improves predictions of fluid flow in rock ...

Modelling uncertain terrain with supercomputers ...

Supercomputing propels jet atomization research for industrial processes ...

NOAA and NCAR partner on new, state-of-the-art U.S. modelling framework ...

The Cloud

Newly launched ANSYS Cloud accelerates engineering productivity and business agility ...

OCRE enables easy Cloud usage through the European Open Science Cloud ...

2018 - Another year on the Road to Exascale - Part II - Memory, interconnects and other technologies for exascale


11 Feb 2019 Frankfurt - Each year Primeur Magazine sits down with Thomas Sterling and Satoshi Matsuoka to discuss the state of exascale computing in the world, and perhaps make some predictions for the upcoming year. We recap the past year, but mostly we look forward. How close are we to exascale? When will it be reached? The bumpy road seemed a bit less bumpy this year. Satoshi Matsuoka is now at Riken, overseeing the Post-K supercomputer developments in Japan. Thomas Sterling pointed to Summit, which could be a prototype architecture, a blueprint for a real exascale system. So is the finish line in sight? Let us listen to the experts.

Part II - Memory, interconnects and other technologies for exascale

Primeur Magazine: Is there something to say about other technologies? Like memory, interconnect?

Thomas Sterling: Memory bandwidth has been taken seriously in a way never before seen. Satoshi mentioned it: it is a major factor in the Post-K and certainly a major part of Summit, with the Power9 from IBM and NVLink as part of it. Work is going on at Intel with burst buffers and other techniques, and much of what is emerging also addresses memory bandwidth. I think it is not a surprise, but it is important to see this taking place.

Satoshi Matsuoka: Fully correct. For some time there has been political emphasis on the TOP500, and people just pushing double precision Flop/s to get to number 1. This has resulted in machines with very, very low memory, network and I/O bandwidth. And everybody is guilty. But now, as Thomas said, because of these new emerging technologies, like new memory technologies, photonics, and novel memory devices, significant attention is being paid to increasing system bandwidth. This has led to various work that again seriously looks at the effect of bandwidth on the system. For example, there is the finding that most HPC applications are not constrained by Flop/s; instead, most are constrained by bandwidth, be it memory, network, or I/O. Therefore there is a renewed emphasis on the bandwidth of the system, to make a balanced system in terms of Flop/s and bandwidth throughput. This is a very good trend and hopefully it will continue: not just pushing Flop/s. In some sense, for the exascale machines, we think exaflop/s is seen as less of a concern. You can build machines with exaflop/s and no bandwidth at all. But, at least, the ones that I have seen in the US and the one I see from Japan have significant emphasis on being very balanced machines.
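As a side note, not from the interview itself: Matsuoka's point that most HPC applications are bandwidth-bound rather than Flop/s-bound is commonly illustrated with the roofline model. The short Python sketch below uses purely illustrative machine numbers (not measurements of any real system) to estimate whether a kernel is limited by memory bandwidth or by peak Flop/s, based on its arithmetic intensity.

    # Minimal roofline-style estimate: is a kernel bound by bandwidth or by Flop/s?
    # All machine numbers are illustrative assumptions, not measurements of any real system.

    PEAK_FLOPS = 7.0e15       # assumed peak double-precision Flop/s of a node partition
    PEAK_BANDWIDTH = 1.6e14   # assumed peak memory bandwidth in bytes/s

    def attainable_flops(arithmetic_intensity):
        """Roofline: performance is capped by the lower of the compute and bandwidth ceilings.
        arithmetic_intensity is measured in Flop per byte moved to or from memory."""
        return min(PEAK_FLOPS, PEAK_BANDWIDTH * arithmetic_intensity)

    # A stream-like triad a[i] = b[i] + s * c[i] performs 2 Flop while moving 24 bytes
    # (read b, read c, write a, 8 bytes each), so it sits far below the compute ceiling.
    triad_intensity = 2.0 / 24.0
    print("triad attainable:        %.2e Flop/s" % attainable_flops(triad_intensity))

    # A well-blocked dense matrix-matrix multiply reaches much higher intensity,
    # so it hits the compute ceiling instead of the bandwidth ceiling.
    gemm_intensity = 50.0
    print("blocked GEMM attainable: %.2e Flop/s" % attainable_flops(gemm_intensity))

With these assumed ceilings, the triad is capped by bandwidth at roughly 1.3e13 Flop/s, while the blocked GEMM reaches the 7e15 Flop/s compute ceiling: exactly the kind of imbalance Matsuoka describes.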

Primeur Magazine: Shall we continue to the other technologies? Like ...

Satoshi Matsuoka: Quantum?

Primeur Magazine: Quantum, neuromorphic, etc. Very distant technologies.

Satoshi Matsuoka: Maybe not so distant. These alternative computing models are also receiving significant attention. Just to give some examples, let us say quantum, neuromorphic, and standard deep learning neural networks. Progress has been made and lots of different types have emerged as well. It is true that people are building hardware, real chips; there are a tremendous number of deep learning start-ups. There are a sizable number of neuromorphic chips, and of course quantum, which is still at a very early stage, especially quantum gate logic, which could be really useful. As for quantum annealing, it remains to be seen if that is useful. At least quantum gate logic will be very useful, but we only have very few qubits. Significant attention has been paid to these models, and that has been very evident. Whether they can solve some of the problems we are accumulating towards the end of Moore's Law remains to be seen. In fact, the downside of the new computing models is that they are only applicable to a very narrow set of problems. For example, neuromorphic or neural networks are applicable to optimization problems, or AI problems. As for quantum gate logic, if the problem fits, if you can get exponential speed-up, then that is good. But there is a very limited set of algorithms where you can get quantum speed-ups. For example, you cannot solve Partial Differential Equations (PDEs) and get quantum speed-up using quantum computers. At least, we do not know the algorithms. It is very promising, it is very exciting. However, it should not distract from the fact that we also have to accelerate conventional computing.
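To put a rough number on the kind of limited speed-up Matsuoka refers to (an illustration added here, not part of the interview): Grover's algorithm gives a quadratic, not exponential, speed-up for unstructured search, and even that applies only where the problem fits the oracle model. The Python sketch below simply compares the order of the query counts; for problems such as general PDE solves, no comparable quantum algorithm is known.

    import math

    # Back-of-the-envelope query counts for unstructured search over N items.
    # A classical search needs on the order of N lookups; Grover's algorithm needs
    # on the order of sqrt(N) oracle queries. This illustrates scaling only.
    for n in (10**6, 10**9, 10**12):
        classical_queries = n
        grover_queries = math.isqrt(n)
        print(f"N = {n:.1e}: classical ~ {classical_queries:.1e} queries, "
              f"Grover ~ {grover_queries:.1e} queries")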

Thomas Sterling: Quantum computing, the fundamental paradigm and the semantics, still seem abstract to me. I would say there are three issues. One is whether the fundamental paradigm is one in which we can have confidence. The second is that there is an enormous investment, and where there are investments, that simply makes it more likely that more discovery will happen. The third is, however, that the technology is not easy, and it is still possible that in the end quantum computing will not prove viable, because the conditions of errors, stability, coherence, and so forth simply prove untenable. I used to believe that this would be the dominant result. Now, I am cautiously optimistic. I do not think the four or five year prediction is valid, but it depends on what you are predicting. I believe in four or five years tremendous progress will be made, but maybe not to the point where we will have a usable machine.

Satoshi Matsuoka: These are very long-term research questions. So continuous investments should be made, but we should not expect immediate results. A lot of the technology is still in the basic research phase. And the hype leads people to believe it will solve all the world's problems. But even if you talk to people in quantum computing, they are very realistic. In fact, they are very concerned about the hype, because the hype curve could deflate. The same is true, for instance, in AI, which has gone through several hype cycles. It has now been resurrected for the third or fourth time. Now it seems more credible, but it has taken 50 years to do so. We have to be very cautious that the hype does not overwhelm the real results and that the deflation of the hype does not lead to the deflation of the field itself. Because quantum computing and neuromorphic computing are very important fields, there should be continuous investment to investigate them.

Thomas Sterling: I ran into Bo Ewald from D-Wave the other day, well-known for his many activities over the years. I asked him for an elevator speech, nothing more. He said: 'A relatively rapid increase in known applications'. That is what he thinks was the most significant advance in quantum computing over the last year.

Primeur Magazine: The next point is that, if I understand correctly, what you said is that perhaps we do not need another technology, but perhaps we do need another architecture. Can you elaborate a bit on that?

Thomas Sterling: I have certainly, mostly to be proactive, finished my presentation with what I claimed was a plausible road map, not to exaflop/s, but to zettaflop/s and then to yottaflop/s. I am not pushing that, but I thought it was intellectually interesting. It was mathematically correct, but there are certainly millions of other problems with it. I do think we are facing the need for a paradigm shift. However, I do not think we need a completely new machine design to satisfy that. I think it will probably come out of the programming model first, and that evolution. And then eventually there will be optimizations included in the architecture. By doing that form of incrementalism, it will be viable, commercially viable, and financially viable. So I guess the answer to your question is: no, I do not think it is an appropriate time to talk about radical departures until the community, in a responsible and professional way, explores what clearly can produce important orders of magnitude in delivered performance on real-world problems.

Satoshi Matsuoka: In fact, as device lithography tapers off, I have a firm belief that this is the time for architectures to come in to basically attain these continuous speed-ups. Another way of saying this is that the architecture research field stagnated because people had the mentality of saying: instead of buying into new architectures, I can wait for two years and I get a speed-up because of Moore's Law. Now people do not have that luxury, so they have to be smarter and go for architectural disciplines that use less energy, have less overhead, are more specialized, and so forth, in order to attain the continuous speed-up. And this is very broad; it is not just speeding up your single processor. You may have to design new devices and adopt a new computing model, like quantum, neuromorphic, or whatever. Even in conventional computing you may need to buy into very different types of architectural disciplines: non-Von Neumann types of execution models. So, overall, it is a very exciting time, because now architecture matters, because it becomes increasingly hard to obtain this automatic increase from Moore's Law. People have to be much smarter, design better architectures, better algorithms.

This is part II of the interview.

The interview was conducted during ISC 2018 in Frankfurt.

Ad Emmen
