For the fifth year running, we interviewed Satoshi Matsuoka and Thomas Sterling on how far we are along the Road to Exascale. Five years ago we thought we would be halfway by now. Even last year there was a lot of optimism. Today it seems increasingly likely that the date we reach exascale will slip by several years.
Exascale computing is not only about processing; it also needs big data, really big data. Getting all that data in and out of the system requires new memory technologies, which are now being developed. Which of them will win is not yet clear, but interesting trends are emerging.
The dominant computer architecture paradigm, a central processing unit with data moving in and out, is not well suited to exascale or knowledge processing. Hence new architectures, with new associated runtime systems, are needed.
There is also excitement about low-energy HPC using, for instance, ARM processors. But concentrating on low energy alone misses the important point about ARM: it is an open, extensible architecture. The ultimate use of exascale systems is knowledge processing, which could lead to a real artificial intelligence. Would such an intelligence pass the Turing test? Of course not: it would laugh at something as human as a Turing test.
These are the main topics Satoshi Matsuoka and Thomas Sterling discuss in this interview, conducted at the ISC14 conference.