To solve this challenge, U.S. and European scientists joined forces 25 years ago to create a common language that lets the diverse processors of highly parallel computers communicate: the Message Passing Interface (MPI).
Many of the founding developers of MPI reminisced about the birth of their brainchild during a one-day symposium celebrating its 25th anniversary. The symposium was held in conjunction with the EuroMPI/USA 2017 conference, at which multiple papers on current MPI-related research were presented. Held at the U.S. Department of Energy's Argonne National Laboratory, the event marked the first time the long-running conference had convened in the U.S.
Attendees of both the symposium and the conference represented industry, academia and research facilities from the U.S., Europe, Japan and South Korea.
Among the founding developers in attendance was Argonne Distinguished Fellow Emeritus Ewing "Rusty" Lusk, who opened the symposium by presenting a key piece of technology that helped produce the original MPI standard: an overhead projector.
Lusk was a computer scientist in Argonne's Mathematics and Computer Science (MCS) division when it entered parallel computing in the early 1980s, an era in which there were numerous vendors of parallel computers, each with its own unique programming language.
"Vendors competed to have the easiest-to-use language," said Lusk. "But if you wrote a program for an Alliant machine, for example, you had to completely change it to run on an Encore machine, even though the two were architecturally similar."
By the early 1990s, the parallel computing community realized there were too many competing mechanisms for message passing. In April 1992, many of the key players participated in a workshop to investigate a standard for message passing that would let programs run on all parallel machines.
"At the end of the workshop, it was clear that there was a need, a willingness and a strong desire to have a standard," said Jack Dongarra, a professor in the Electrical Engineering and Computer Science Department at the University of Tennessee, Knoxville, who helped organize this first of many meetings. "That was the beginning of MPI."
From the outset, everyone agreed that collaboration between U.S. and European researchers was essential. U.S. researchers were somewhat concerned that the Europeans would develop their own standard, although British and German views were too varied to reach agreement, recalled Rolf Hempel, head of the German Aerospace Center's (DLR) Simulation and Software Technology lab.
"By embracing both the earlier U.S. developments and the European ones, MPI was much more easily accepted as a universal standard," he said.
The MPI Forum was launched in January 1993. Its members, more than 60 people from 40 organizations, worked for a year and a half to draft the first MPI standard, published in May 1994.
Since then, the MPI Forum has remained active, continuously working to ensure that the standard meets new computational requirements. Now approaching version 4 of the standard, the Forum is preparing for the next major computing frontier: exascale.
"Early on, some thought that we'd need to evolve beyond MPI to move to exascale. This is probably not the case," said Lusk. "MPI has lasted because we did a good job defining it. That's why it's in use now and will remain in use for a long time to come."
Another reason MPI has lasted 25 years, Lusk added, is that it has always been a vehicle for computer science research. The Argonne group alone has published more than 100 peer-reviewed papers on MPI-related topics over that period.
Although involved with MPI for nearly 17 years, Pavan Balaji, general chair of EuroMPI/USA 2017, is among the newer faces of MPI and the MPI Forum. A computer scientist and group leader in the MCS division, he became chair of the MPI hybrid programming working group in 2008, helping drive new proposals for changes to the standard.
Like Lusk and many of the other conference participants, Balaji appreciates MPI's robustness and its ability, despite the rapid pace of computing advances, to adapt and outpace newer message-passing models.
"You can look at these new programming models like tiny flowers that offer some new features," he said. "MPI is like a superflower that merges all of these new capabilities and standardizes them into a new version of MPI. In some sense, the new programming models are still succeeding; it's just that they'll be called MPI in the future."