16 Jun 2017 Blacksburg - A team of researchers in the Department of Computer Science in Virginia Tech's College of Engineering discovered a key to keeping supercomputing on the road to the ever-faster processing times needed to achieve exascale computing - something policymakers say is necessary to keep the United States competitive in industries from cybersecurity to e-commerce.
Exascale computing - the ability to perform 1 billion billion calculations per second - is what researchers are striving to push processors toward in the next decade. That's 1,000 times faster than the first petascale computer, which came online in 2008.
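The scale factor quoted above can be checked with simple arithmetic; a minimal sketch (the peta/exa prefix values are standard SI definitions, not figures from the article):

```python
# Standard SI prefixes: peta = 10^15, exa = 10^18 operations per second.
PETAFLOPS = 10**15  # threshold the first petascale system crossed in 2008
EXAFLOPS = 10**18   # exascale target

# "1 billion billion" = 10^9 * 10^9 = 10^18
assert EXAFLOPS == 10**9 * 10**9

# Exascale is 1,000 times the petascale threshold
speedup = EXAFLOPS // PETAFLOPS
print(speedup)  # → 1000
```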
Achieving efficiency will be paramount to building high-performance parallel computing systems if applications are to run at enormous scale under limited power budgets.
The Virginia Tech researchers applied COS modeling to both Intel and IBM architectures and found that the error rate was as low as 7 percent on Intel systems and as high as 17 percent on IBM architectures. The team validated their models against 19 different application benchmarks, drawn from the following codes: LULESH, AMGmk, Rodinia, and pF3D.