Back to Table of contents

Primeur weekly 2012-07-02

Exascale supercomputing

Ultra fast supercomputers at UK lab will better prepare us for severe weather and save millions of pounds ...

The Cloud

HP expands t410 Smart Zero Client family ...

Opscode announces integration with Google Compute Engine ...

Red Hat Cloud Ecosystem Gains Global Momentum ...

Red Hat to acquire FuseSource ...

Imperial College London and University of Cambridge launch CORE to deliver unrivalled UK e-Infrastructure capability to industry ...

Valeo chooses Agarik to host secure portal to its office automation Cloud ...

IEEE brings Cloud computing expertise and user resources together to foster worldwide collaboration and innovation ...

Desktop Grids

One billion results returned by World Community Grid volunteers ...

EuroFlash

French Ministry of Culture and Communication gives Bull its approval to preserve public archives on digital media ...

STFC's Joule in the crown is UK's most powerful supercomputer ...

PRACE looks back on successful Scientific Conference at ISC'12 ...

CERN to give update on Higgs search as curtain raiser to ICHEP conference ...

Bull-Joseph Fourier Prize 2012 recognizes three scientific teams for their advances in research and innovation ...

USFlash

Cray to add Intel Xeon Phi coprocessors to its next-generation Cascade supercomputer ...

Cray signs $40 million supercomputer agreement with the National Energy Research Scientific Computing Center (NERSC) ...

YarcData kicks off the $100,000 Graph Analytics Challenge and announces contest judges ...

Graph500 adds new measurement of supercomputing performance ...

Health care publisher Lifescript selects HP 3PAR Storage to expand offerings and enhance customer service ...

IBM and Lawrence Livermore researchers form Deep Computing Solutions Collaboration to help boost industrial competitiveness ...

RIKEN and Fujitsu complete operational testing of the K computer ...

DataDirect Networks powers industrial innovation with NCSA Private Sector Programme ...

Fujitsu wins supercomputer bid from Taiwan's Central Weather Bureau ...

Reaching and researching between stars ...

BGI demonstrated genomic data transfer at nearly 10 gigabits per second between US and China ...

Graph500 adds new measurement of supercomputing performance

25 Jun 2012 Albuquerque - Supercomputing performance is getting a new measurement with the Graph500 executive committee's announcement of specifications for a more representative way to rate the large-scale data analytics at the heart of high-performance computing. An international team that includes Sandia National Laboratories announced the single-source shortest-path specification to assess computing performance at the International Supercomputing Conference in Hamburg, Germany.

The latest benchmark "highlights the importance of new systems that can find the proverbial needle in the haystack of data", stated Graph500 executive committee member David A. Bader, a professor in the School of Computational Science and Engineering and executive director of High-Performance Computing at the Georgia Institute of Technology.

The new specification will measure the shortest distance between two items in a graph, said Sandia National Laboratories researcher Richard Murphy, who heads the executive committee. In the professional network LinkedIn, for example, it would find the smallest number of intermediaries connecting two randomly chosen people, i.e. the fewest friend-of-a-friend links between them, he said.
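
The fewest-links problem Richard Murphy describes is, on an unweighted graph, a plain breadth-first search. A minimal sketch, using a hypothetical toy network (the names and adjacency structure are purely illustrative, not real data):

```python
from collections import deque

def fewest_links(graph, start, goal):
    """Breadth-first search: return the minimum number of edges
    between start and goal, or None if goal is unreachable.
    'graph' is an adjacency dict mapping a person to their contacts."""
    if start == goal:
        return 0
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, dist = frontier.popleft()
        for neighbour in graph.get(node, []):
            if neighbour == goal:
                return dist + 1
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, dist + 1))
    return None

# Hypothetical four-person network: alice - bob - carol - dave
network = {
    "alice": ["bob"],
    "bob": ["alice", "carol"],
    "carol": ["bob", "dave"],
    "dave": ["carol"],
}
print(fewest_links(network, "alice", "dave"))  # 3
```

Because breadth-first search explores the graph level by level, the first time it reaches the goal is guaranteed to be via a minimum-hop path; the benchmark's challenge lies in doing this in parallel on graphs with billions of edges, not in the algorithm itself.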

Graph500 already gauges two computational kernels: construction of a large graph linking huge numbers of participants, and a parallel breadth-first search of that graph. The first two kernels were relatively easy problems; this third one is harder, Richard Murphy said. Once it has been tested, the next kernel will be harder still, he said.
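
What makes the third kernel harder is that single-source shortest path operates on weighted edges, so a breadth-first search no longer suffices. A textbook Dijkstra sketch illustrates the problem itself; production Graph500 codes rely on parallel algorithms such as delta-stepping, which this sketch does not attempt:

```python
import heapq

def sssp(graph, source):
    """Textbook Dijkstra single-source shortest paths.
    'graph' maps a vertex to a list of (neighbour, weight) pairs.
    Returns a dict of shortest distances from 'source' to every
    reachable vertex."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was found already
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Illustrative three-vertex graph: the direct edge 0->1 costs 4,
# but the detour 0->2->1 costs only 1 + 2 = 3.
g = {0: [(1, 4), (2, 1)], 2: [(1, 2)], 1: []}
print(sssp(g, 0))  # {0: 0, 1: 3, 2: 1}
```

The tiny example already shows why weights matter: the minimum-hop path (one edge, cost 4) is not the shortest weighted path (two edges, cost 3), so a breadth-first traversal would give the wrong answer.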

The rankings are oriented toward enormous graph-based data problems, a core part of most analytics workloads. Graph500 rates machines on their ability to solve complex problems that have seemingly infinite numbers of components, rather than ranking machines on how fast they solve those problems.

Big data problems represent a $270 billion market and are increasingly important for businesses such as Google, Facebook and LexisNexis, Richard Murphy said.

Large data problems are especially important in cybersecurity, medical informatics, data enrichment, social networks and symbolic networks. Earlier in 2012, the Obama administration announced a push to develop better big data systems.

Problems that require enormously complex graphs include correlating medical records of millions of patients, analyzing ever-growing numbers of electronically related participants in social media and dealing with symbolic networks, such as tracking tens of thousands of shipping containers of goods roaming the world's oceans.

Medical-related data alone could potentially overwhelm all of today’s high-performance computing, Richard Murphy said.

Graph500's steering committee is made up of more than 30 international experts in high-performance computing who work on what benchmarks supercomputers should meet in the future. The executive committee, which implements changes in the benchmark, includes Sandia, Argonne National Laboratory, Georgia Institute of Technology and Indiana University.

David Bader said emerging applications in health care informatics, social network analysis, web science and detecting anomalies in financial transactions "require a new breed of data-intensive supercomputers that can make sense of massive amounts of information".

But performance can’t be improved without a meaningful benchmark, Richard Murphy said. "The whole goal is to spur industry to do something harder" as they jockey for top rankings, he said. "If there's a change in the list over time - and there should be - it's a big deal", he added.

Richard Murphy sees Graph500 as a performance yardstick complementary to the well-known TOP500 ranking of supercomputers, which is based on speed in running the Linpack benchmark. Nine computers made the first Graph500 list in November 2010; by last November, the number had grown to 50. The fourth list, released at the conference in Germany, ranked 88. Rankings are released twice a year, at the Supercomputing Conference in November and the International Supercomputing Conference in June.

"A machine on the top of this list may analyze huge quantities of data to provide better and more personalized health care decisions, improve weather and climate prediction, improve our cybersecurity and better integrate our on-line social networks with our personal lives", David Bader stated.
Source: Sandia National Laboratories
