Primeur weekly 2014-06-30

Special

Innovative HTC facilities needed to support computational genomics at the application and data level ...

HPC-assisted cell-based immunotherapy successful in curing melanoma ...

Restless hearts are simulated with real-time modelling at IBM Research in Zurich ...

A live report from the Adapteva A-1 "smallest supercomputer in the world" launch at ISC'14 ...

The Cloud

HP launches Helion Managed Services for optimizing Cloud storage workloads ...

Oracle unveils next generation Virtual Compute Appliance ...

Desktop Grids

ATLAS@Home crowd computing project launched at CERN ...

EuroFlash

a.s.r. uses ADVA Optical Networking's GNOC for critical network monitoring and maintenance ...

ADVA Optical Networking launches new era of data centre connectivity with Big Data transport solution ...

KTH wins from crystal clear insight into application performance with Allinea Performance Reports ...

Neurological simulation milestone reached after UCL embraces Allinea's tools on UK's largest supercomputer ...

Bright Computing is building on its success in data centres across Europe ...

UK Atomic Weapons Establishment launches SGI supercomputer ...

Spectra Logic tape library to archive the UK's fastest supercomputer ...

Physicists find way to boot up quantum computers 72 times faster than previously possible ...

In the fast lane: Mediatec uses Calibre UK LEDView530 scalers at FIA World Endurance Championships ...

The upcoming cybernetic age is one of intellectual capital ...

International Supercomputing Conference moves to Frankfurt, Germany in 2015 ...

USFlash

DataDirect Networks helps EMSL speed climate, energy and bioscience discoveries with high performance and massively scalable storage ...

Spectra and the Tandy Supercomputer shorten calculation rates from days to minutes, saving time and lives ...

New A*STAR-SMU centre combines high-powered computing and behavioural sciences to study people-centric issues ...

Scheduling algorithms based on game theory make better use of computational resources ...

National Renewable Energy Laboratory supercomputer tackles power grid problems ...

Simulations help scientists understand and control turbulence in humans and machines ...

Stampede supercomputer enables discoveries throughout science and engineering ...

Supercomputing simulations crucial to the study of Ras protein in determining anticancer drugs ...

Stampede supercomputer powers innovations in DNA sequencing technologies ...

Stampede supercomputer helps researchers design and test improved hurricane forecasting system ...

NSF-supported Stampede supercomputer powers innovations in materials science ...

D-Wave and predecessors: From simulated to quantum annealing ...

HPC server market shrinks 9.6% in the first quarter of 2014, according to IDC ...

University of Maryland's Deepthought2 debuts in global supercomputer rankings ...

Nine ways NSF-supported supercomputers help scientists understand and treat the disease ...

CAST releases wysiwyg R33 ...

Stampede supercomputer enables discoveries throughout science and engineering


Philipp Moesta, TAPIR, California Institute of Technology
19 Jun 2014 Arlington - Sometimes, the laboratory just won't cut it. After all, you can't recreate an exploding star, manipulate quarks or forecast the climate in the lab. In cases like these, scientists rely on supercomputing simulations to capture the physical reality of these phenomena - minus the extraordinary cost, dangerous temperatures or millennia-long wait times.

When faced with an unsolvable problem, researchers at universities and labs across the United States set up virtual models, determine the initial conditions for their simulations - the weather in advance of an impending storm, the configurations of a drug molecule binding to an HIV virus, the dynamics of a distant dying star - and press compute.

And then they wait as the Stampede supercomputer in Austin, Texas, crunches the complex mathematics that underlies the problems they are trying to solve.

By harnessing thousands of computer processors, Stampede returns results within minutes, hours or a few days - compared with the months or years such calculations would take without a supercomputer - helping to answer science's - and society's - toughest questions.

Stampede is one of the most powerful supercomputers in the U.S. for open research, and currently ranks as the seventh most powerful in the world, according to the June 2014 TOP500 List. Able to perform nearly 10 quadrillion floating-point operations per second - almost 10 petaflops - Stampede is the most capable of the high-performance computing, visualization and data analysis resources within the National Science Foundation's (NSF) Extreme Science and Engineering Discovery Environment (XSEDE).

Stampede went into operation at the Texas Advanced Computing Center (TACC) in January 2013. The system is a cornerstone of NSF's investment in an integrated advanced cyberinfrastructure, which allows America's scientists and engineers to access cutting-edge computational resources, data and expertise to further their research across scientific disciplines.

At any given moment, Stampede is running hundreds of separate applications simultaneously. Approximately 3,400 researchers computed on the system in its first year, working on 1,700 distinct projects. The researchers came from 350 different institutions and their work spanned a range of scientific disciplines from chemistry to economics to artificial intelligence.

These researchers apply to use Stampede through the XSEDE project. Their intended use of Stampede is assessed by a peer review committee that allocates time on the system. Once approved, researchers are provided access to Stampede free of charge and tap into an ecosystem of experts, software, storage, visualization and data analysis resources that make Stampede one of the most productive, comprehensive research environments in the world. Training and educational opportunities are also available to help scientists use Stampede effectively.

"It was a fantastic first year for Stampede and we're really proud of what the system has accomplished", stated Dan Stanzione, acting director of TACC. "When we put Stampede together, we were looking for a general purpose architecture that would support everyone in the scientific community. With the achievements of its first year, we showed that was possible."

When NSF released its solicitation for proposals for a new supercomputer to be deployed in 2013, the agency was looking for a system that could support the day-to-day needs of a growing community of computational scientists, but also one that would push the field forward by incorporating new, emerging technologies.

"The model that TACC used, incorporating an experimental component embedded in a state-of-the-art usable system, is a very innovative choice and just right for the NSF community of researchers who are focused on both today's and tomorrow's scientific discoveries", stated Irene Qualters, division director for Advanced Cyberinfrastructure at NSF. "The results that researchers have achieved in Stampede's first year are a testimony to the system design and its appropriateness for the community."

"We wanted to put an innovative twist on our system and look at the next generation of capabilities", stated TACC's Dan Stanzione. "What we came up with is a hybrid system that includes traditional Intel Xeon E5 processors and also has an Intel Xeon Phi card on every node on the system, and a few of them with two."

The Intel Xeon Phi - aka the 'many integrated core (MIC) coprocessor' - squeezes 60 or more processing cores onto a single card. In that respect, it is similar to GPUs (graphics processing units), which have been used for several years to aid parallel processing in high-performance computing systems, as well as to speed up graphics and gaming capabilities in home computers. The advantage of the Xeon Phi is its ability to perform calculations quickly while consuming less energy.

"The Xeon Phi is Intel's approach to changing these power and performance curves by giving us simpler cores with a simpler architecture but a lot more of them in the same size package", Dan Stanzione stated.

As advanced computing systems grow more powerful, they also consume more energy - a situation that can be addressed by simpler, multicore chips. The Xeon Phi and other comparable technologies are believed to be critical to the effort to advance the field and develop future large-scale supercomputers.

"The exciting part is that MIC and GPU foreshadow what will be on the CPU in the future", Dan Stanzione stated. "The work that scientists are putting in now to optimize codes for these processors will pay off. It's not whether you should adopt them; it's whether you want to get a jump on the future."

Though Xeon Phi adoption on Stampede started slowly, it now represents 10-20 percent of the usage of the system. Among the projects that have taken advantage of the Xeon Phi co-processor are efforts to develop new flu vaccines, simulations of the nucleus of the atom relevant to particle physics and a growing amount of weather forecasting.

The power of Stampede reaches beyond its ability to gain insight into our world through computational modelling and simulation. The system's diverse resources can be used to explore research in fields too complex to describe with equations, such as genomics, neuroscience and the humanities. Stampede's extreme scale and unique technologies enable researchers to process massive quantities of data and apply modern analysis techniques to measured data, reaching conclusions that were previously out of reach.

Stampede provides four capabilities on which most data problems rely. Leveraging 14 petabytes of high-speed internal storage, users can process massive amounts of independent data on multiple processors at once, reducing the time needed for data analysis or computation.

Researchers can use many data analysis packages optimized to run on Stampede by TACC staff to statistically or visually analyze their results. Staff also collaborates with researchers to improve their software and make it run more efficiently in a high-performance environment.

Data can be rich and complex. When individual computations become too large for Stampede's primary compute nodes to handle, the system provides users with 16 compute nodes with one terabyte of memory each, enabling researchers to perform complex data analyses on Stampede's diverse and highly flexible computing engine.

Once data has been parsed and analyzed, GPUs can be used remotely to explore data interactively without having to move large amounts of information to less-powerful research computers.
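The first of these capabilities - running the same analysis on many independent pieces of data at once - is the classic data-parallel pattern, and can be sketched in a few lines of Python. The `analyze()` function and the input chunks below are invented for illustration; they are not part of Stampede's or TACC's software stack, where the same pattern would run across whole nodes rather than local worker processes:

```python
# Illustrative sketch of data-parallel processing: each input chunk is
# independent of the others, so chunks can be analyzed on separate
# processors simultaneously. analyze() is a stand-in for real analysis code.
from concurrent.futures import ProcessPoolExecutor

def analyze(chunk):
    """Toy per-chunk analysis: return the mean of the values."""
    return sum(chunk) / len(chunk)

def run_parallel(chunks, workers=4):
    # map() farms the independent chunks out to worker processes,
    # cutting wall-clock time roughly in proportion to the worker count.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(analyze, chunks))

if __name__ == "__main__":
    data = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    print(run_parallel(data))  # [2.0, 5.0, 8.0]
```

Because no chunk depends on another, the same code scales from a handful of local processes to thousands of cores on a system like Stampede, where a batch scheduler or MPI would distribute the chunks instead of a process pool.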

"The Stampede environment provides data researchers with a single system that can easily overcome most of the technological hurdles they face today, allowing them to focus purely on discovering results from their data-driven research", stated Niall Gaffney, TACC director of Data Intensive Computing.

Since it was deployed, Stampede has been in high demand. Ninety percent of the compute time on the system goes to researchers with grants from NSF or other federal agencies; the other 10 percent goes to industry partners and discretionary programmes.

"The system is utilized all the time - 24/7/365", Dan Stanzione stated. "We're getting proposals requesting 500 percent of our time. The demand exceeds time allocated by 5-1. The community is hungry to compute."

Stampede will operate through 2017 and will be infused with second generation Intel Xeon Phi cards in 2015.

With a resource like Stampede in the community's hands, great discoveries await.

"Stampede's performance really helped push our simulations to the limit", stated Caltech astrophysicist Christian Ott who used the system to study supernovae. "Our research would have been practically impossible without Stampede."
Source: National Science Foundation - NSF
