"Since 2006, Intel and the IA developer community have worked in partnership to realize the potential of multi- and many-core computing, with accelerating impact beyond high-performance computing to solving a wide range of real-world computing problems on clients and servers," Justin Rattner stated during his Day 3 keynote in San Francisco. "What we have demonstrated today only scratches the surface of what will be possible with many-core and extreme-scale computing systems in the future."
Intel continues to push technology beyond today's limits, looking for the next big leaps that take computing to new levels of performance with far less power consumption than is possible today. As an example, Justin Rattner demonstrated a Near-Threshold Voltage Processor that uses novel, ultra-low-voltage circuits to dramatically reduce energy consumption by operating close to the threshold, or turn-on, voltage of the transistors.
This concept CPU runs fast when needed but drops power to below 10 milliwatts when its workload is light, low enough to keep running while powered only by a solar cell the size of a postage stamp. While the research chip will not become a product itself, the results of this research could lead to the integration of scalable near-threshold voltage circuits across a wide range of future products, reducing power consumption by 5-fold or more and extending always-on capability to a wider range of computing devices. Technologies such as this will further Intel Labs' goal to reduce energy consumption per computation by 100- to 1000-fold for applications ranging from massive data processing at one end of the spectrum to terascale-in-a-pocket at the other.
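The intuition behind near-threshold operation comes from the fact that the dynamic switching energy of a CMOS circuit scales roughly with the square of the supply voltage. A minimal sketch of that scaling, using purely illustrative numbers (the capacitance and voltages below are assumptions, not Intel's figures):

```python
# Sketch of why near-threshold operation saves energy: dynamic switching
# energy in CMOS scales roughly as E = C * V^2, so lowering the supply
# voltage toward the transistor threshold cuts energy per operation
# quadratically. All numbers here are hypothetical for illustration.

def dynamic_energy(c_farads, v_supply):
    """Energy per switching event in joules: E = C * V^2."""
    return c_farads * v_supply ** 2

C = 1e-9                  # assumed effective switched capacitance (1 nF)
v_nominal = 1.0           # assumed nominal supply voltage (volts)
v_near_threshold = 0.45   # assumed near-threshold operating point (volts)

reduction = dynamic_energy(C, v_nominal) / dynamic_energy(C, v_near_threshold)
print(f"Energy reduction: {reduction:.1f}x")  # ~4.9x
```

With these assumed voltages the quadratic scaling alone yields roughly a 5-fold reduction, in line with the figure quoted above; in practice leakage and reduced clock speed complicate the picture, which is part of what makes near-threshold design a research problem.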
The Hybrid Memory Cube, a concept DRAM developed by Micron in collaboration with Intel, demonstrates a new approach to memory design, delivering a 7-fold improvement in energy efficiency over today's DDR3. The Hybrid Memory Cube stacks memory chips into a compact "cube" and uses a new, highly efficient memory interface that sets a new low for energy consumed per bit transferred while supporting data rates of one trillion bits per second. This research could lead to dramatic improvements in servers optimized for cloud computing as well as Ultrabooks, televisions, tablets and smartphones.
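Why energy per bit matters at these speeds can be seen with back-of-the-envelope arithmetic: interface power is simply energy per bit times data rate. The DDR3 figure below is a hypothetical placeholder, with the keynote's 7-fold improvement applied to it:

```python
# Back-of-the-envelope sketch with assumed numbers (not published specs):
# interface power (watts) = energy per bit (joules) * data rate (bits/s).

def interface_power_watts(pj_per_bit, bits_per_second):
    """Power drawn by a memory interface, given energy per bit in picojoules."""
    return pj_per_bit * 1e-12 * bits_per_second

DATA_RATE = 1e12                       # one trillion bits per second, per the keynote
ddr3_pj_per_bit = 70.0                 # hypothetical DDR3-class energy per bit
hmc_pj_per_bit = ddr3_pj_per_bit / 7   # applying the stated 7-fold improvement

print(interface_power_watts(ddr3_pj_per_bit, DATA_RATE))  # 70.0 (watts)
print(interface_power_watts(hmc_pj_per_bit, DATA_RATE))   # 10.0 (watts)
```

At a trillion bits per second, even tens of picojoules per bit translate into tens of watts for the memory interface alone, which is why cutting energy per bit is the headline metric here.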
Multi-core, the practice of building more than one processing engine into a single chip, has become the accepted method of increasing performance while keeping power consumption low. Many-core, by contrast, is more of a design perspective: rather than incrementally adding cores to a traditional design, it reinvents chip design on the assumption that high core counts are the new norm.
Justin Rattner highlighted the progress multi-core computing has seen since he introduced Intel's first dual-core processor at IDF five years ago. Today Intel's multi- and many-core processors host a myriad of important applications across a wide range of industry sectors, including some surprising new uses in the rapidly advancing world of high-core-count computing.
Justin Rattner described some of the latest applications of this technology along with the software tools and programming techniques that are enabling developers to harness the power of multi- and many-core computing in several key areas, including: