Back to Table of contents

Primeur live 2011-06-22

Hardware

Panasas gives customers a treat with scalable ActivStor11 ...

Applications

University of Hamburg to simulate large data applications with Distributed Simulation and Virtual Reality environment ...

German automotive industry heavily relies on HLRS simulation expertise ...

TOP500

The Lomonosov Supercomputer Has Attained a New Level of Performance ...

Company news

Adaptive Computing announces 10x scalability for high throughput computing and next-generation HPC systems ...

University of Birmingham uses Adaptive Computing technology to reduce HPC costs ...

Adaptive Computing HPC Technology advances research capabilities for Europe's leading cancer research centre ...

Penguin Computing's public HPC Cloud is now powered by Adaptive Computing's Moab Cluster Suite ...

Numascale announces support for Supermicro 1042G-LTF and IBM System x3755 M3 with multi socket 8/12 core AMD Opteron 6000 Series ...

Mellanox expands availability of InfiniBand and Ethernet solutions with NEC LX Series Supercomputer through NEC HPC Europe ...

Mellanox and Lawrence Livermore National Laboratory demonstrate leading performance and scalability for HPC applications ...

Mellanox announces complete end-to-end FDR 56Gb/s InfiniBand interconnect solutions for uncompromised clustering performance and scalability ...

SGI to deliver breakthrough exascale computing solution ...

Platform's collaboration with CERN recognized as 2011 Computerworld Honors Laureate ...

Bright Computing, NVIDIA and NextIO to drive panel discussion on use of GPUs in HPC at International Supercomputing Conference in Germany ...

GE Global Research gets a Cray supercomputer ...

Mellanox announces complete end-to-end FDR 56Gb/s InfiniBand interconnect solutions for uncompromised clustering performance and scalability

20 Jun 2011 Hamburg - Mellanox has introduced a complete solution for FDR 56Gb/s InfiniBand consisting of adapter cards, switch systems, software and cables, becoming the first company to deliver a complete and robust end-to-end FDR InfiniBand infrastructure.

Mellanox's next-generation ConnectX-3 FDR 56Gb/s InfiniBand adapters, SX-6000 series switch systems, Unified Fabric Manager (UFM), Mellanox OS (MLNX-OS), software accelerators and FDR copper and fiber cables deliver the highest level of networking performance while reducing system power consumption. The combination enables cost-effective networking topologies for high-performance computing, financial services, database, Web 2.0, virtualized data centres and Cloud computing.

"Mellanox's complete end-to-end FDR 56Gb/s InfiniBand solutions deliver industry-leading performance and breakthrough application acceleration, providing important benefits across a range of applications and industries", stated Eyal Waldman, chairman, president and CEO of Mellanox Technologies. "Our end-to-end solutions for FDR lead the market in interconnect bandwidth and latency for superior server and storage clustering performance, power consumption, scalability and system reliability. We look forward to working with our customers to deliver the benefits of this next generation of InfiniBand to the marketplace."

By delivering FDR 56Gb/s InfiniBand speeds with support for PCIe Gen3, ConnectX-3 doubles server and storage I/O throughput, eliminating the I/O bottleneck on next-generation servers. Significant improvements in ConnectX-3's latency, reliability, throughput and scalability, together with the industry-leading GPU and MPI acceleration engines supported in the Mellanox end-to-end FDR InfiniBand solution, combine to deliver best-in-class performance and efficiency for both PCIe Gen2 and PCIe Gen3 systems. Applications using TCP/UDP/IP transport can achieve industry-leading throughput over FDR 56Gb/s InfiniBand with Mellanox's VMA Messaging Acceleration software, further boosting performance for latency-sensitive sockets applications.
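As a rough illustration of where the marketed 56Gb/s figure comes from, the following sketch uses the standard FDR link parameters (14.0625 Gb/s per lane, four lanes per port, 64b/66b line encoding); these figures come from the InfiniBand specification, not from this announcement:

```python
# Back-of-the-envelope arithmetic for an FDR 4x InfiniBand link.
# Assumed figures (standard FDR parameters, not stated in the release):
# 14.0625 Gb/s per lane, 4 lanes per port, 64b/66b line encoding.
lane_rate_gbps = 14.0625
lanes = 4
encoding_efficiency = 64 / 66  # 64b/66b: 64 data bits per 66 line bits

signaling_rate = lane_rate_gbps * lanes                     # ~56.25 Gb/s, marketed as 56Gb/s
effective_data_rate = signaling_rate * encoding_efficiency  # ~54.55 Gb/s of usable data

print(f"signaling rate:      {signaling_rate:.2f} Gb/s")
print(f"effective data rate: {effective_data_rate:.2f} Gb/s")
```

The efficient 64b/66b encoding is one reason FDR's effective throughput gain over QDR (which uses 8b/10b encoding) is larger than the raw signaling rates alone suggest.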

Based on Mellanox's fifth-generation, performance-leading SwitchX switch silicon, the SX6000 series delivers more than 4Tb/s of non-blocking bandwidth with extremely low port-to-port latency in a single 1U device. The SX6000 series enables efficient computing for small to very large clusters with features such as static and adaptive routing, congestion control, and hardware-based forward error correction. These features, enabled by MLNX-OS, ensure maximum effective fabric bandwidth by eliminating network hot spots and guaranteeing the highest levels of reliability.

New features built into the SwitchX silicon, including hardware-based InfiniBand routers and gateways to Ethernet and Fibre Channel, provide opportunities for further scalability and convergence. The SX6000 series can also be coupled with Mellanox's Unified Fabric Manager (UFM) software, which provides advanced fabric-wide monitoring and provisioning. UFM takes a logical, virtualized view of fabric resources to deliver unmatched control over the fabric, optimizing application performance and uptime. Whether used for parallel computation or as a converged fabric, the SX6000 series provides the industry's highest traffic-carrying capacity, making it easy to build clusters that scale out to thousands or tens of thousands of nodes. It offers superior price-performance and energy efficiency, reduces capital and operating expenses, and provides the best return on investment.

Mellanox FDR passive copper and active optical cables provide best-in-class performance and link reliability. Mellanox FDR passive copper cables, with lengths up to 5 meters, offer a cost-effective, low-power interconnect solution for in-rack connectivity. Mellanox FDR active optical cables are a complete, enclosed solution designed for longer reaches, enabling connectivity across large-scale systems and switches. With their extended reliability and production testing, Mellanox FDR cables ensure a robust, interoperable installation by maintaining a reliable link at 56Gb/s with a Bit Error Rate better than 10^-15.
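To put the quoted bit error rate in perspective, a small derived sketch (the error-interval figure below is computed from the quoted BER, not stated in the release):

```python
# Expected interval between bit errors on a link running at full rate
# with BER = 1e-15 (the release quotes "better than 10^-15").
data_rate_bps = 56e9  # 56 Gb/s nominal FDR signaling rate
ber = 1e-15

errors_per_second = data_rate_bps * ber    # expected bit errors per second
seconds_per_error = 1 / errors_per_second  # mean time between bit errors

print(f"expected errors/s:            {errors_per_second:.2e}")
print(f"mean time between bit errors: {seconds_per_error / 3600:.1f} hours")
```

At full line rate this works out to roughly one bit error every five hours per link, which is why the hardware-based forward error correction mentioned above matters at cluster scale, where thousands of links run concurrently.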

The Mellanox end-to-end ConnectX-3 and SwitchX-based FDR solutions offer a unique fabric architecture, providing cost-effective, high-message-rate, reliable and energy-efficient connectivity for both PCIe Gen2 and Gen3 servers. FDR throughput and performance can be maximized with a non-blocking architecture, or leveraged for improved power and economics using oversubscribed architectures in PCIe Gen2-based systems.

The ConnectX-3 adapter cards, FDR passive copper and optical cables and SX6000-series switches are sampling today with general availability in the second half of 2011.
Source: Mellanox
