4 Apr 2016 Sunnyvale, Yokneam - Mellanox Technologies Ltd., a supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, has introduced a new line of InfiniBand router systems. The new EDR 100Gb/s InfiniBand routers enable a new level of scalability, critical for the next generation of mega data-center deployments, as well as expanded data-center isolation between different users and applications. The network router delivers a consistent, high-performance, low-latency routing solution that is mission-critical for high-performance computing (HPC), cloud, Web 2.0, machine learning and enterprise applications.
Mellanox's SB7780 InfiniBand Router family is based on the Switch-IB switch ASIC and offers 36 fully flexible EDR 100Gb/s ports, which can be split among up to six different subnets. The InfiniBand Router brings two major enhancements to the Mellanox switch portfolio:
The SB7780 InfiniBand Router can connect between different types of topologies, enabling each subnet topology to best fit and maximize each application's performance. For example, the storage subnets may use a Fat-Tree topology while the compute subnets may use 3D-Torus, Dragonfly+, Fat-Tree or other topologies that best fit the local application. The SB7780 can also help split the cluster in order to segregate applications that run best on localized resources from applications that require the full fabric.
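The partitioning scheme described above can be sketched in code. The following is a hypothetical illustration only, not a Mellanox API: it models how the router's 36 ports might be divided among up to six isolated subnets, each carrying its own topology. All names (`Subnet`, `allocate`) and the example port counts are assumptions for illustration.

```python
# Hypothetical sketch (not a Mellanox API): splitting the SB7780's
# 36 EDR 100Gb/s ports among isolated subnets with per-subnet topologies.
from dataclasses import dataclass, field

TOTAL_PORTS = 36   # ports on the SB7780
MAX_SUBNETS = 6    # ports can be split among up to six subnets

@dataclass
class Subnet:
    name: str
    topology: str              # e.g. "fat-tree", "3d-torus", "dragonfly+"
    ports: list = field(default_factory=list)

def allocate(spec):
    """Assign port numbers 1..36 to subnets from a list of
    (name, topology, port_count) tuples, enforcing the limits above."""
    if len(spec) > MAX_SUBNETS:
        raise ValueError(f"at most {MAX_SUBNETS} subnets supported")
    if sum(n for _, _, n in spec) > TOTAL_PORTS:
        raise ValueError(f"only {TOTAL_PORTS} ports available")
    subnets, next_port = [], 1
    for name, topology, n_ports in spec:
        subnets.append(Subnet(name, topology,
                              list(range(next_port, next_port + n_ports))))
        next_port += n_ports
    return subnets

# Example mirroring the text: storage on a Fat-Tree subnet,
# compute subnets on topologies that fit their local applications.
plan = allocate([
    ("storage",  "fat-tree",   12),
    ("compute0", "dragonfly+", 12),
    ("compute1", "3d-torus",   12),
])
for sn in plan:
    print(sn.name, sn.topology, len(sn.ports))
```

Each subnet here is an independent routing domain, so a topology choice in one subnet never constrains the others.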
"The SB7780 InfiniBand Router adds another layer to Mellanox's solutions that pave the road to Exascale", stated Gilad Shainer, vice president of marketing at Mellanox. "This new InfiniBand Router gives us the ability to scale up to a virtually unlimited number of nodes and yet sustain the data processing demands of machine learning, IoT, HPC and cloud applications. Mellanox's EDR 100Gb/s InfiniBand solutions, together with the SB7780 router, represent the only scalable solution currently available on the market that supports these needs."
"This new technology will allow us to enable isolation between high-performance compute systems while allowing access to our center-wide storage resources, and allow us to continue to expand our connectivity to meet future needs", stated Scott Atchley, HPC Systems Engineer at Oak Ridge National Laboratory.