The project gelled through a timely confluence of three major elements.
The first is the growing affordability and availability of very high transmission capacity, observed within National Research and Education Networks (NRENs) and increasingly between nations, both regionally and on an intercontinental scale. This was exemplified by the success of the transatlantic ANA-100 project, which linked MANLAN in New York and NetherLight in the Netherlands at 100 Gbps for the TERENA meeting in Maastricht in June 2013, joining Internet2, ESnet and CANARIE on the North American side with SURFnet, NORDUnet and DANTE in Europe. The one-year trial that followed was conclusive and is expected to result in the launch of five or six transatlantic 100 Gbps circuits by the end of 2014. The next frontier is transpacific: at the APAN meeting this February, the ACA-100 challenge (Asia Connects America at 100 Gbps) was launched, with SC14 in New Orleans in November as its deadline, eliciting interest and cooperation from R&E partners and industry in Australasia to emulate the six pioneers who made ANA-100 a reality.
The second is the advent and coming of age of Long Distance InfiniBand. The progress of InfiniBand as the protocol of choice in High Performance Computing is evidenced by the TOP500 supercomputing rankings. The technologies capable of transparently extending InfiniBand over arbitrary distances were pioneered by Obsidian Strategics in response to mission-critical global communications requirements from the US government. Native InfiniBand trials have been demonstrated at a number of SC events, including a link between NASA Ames and Goddard Space Flight Center on which encrypted, point-to-point native Long Distance InfiniBand delivers a 30-fold increase in effective throughput.
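Why range extension matters can be seen from a simple credit-limited throughput model: a flow that can only keep a fixed buffer's worth of data in flight achieves at most buffer/RTT, however fast the line is. The sketch below is illustrative only; the buffer sizes, the 4,000 km distance and the min(line rate, buffer/RTT) model are assumptions for exposition, not the parameters of the NASA link.

```python
# Illustrative sketch: why effective throughput collapses over distance when
# flow-control buffering is shallow, and why deep-buffered range extenders
# restore it. All numbers are made up for exposition.

def effective_throughput_gbps(link_gbps: float, buffer_bytes: float,
                              distance_km: float) -> float:
    """Throughput of a credit/window-limited flow: min(line rate, buffer / RTT)."""
    c_fibre_km_s = 200_000.0                 # ~2/3 the speed of light, in glass fibre
    rtt_s = 2 * distance_km / c_fibre_km_s   # round-trip propagation delay
    return min(link_gbps, buffer_bytes * 8 / rtt_s / 1e9)

distance_km = 4_000                          # assumed long-haul span, illustrative
shallow = effective_throughput_gbps(10.0, buffer_bytes=128 * 1024, distance_km=distance_km)
deep = effective_throughput_gbps(10.0, buffer_bytes=64 * 1024**2, distance_km=distance_km)
print(f"shallow buffers: {shallow:.2f} Gbps, deep buffers: {deep:.2f} Gbps")
```

With these assumed figures the shallow-buffered flow manages only a few tens of Mbps while the deep-buffered one saturates the 10 Gbps line, which is the kind of gap the reported 30-fold improvement reflects.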
A world-leading genomics institute based in Phoenix, the Translational Genomics Research Institute (TGen), has successfully deployed encrypted wide-area InfiniBand to connect its remote sequencer wet labs with HPC clusters at Arizona State University (ASU), resulting in dramatic performance improvements. Obsidian's Crossbow InfiniBand routers, combined with BGFC subnet management, carry Long Distance InfiniBand from relatively simple to advanced network designs, enabling multi-subnet capabilities and complex topologies. This has added the dimension needed for the third element: InfiniCortex.
The third element is the InfiniCortex project itself. The A*STAR Computational Resource Centre in Singapore conceived a Galaxy of Supercomputers and is developing the necessary mathematical tools and related software. Individual supercomputers at different geographic locations are connected into a Super-Graph. Each may have an arbitrary interconnect topology, while the Galaxy itself is based on a topology with small diameter and the lowest possible number of links. In graph-theoretic terms, this is an embedding of the graphs representing the supercomputers' topologies into a graph representing the Galaxy topology.
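A minimal sketch of that embedding idea, assuming the networkx Python library and made-up per-site topologies (this is not the project's actual software): each site's interconnect is its own graph, the sites are joined by a diameter-1 inter-site graph, and one long-distance edge per inter-site link attaches arbitrarily chosen gateway nodes.

```python
# Toy model of the Galaxy concept: per-site interconnect graphs embedded into
# a small-diameter inter-site topology. Topologies and sizes are illustrative.
import networkx as nx

# Per-site interconnects: arbitrary topologies (e.g. a 3D torus, a hypercube).
sites = {
    "siteA": nx.grid_graph(dim=[4, 4, 4], periodic=True),  # 3D torus
    "siteB": nx.hypercube_graph(6),
    "siteC": nx.complete_graph(8),
}

# Inter-site (Galaxy) topology: a complete graph over three sites has
# diameter 1 with few links; at larger scale the small-diameter/few-links
# trade-off becomes a degree-diameter style optimisation.
galaxy = nx.complete_graph(sites.keys())

# Embed: disjoint union of the site graphs, then one long-distance edge per
# galaxy edge between (arbitrarily chosen) gateway nodes at each site.
union = nx.union_all(
    [nx.relabel_nodes(g, lambda n, s=s: (s, n)) for s, g in sites.items()]
)
for a, b in galaxy.edges():
    gw_a = next(iter(sites[a].nodes()))
    gw_b = next(iter(sites[b].nodes()))
    union.add_edge((a, gw_a), (b, gw_b))

print("connected:", nx.is_connected(union), "diameter:", nx.diameter(union))
```

The resulting Super-Graph keeps each machine's internal topology intact while bounding the number of inter-site hops, which is the property the Galaxy design optimises for.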
The first phase of testing is currently under way in Singapore, linking first two and then three HPC clusters over dark fibre on the SingAREN national R&E network. The second phase, scheduled to start during the summer, will use the existing 10 Gbps SingAREN to JGN-X (Japan) link to interconnect A*STAR and the Tokyo Institute of Technology's TSUBAME-KFC supercomputer. The third phase, scheduled to start in September, will test the viability of InfiniBand over a 10 Gbps link more than 15,000 km long connecting A*STAR to the Oak Ridge National Laboratory (ORNL) Titan supercomputer in Tennessee and to Stony Brook University in New York State. Efforts are under way to line up further partners to see this testing phase culminate in a 100 Gbps connection between Singapore and the USA at SC14 in New Orleans in November, demonstrating multiple HPC sites collaborating as one.
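For a sense of what the third phase demands, a back-of-the-envelope calculation (assuming ~200,000 km/s signal propagation in fibre; figures are rough):

```python
# Rough figures for the phase-three link: 15,000 km at 10 Gbps.
distance_km = 15_000
rtt_s = 2 * distance_km / 200_000       # ~2/3 of c in optical fibre
bdp_bytes = 10e9 / 8 * rtt_s            # bandwidth-delay product at 10 Gbps
print(f"RTT ~{rtt_s * 1e3:.0f} ms, data in flight ~{bdp_bytes / 1e6:.0f} MB")
```

A round-trip time around 150 ms means roughly 190 MB must be kept in flight to fill the pipe, which is precisely what makes native InfiniBand over such a distance a demanding test.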
The InfiniCortex project team will organize a meeting at ISC'14 in Leipzig on June 25 from 8 to 10 am in lecture room 11.