Primeur weekly 2017-04-18

Quantum computing

QxBranch and Commonwealth Bank of Australia launch quantum computing simulator ...

Indistinguishable photons key to advancing quantum technologies ...

Recent advances and new insights into quantum image processing ...

Focus on Europe

Teratec 2017 Forum issues Call for Participation ...

Hazel Hen helps explain ultrafast phase transition ...

Hardware

Engility to pursue NASA advanced computing services opportunity ...

DDN names Jessica Popp General Manager of IME business unit ...

Eni fires up its HPC3, the new hybrid high performance computer for E&P activities ...

DDN advances object storage performance and delivers industry's most flexible and cost-effective data protection ...

Asetek to receive RackCDU D2C order for new HPC installation ...

PSNC deploys ADVA Optical Networking 96-channel 100G core solution in pan-European research network ...

Putting a spin on logic gates ...

Tool for checking complex computer architectures reveals flaws in emerging design ...

System better allots network bandwidth, for faster page loads ...

Applications

SDSC to enhance campus research computing resources for bioinformatics ...

U.S. Department of Energy's INCITE programme seeks advanced computational research proposals for 2018 ...

Tutorials schedule announced for PEARC17 ...

Fujitsu awarded three prizes for science and technology from MEXT ...

Fujitsu and Grid partner to jointly develop AI services ...

IBM brings Anaconda Open Data Science platform to IBM Cognitive Systems ...

Jefferson Lab scientists eavesdrop on chatter of sub-atomic world ...

Buckle up - Climate change to increase severe aircraft turbulence ...

Beyond the frontiers of Supercomputing ...

Scientists develop a novel algorithm, inspired by the behaviour of bee colonies, which will help dismantle criminal social networks ...

The Cloud

Atos leads C2NET consortium - the first collaborative Cloud-based platform for SMEs to support manufacturing management ...

Comcast Business now provides enterprises with dedicated links to IBM Cloud ...

Nimbix ushers in next-generation GPUs for Cloud-based deep learning ...

USFlash

Group works toward devising topological superconductor ...

Stanford researchers create deep learning algorithm that could boost drug development ...

Biased bots: Human prejudices sneak into artificial intelligence systems ...

System better allots network bandwidth, for faster page loads

28 Mar 2017 Cambridge - A webpage today is often the sum of many different components. A user's home page on a social-networking site, for instance, might display the latest posts from the user's friends; the associated images, links, and comments; notifications of pending messages and comments on the user's own posts; a list of events; a list of topics currently driving online discussions; a list of games, some of which are flagged to indicate that it's the user's turn; and of course the all-important ads, which the site depends on for revenue.

With increasing frequency, each of those components is handled by a different programme running on a different server in the website's data centre. That reduces processing time, but it exacerbates another problem: the equitable allocation of network bandwidth among programmes.

Many websites aggregate all of a page's components before shipping them to the user. So if just one programme has been allocated too little bandwidth on the data centre network, the rest of the page - and the user - could be stuck waiting for its component.

At the Usenix Symposium on Networked Systems Design and Implementation, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have presented a new system for allocating bandwidth in data centre networks. In tests, the system maintained the same overall data transmission rate - or network "throughput" - as systems currently in use, but it allocated bandwidth much more fairly, completing the download of all of a page's components up to four times as quickly.

"There are easy ways to maximize throughput in a way that divides up the resource very unevenly", stated Hari Balakrishnan, the Fujitsu Professor in Electrical Engineering and Computer Science and one of two senior authors on the paper describing the new system. "What we have shown is a way to very quickly converge to a good allocation."

Joining Hari Balakrishnan on the paper, titled "Flowtune: Flowlet control for datacenter networks", are first author Jonathan Perry, a graduate student in electrical engineering and computer science, and Devavrat Shah, a professor of electrical engineering and computer science.

Most networks regulate data traffic using some version of the transmission control protocol, or TCP. When traffic gets too heavy, some packets of data don't make it to their destinations. With TCP, when a sender realizes its packets aren't getting through, it halves its transmission rate, then slowly ratchets it back up. Given enough time, this procedure will reach an equilibrium point at which network bandwidth is optimally allocated among senders.
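This halve-on-loss, creep-back-up behaviour is TCP's additive-increase/multiplicative-decrease (AIMD) rule. As a rough illustration - the function name, constants, and loss pattern below are invented for this sketch, not taken from any real TCP implementation - a sender's rate evolves like this:

```python
# Rough sketch of TCP-style AIMD rate control. Names, constants and
# the loss pattern are illustrative, not from a real TCP stack.

def aimd_step(rate_mbps, packet_lost, increase_step=1.0):
    """One round trip of rate adjustment: halve the rate on loss
    (multiplicative decrease), otherwise add a small fixed step
    (additive increase)."""
    if packet_lost:
        return rate_mbps / 2.0
    return rate_mbps + increase_step

# A sender probing for bandwidth, then backing off when a packet is lost:
rate = 10.0
for lost in [False, False, False, True, False, False]:
    rate = aimd_step(rate, lost)
    print(f"rate = {rate:.1f} Mbps")
```

The slow additive climb after each halving is why reaching that equilibrium takes many round trips - which is exactly the problem inside a data centre.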

But in a big website's data centre, there's often not enough time. "Things change in the network so quickly that this is inadequate", Jonathan Perry stated. "Frequently it takes so long that [the transmission rates] never converge, and it's a lost cause."

TCP gives all responsibility for traffic regulation to the end users because it was designed for the public internet, which links together thousands of smaller, independently owned and operated networks. Centralizing the control of such a sprawling network seemed infeasible, both politically and technically.

But a data centre is controlled by a single operator, and with the increases in the speed of both data connections and computer processors over the last decade, centralized regulation has become practical. The CSAIL researchers' system takes this centralized approach.

The system, dubbed Flowtune, essentially adopts a market-based solution to bandwidth allocation. Operators assign different values to increases in the transmission rates of data sent by different programmes. For instance, doubling the transmission rate of the image at the centre of a webpage might be worth 50 points, while doubling the transmission rate of analytics data that's reviewed only once or twice a day might be worth only 5 points.

As in any good market, every link in the network sets a "price" according to "demand" - that is, according to the amount of data that senders collectively want to send over it. For every pair of sending and receiving computers, Flowtune then calculates the transmission rate that maximizes total "profit", or the difference between the value of increased transmission rates - the 50 points for the picture versus the 5 for the analytics data - and the price of the requisite bandwidth across all the intervening links.

The maximization of profit, however, changes demand across the links, so Flowtune continually recalculates prices and on that basis recalculates maximum profits, assigning the resulting transmission rates to the servers sending data across the network.
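This price-and-profit loop is, in effect, an iterative market: senders respond to prices, and prices respond to demand. The toy sketch below is not Flowtune's actual code - the flows, point values, link capacities, and step size are all invented for illustration - but it shows the shape of such an iteration: each sender picks the rate that maximizes its profit given the current link prices, and each link then nudges its price up or down in proportion to excess demand.

```python
# Toy reconstruction of the price/profit loop described above. This is
# NOT Flowtune's algorithm; the flows, point values, link capacities
# and step size are invented for illustration.

# Each flow: (name, value weight in "points", links on its path)
flows = [
    ("image",     50.0, ["link_a", "link_b"]),
    ("analytics",  5.0, ["link_b"]),
]
capacity = {"link_a": 10.0, "link_b": 10.0}   # Gb/s per link
price = {link: 1.0 for link in capacity}      # initial link "prices"
step = 0.01                                   # price-update step size

for _ in range(2000):
    # Each sender maximizes its profit w * log(rate) - rate * path_price,
    # which peaks at rate = w / path_price.
    rate = {}
    for name, w, links in flows:
        path_price = sum(price[link] for link in links)
        rate[name] = w / path_price

    # Each link adjusts its price in proportion to excess demand.
    for link in capacity:
        demand = sum(rate[n] for n, _, ls in flows if link in ls)
        price[link] = max(1e-6, price[link] + step * (demand - capacity[link]))

for name, r in rate.items():
    print(f"{name}: {r:.2f} Gb/s")
```

In this toy run, the two flows sharing link_b settle near 9.1 and 0.9 Gb/s, splitting the congested link's capacity roughly in proportion to their 50:5 point values.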

The paper also describes a new procedure that the researchers developed for allocating Flowtune's computations across cores in a multi-core computer, to boost efficiency. In experiments, the researchers compared Flowtune to a widely used variation on TCP, using data from real data centres. Depending on the data set, Flowtune completed the slowest 1 percent of data requests nine to 11 times as rapidly as the existing system.
Source: Massachusetts Institute of Technology - MIT
