26 May 2015 Cambridge - *Quantum computers are largely theoretical devices that could perform some computations exponentially faster than conventional computers can. Crucial to most designs for quantum computers is quantum error correction, which helps preserve the fragile quantum states on which quantum computation depends.*

The ideal quantum error correction code would correct any errors in quantum data, and it would require measurement of only a few quantum bits, or qubits, at a time. But until now, codes that could make do with limited measurements could correct only a limited number of errors - one roughly equal to the square root of the total number of qubits. So they could correct eight errors in a 64-qubit quantum computer, for instance, but not 10.
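The difference between the two scaling regimes is easy to see with a short calculation. The sketch below is purely illustrative (the 0.25 fraction is an arbitrary example, not a figure from the paper):

```python
import math

def sqrt_limited_errors(n_qubits):
    """Correctable errors for codes stuck at the square-root bound:
    roughly sqrt(n) errors for n physical qubits."""
    return math.isqrt(n_qubits)

def fraction_limited_errors(n_qubits, fraction):
    """Correctable errors for a code that handles a fixed fraction
    of the qubits, as in the new scheme."""
    return int(fraction * n_qubits)

# For a 64-qubit computer, the square-root bound allows about 8 errors;
# a fixed-fraction code keeps scaling linearly as the machine grows.
print(sqrt_limited_errors(64))            # 8
print(fraction_limited_errors(64, 0.25))  # 16
```

At 64 qubits the gap is modest, but at 10,000 qubits a square-root code tops out around 100 errors while a fixed-fraction code corrects thousands.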

In a paper they're presenting at the Association for Computing Machinery's Symposium on Theory of Computing in June, researchers from MIT, Google, the University of Sydney, and Cornell University present a new code that can correct errors afflicting a specified fraction of a computer's qubits, not just the square root of their number. And that fraction can be arbitrarily large, although the larger it is, the more qubits the computer requires.

"There were many, many different proposals, all of which seemed to get stuck at this square-root point", stated Aram Harrow, an assistant professor of physics at MIT, who led the research. "So going above that is one of the reasons we're excited about this work."

Like a bit in a conventional computer, a qubit can represent 1 or 0, but it can also inhabit a state known as "quantum superposition", where it represents 1 and 0 simultaneously. This is the reason for quantum computers' potential advantages: A string of qubits in superposition could, in some sense, perform a huge number of computations in parallel.

Once you perform a measurement on the qubits, however, the superposition collapses, and the qubits take on definite values. The key to quantum algorithm design is manipulating the quantum state of the qubits so that when the superposition collapses, the result is - with high probability - the solution to a problem.
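A toy state-vector simulation (not the researchers' code, just a standard textbook picture) makes the collapse concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

# A single qubit in equal superposition of 0 and 1:
# both amplitudes are 1/sqrt(2).
state = np.array([1.0, 1.0]) / np.sqrt(2)

# Measurement probabilities are the squared magnitudes
# of the amplitudes: here, 50/50.
probs = np.abs(state) ** 2

# Measuring collapses the superposition: the qubit takes on
# a definite value, and the other amplitude is gone for good.
outcome = rng.choice([0, 1], p=probs)
collapsed = np.zeros(2)
collapsed[outcome] = 1.0
```

Algorithm design, in this picture, means steering the amplitudes so that the probability mass concentrates on the answer before the measurement happens.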

But the need to preserve superposition makes error correction difficult. "People thought that error correction was impossible in the '90s", Harrow explained. "It seemed that to figure out what the error was you had to measure, and measurement destroys your quantum information."

The first quantum error correction code was invented in 1995 by Peter Shor, now the Morss Professor of Applied Mathematics at MIT, with an office just down the hall from Harrow's. Shor is also responsible for the theoretical result that put quantum computing on the map: an algorithm, published in 1994, that would enable a quantum computer to factor large numbers exponentially faster than a conventional computer can. In fact, his error-correction code was a response to skepticism about the feasibility of implementing his factoring algorithm.

Peter Shor's insight was that it's possible to measure relationships between qubits without measuring the values stored by the qubits themselves. A simple error-correcting code could, for instance, instantiate a single qubit of data as three physical qubits. It's possible to determine whether the first and second qubit have the same value, and whether the second and third qubit have the same value, without determining what that value is. If one of the qubits turns out to disagree with the other two, it can be reset to their value.
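The parity idea can be sketched classically (bit-flip errors only; a real quantum code extracts these syndromes with ancilla qubits so the data is never read directly, which this toy version omits):

```python
def parity_checks(qubits):
    """Syndrome measurement for the 3-qubit repetition code:
    compares neighbouring qubits without revealing their values."""
    return (qubits[0] != qubits[1], qubits[1] != qubits[2])

def correct(qubits):
    """Reset whichever qubit disagrees with the other two."""
    s01, s12 = parity_checks(qubits)
    q = list(qubits)
    if s01 and not s12:
        q[0] = q[1]   # first qubit is the odd one out
    elif s01 and s12:
        q[1] = q[0]   # middle qubit is the odd one out
    elif s12 and not s01:
        q[2] = q[1]   # last qubit is the odd one out
    return q

# A logical 1 encoded as [1, 1, 1], with a flip on the first
# qubit, is repaired using only the two disagreement bits:
print(correct([0, 1, 1]))   # [1, 1, 1]
```

Note that the two syndrome bits pinpoint which qubit flipped without ever saying whether the stored value was 0 or 1.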

In quantum error correction, Harrow explained, "These measurements always have the form 'Does A disagree with B?' Except it might be, instead of A and B, A B C D E F G, a whole block of things. Those types of measurements, in a real system, can be very hard to do. That's why it's really desirable to reduce the number of qubits you have to measure at once."

A quantum computation is a succession of states of quantum bits. The bits are in some state; then they're modified, so that they assume another state; then they're modified again; and so on. The final state represents the result of the computation.

In their paper, Harrow and his colleagues assign each state of the computation its own bank of qubits; it's like turning the time dimension of the computation into a spatial dimension. Suppose that the state of qubit 8 at time 5 has implications for the states of both qubit 8 and qubit 11 at time 6. The researchers' protocol performs one of those agreement measurements on all three qubits, modifying the state of any qubit that's out of alignment with the other two.

Since the measurement doesn't reveal the state of any of the qubits, modification of a misaligned qubit could actually introduce an error where none existed previously. But that's by design: The purpose of the protocol is to ensure that errors spread through the qubits in a lawful way. That way, measurements made on the final state of the qubits are guaranteed to reveal relationships between qubits without revealing their values. If an error is detected, the protocol can trace it back to its origin and correct it.

It may be possible to implement the researchers' scheme without actually duplicating banks of qubits. But, Harrow said, some redundancy in the hardware will probably be necessary to make the scheme efficient. How much redundancy remains to be seen: Certainly, if each state of a computation required its own bank of qubits, the computer might become so complex as to offset the advantages of good error correction.

But Harrow said: "Almost all of the sparse schemes started out with not very many logical qubits, and then people figured out how to get a lot more. Usually, it's been easier to increase the number of logical qubits than to increase the distance - the number of errors you can correct. So we're hoping that will be the case for ours, too."
