Quantum computers have the potential to solve problems that are beyond the reach of even today's biggest supercomputers, in areas such as drug modelling and optimisation.
"Our approach gives a way to generate a proof that a computation was correct, after it has been completed", explained Joseph Fitzsimons, a Principal Investigator at the Centre for Quantum Technologies at the National University of Singapore and Assistant Professor at the Singapore University of Technology and Design.
Joseph Fitzsimons carried out the work with colleague Michal Hajdušek and collaborator Tomoyuki Morimae, who is at Kyoto University in Japan. Their proposals were published on January 22, 2018 in Physical Review Letters.
Quantum computers today are bulky, specialised machines that require careful maintenance, meaning that people are more likely to access machines owned and operated by a third party than to have their own - like a quantum version of a Cloud service.
Customers sending off data and programmes to a quantum computer will want to check that their instructions have been carried out as they intended. This problem of verification has been tackled before, but previous solutions required the customer to interact with the quantum computer while it was running the computation.
That kind of back-and-forth communication isn't necessary in the new scheme. "If you receive a result that looks fishy, you can choose to verify the result, essentially retrospectively", stated Joseph Fitzsimons. Verification guards against a quantum computer that does not perform correctly because of an accidental fault or even malicious tampering.
The improvement comes from how the calculation is checked. "The approach is completely different. We try to produce a state which can be used as a witness to the correctness of the computation. The previous approaches had some kind of trap built into the computation that gets checked as you go along", explained Joseph Fitzsimons.
The witness state registers each step of the computation, so its size grows with the computation's length: roughly one qubit per step, plus the qubits of the computation itself. For example, a computation of 1000 steps on 100 qubits would need a witness 1100 qubits long.
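The scaling described above can be illustrated with a few lines of code. This is only a back-of-the-envelope sketch of the sizes quoted in the article, not anything from the paper itself; the function name and the simple steps-plus-qubits formula are illustrative assumptions.

```python
# Illustrative only: estimate the witness size from the article's example,
# where the witness needs roughly one qubit per computation step plus the
# qubits of the computation itself. (Hypothetical helper, not from the paper.)
def witness_qubits(steps: int, qubits: int) -> int:
    """Return the estimated witness length: one qubit per step + the
    computation's own qubits."""
    return steps + qubits

# The article's example: 1000 steps on 100 qubits -> a 1100-qubit witness.
print(witness_qubits(1000, 100))  # prints 1100
```

The point of the sketch is simply that the witness grows linearly with the size of the computation being verified, which is why the ~50-qubit machines available today (mentioned below) are too small to demonstrate the scheme.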
The research team present two post-hoc verification schemes, based on different ways of testing the witness state. The first requires the customer to be able to send and measure quantum bits. In practice, this means they would need some specialized hardware and a line for sending these qubits to the owner of the quantum computer. The customer then measures the witness directly.
In the second scheme, the customer needs no quantum tools at all - communication over the regular internet would do - but the quantum computer doing the calculation must be networked with five other quantum computers that help to check the witness state, acting as provers.
"It will be difficult to do an experiment to demonstrate post-hoc verification, but maybe not impossible", stated Joseph Fitzsimons. A challenge is the size of the quantum computers available today - the biggest are around 50 qubits. Another is that the networked setups required for the prover schemes don't exist - at least not yet.
The researchers wrap up their paper by pointing out an interesting advantage of the post-hoc verification scheme: It's not only the customer who could check that a computation was carried out correctly. The scheme allows 'public verifiability'. The witness could be checked by a trusted third party, such as a court. This could protect the company if, say, a customer claimed the computation was not done correctly to avoid paying for the service.
The paper titled "Post hoc verification of quantum computation" has been published in Physical Review Letters 120, 040501 (2018).
This paper combines results found in the following preprints: "Post hoc verification of quantum computation" by Joseph F. Fitzsimons and Michal Hajdušek, and "Post hoc verification with a single prover" by Tomoyuki Morimae and Joseph F. Fitzsimons.