A hybrid quantum and classical approach may be the answer to tackling this problem with existing quantum hardware. Researchers at the U.S. Department of Energy's (DOE) Argonne National Laboratory and Los Alamos National Laboratory, along with researchers at Clemson University and Fujitsu Laboratories of America, have developed hybrid algorithms to run on quantum machines and have demonstrated them for practical applications using IBM quantum computers and a D-Wave quantum computer.
The team's work is presented in an article titled "A Hybrid Approach for Solving Optimization Problems on Small Quantum Computers," which appears in the June 2019 issue of the Institute of Electrical and Electronics Engineers (IEEE) magazine Computer.
Concerns about qubit connectivity, high noise levels, the effort required to correct errors, and the scalability of quantum hardware have limited researchers' ability to deliver the solutions that future quantum computing promises.
The hybrid algorithms that the team developed employ the best features and capabilities of both classical and quantum computers to address these limitations. For example, classical computers have large memories capable of storing huge datasets - a challenge for quantum devices that have only a small number of qubits. On the other hand, quantum algorithms perform better for certain problems than classical algorithms.
To distinguish between the two very different types of hardware on which the stages of a hybrid algorithm run, the team referred to the processors of classical computers as central processing units (CPUs) and the processors of quantum computers as quantum processing units (QPUs).
The team seized on graph partitioning and clustering as examples of practical and important optimization problems that can already be solved using quantum computers: a small graph problem can be solved directly on a QPU, while larger graph problems require hybrid quantum-classical approaches.
When a problem became too large to run directly on a quantum computer, the researchers used decomposition methods to break it down into smaller pieces that the QPU could manage - an idea they borrowed from high-performance computing and classical numerical methods.
All the pieces were then assembled into a final solution on the CPU, which not only found better parameters, but also identified the best sub-problem size to solve on a quantum computer.
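The decompose-solve-assemble pattern can be illustrated classically. The sketch below is our own illustration, not the authors' code: the QPU subroutine is replaced by a brute-force solver that handles one small block of graph nodes at a time, while a CPU-side loop decomposes the node set and assembles the block solutions into one global two-way partition. The function names and the balance-penalty objective are illustrative assumptions.

```python
from itertools import product

def cut_size(edges, assign):
    """Number of edges crossing between the two parts."""
    return sum(1 for u, v in edges if assign[u] != assign[v])

def objective(nodes, edges, assign):
    """Cut size plus a penalty that keeps the two parts balanced
    (an illustrative stand-in for a QUBO-style objective)."""
    ones = sum(assign[n] for n in nodes)
    return cut_size(edges, assign) + (ones - len(nodes) / 2) ** 2

def solve_block(nodes, edges, assign, block):
    """Stand-in for the QPU call: exhaustively try every labeling of a
    small block of nodes, holding the rest of the assignment fixed."""
    best, best_cost = assign, objective(nodes, edges, assign)
    for labels in product([0, 1], repeat=len(block)):
        trial = dict(assign)
        trial.update(zip(block, labels))
        cost = objective(nodes, edges, trial)
        if cost < best_cost:
            best, best_cost = trial, cost
    return best

def hybrid_partition(nodes, edges, block_size=3, sweeps=3):
    """CPU-side driver: decompose the node set into small blocks, hand
    each block to the (simulated) QPU subroutine, and assemble the
    block solutions into one global partition."""
    assign = {n: i % 2 for i, n in enumerate(nodes)}  # arbitrary start
    for _ in range(sweeps):
        for i in range(0, len(nodes), block_size):
            assign = solve_block(nodes, edges, assign,
                                 nodes[i:i + block_size])
    return assign

# Two triangles joined by one bridge edge; the natural balanced
# partition cuts only the bridge.
nodes = [0, 1, 2, 3, 4, 5]
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = hybrid_partition(nodes, edges)
print(cut_size(edges, part))  # expect 1: only the bridge is cut
```

On real hardware, each block would instead be encoded as a small optimization instance and dispatched to a gate-model or annealing QPU; here the block size plays the role of the sub-problem size that the CPU tunes.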
Such hybrid approaches are not a silver bullet; they cannot deliver a quantum speed-up, because the decomposition scheme itself becomes a bottleneck as the problem grows. In the next 10 years, though, expected improvements in qubits (quality, count, and connectivity), error correction, and quantum algorithms will decrease runtime and enable more advanced computation.
"In the meantime," according to Yuri Alexeev, principal project specialist in the Computational Science division, "this approach will enable researchers to use near-term quantum computers to solve applications that support the DOE mission. For example, it can be applied to find community structures in metabolic networks or a microbiome."
Additional paper authors include Ruslan Shaydulin and Ilya Safro of Clemson University, Hayato Ushijima-Mwesigwa of Fujitsu Laboratories of America, and Christian F.A. Negre and Susan M. Mniszewski of Los Alamos National Laboratory.
This research leveraged the computing resources of the Argonne Leadership Computing Facility, a DOE Office of Science User Facility; IBM quantum computers at the Oak Ridge National Laboratory IBM Q hub; and a D-Wave 2000Q quantum computer provided by the DOE National Nuclear Security Administration's Advanced Simulation and Computing Program at Los Alamos National Laboratory.