Why quantum volume is vital for plotting the path to quantum advantage
Measuring the computational ability of quantum computers is, like anything involving quantum systems, a complex problem. Counting the number of qubits in a quantum computer to determine computational power is too simplistic to be functionally useful: differences in how individual qubits are connected, how the qubits themselves are designed, and environmental factors make raw qubit counts a misleading basis for comparison.
For example, D-Wave plans to launch a 5,000-qubit system for cloud-based access in mid-2020. Google, by contrast, has a 72-qubit quantum computer called “Bristlecone,” and IBM’s Q System One is a 20-qubit design. Differences in how these qubits are designed and connected make cross-vendor comparisons unreliable: while D-Wave’s upcoming 5,000-qubit system will undoubtedly be more capable than its current-generation 2,000-qubit system, it is not necessarily better than IBM’s designs or Google’s prototypes. Further, D-Wave’s design is a quantum annealer, useful for a single class of problem: quadratic unconstrained binary optimization (QUBO).
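To make the QUBO formulation concrete, here is a minimal sketch in Python. A QUBO problem asks for the binary vector x that minimizes the energy x^T Q x for a given matrix Q; the matrix below is purely illustrative (an assumption for this example, not a problem from any vendor), and the exhaustive search stands in for what an annealer does heuristically at scale.

```python
import itertools

import numpy as np

# Illustrative 3-variable QUBO matrix (hypothetical values chosen for the demo).
# Diagonal entries are linear terms; off-diagonal entries couple pairs of bits.
Q = np.array([
    [-1.0,  2.0,  0.0],
    [ 0.0, -1.0,  2.0],
    [ 0.0,  0.0, -1.0],
])

def qubo_energy(x, Q):
    """Energy x^T Q x of a binary assignment x under QUBO matrix Q."""
    x = np.asarray(x)
    return float(x @ Q @ x)

# Brute-force search over all 2^3 binary assignments; an annealer samples
# low-energy states instead of enumerating them.
best = min(itertools.product([0, 1], repeat=3), key=lambda x: qubo_energy(x, Q))
print(best, qubo_energy(best, Q))  # → (1, 0, 1) -2.0
```

Brute force is only viable for tiny instances; the appeal of annealing hardware is attacking the same objective when the number of variables makes enumeration impossible.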