> from the paper http://arxiv.org/abs/1512.02206 : we report the total computational effort of QMC in standard units of time per single core.
That's an interesting way to compare quantum and classical computer "substrates" on the same footing: running time.
> Based on the results presented here, one cannot claim a quantum speedup [...] as this would require that the quantum processor in question outperforms the best known classical algorithm. This is not the case [...] because a variety of heuristic classical algorithms can solve most instances of Chimera structured problems much faster than SA, QMC, and the D-Wave 2X.
So no quantum revolution yet, but potentially for larger problems and when the quantum "substrate" becomes more expressive, we'll see classical heuristics fail and quantum annealing win.
I'll add that I could cram a bunch of ASICs with annealing accelerators into a machine the size of the D-Wave. You'd see an order-of-magnitude speedup without anything quantum at all. I'm not sure whether they looked inside one to see exactly what's running in it, but I'd want to rule out classical acceleration methods first.
The one thing that appears to contradict my claim is the reported 10^8 speedup. That's a ridiculously large speedup that even custom hardware won't achieve. I'm not sure whether it's an error, but if it's right, they have either a groundbreaking classical acceleration algorithm or a genuine quantum annealing machine.
They say:

> Based on the results presented here, one cannot claim a quantum speedup for D-Wave 2X, as this would require that the quantum processor in question outperforms the best known classical algorithm. This is not the case for the weak-strong cluster networks. This is because a variety of heuristic classical algorithms can solve most instances of Chimera structured problems much faster than SA, QMC, and the D-Wave 2X.
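For readers unfamiliar with the baselines: SA is simulated annealing, one of the classical heuristics the paper benchmarks against. Here's a toy sketch of SA on an Ising problem (my own minimal illustration, not the tuned implementation used in the study; the function names and schedule are made up for the example):

```python
import math
import random

def ising_energy(h, J, s):
    """E(s) = sum_i h[i]*s[i] + sum_{(i,j)} J[(i,j)]*s[i]*s[j], s[i] in {-1,+1}."""
    return (sum(hi * s[i] for i, hi in h.items())
            + sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()))

def simulated_anneal(h, J, n, sweeps=2000, beta0=0.1, beta1=3.0, seed=1):
    """Metropolis single-spin-flip SA with a linear inverse-temperature ramp."""
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(n)]
    nbrs = {i: [] for i in range(n)}  # adjacency lists for O(degree) updates
    for (i, j), Jij in J.items():
        nbrs[i].append((j, Jij))
        nbrs[j].append((i, Jij))
    best_E, best_s = ising_energy(h, J, s), s[:]
    for t in range(sweeps):
        beta = beta0 + (beta1 - beta0) * t / (sweeps - 1)  # anneal schedule
        for i in range(n):
            # Energy change from flipping spin i depends only on its local field.
            dE = -2 * s[i] * (h.get(i, 0.0)
                              + sum(Jij * s[j] for j, Jij in nbrs[i]))
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                s[i] = -s[i]
        E = ising_energy(h, J, s)
        if E < best_E:
            best_E, best_s = E, s[:]
    return best_E, best_s

# Ferromagnetic chain of 8 spins: ground state is all-aligned, energy -7.
J_chain = {(i, i + 1): -1.0 for i in range(7)}
E, s = simulated_anneal({}, J_chain, 8)
```

The point of the paper's comparison is that this kind of thermal hill-climbing, run on ordinary CPUs, already solves most Chimera-structured instances faster than the D-Wave hardware.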
But that, in my opinion, does not mean it is not a quantum annealing computer, just that it does not deliver the speedup we are looking for.
Since I didn't know the naming scheme for D-Wave products: they are testing the "D-Wave 2X", the latest-generation 1,000-qubit quantum annealer. From the press release when it launched (which I just noticed is also linked in the Google post):
> In addition to scaling beyond 1000 qubits, the new system incorporates other major technological and scientific advancements. These include an operating temperature below 15 millikelvin, near absolute zero and 180 times colder than interstellar space. With over 128,000 Josephson tunnel junctions, the new processors are believed to be the most complex superconductor integrated circuits ever successfully used in production systems. Increased control circuitry precision and a 50% reduction in noise also contribute to faster performance and enhanced reliability.
Curious what effect a doubling of the D-Wave machine's qubits would have on this factor, and how soon that's likely to be achieved. Does the complexity/cost of building such a machine scale linearly with the number of qubits involved?
Are there problems that are reducible to quantum annealing that become attractive with such a performance improvement?
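On reducibility: any problem that can be phrased as minimizing an Ising (or equivalently QUBO) energy can in principle be handed to an annealer. A standard textbook example is Max-Cut, sketched below (a toy illustration with made-up helper names; real deployments also need to embed the problem into the Chimera graph, which this skips):

```python
from itertools import product

def maxcut_to_ising(edges):
    """Textbook reduction: Max-Cut -> antiferromagnetic Ising couplings.
    cut(s) = sum_{(i,j)} (1 - s_i*s_j)/2, so maximizing the cut is the
    same as minimizing E(s) = sum_{(i,j)} J_ij*s_i*s_j with J_ij = +1."""
    return {e: 1.0 for e in edges}

def ground_state(J, n):
    """Exhaustive search over all 2^n spin states; illustration only."""
    best = None
    for s in product([-1, 1], repeat=n):
        E = sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
        if best is None or E < best[0]:
            best = (E, s)
    return best

# A 4-cycle: the optimal cut severs all 4 edges (alternate the partition).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
J = maxcut_to_ising(edges)
E, s = ground_state(J, 4)
cut_size = sum((1 - s[i] * s[j]) // 2 for (i, j) in J)
```

Whether such reductions become attractive depends on the constant factors: the reduction itself is cheap, but the embedding overhead and the competition from classical heuristics are exactly what the paper is measuring.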