When can Quantum Annealing win? (googleresearch.blogspot.com)
71 points by runesoerensen on Dec 8, 2015 | 11 comments


> from the paper http://arxiv.org/abs/1512.02206 : we report the total computational effort of QMC in standard units of time per single core.

That's an interesting way to compare quantum and classical computer "substrates" on the same footing: running time.

> Based on the results presented here, one cannot claim a quantum speedup [...] as this would require that the quantum processor [...] outperform the best known classical algorithm. This is not the case... because a variety of heuristic classical algorithms can solve most instances of Chimera structured problems much faster than SA, QMC, and the D-Wave 2X

So no quantum revolution yet, but potentially for larger problems and when the quantum "substrate" becomes more expressive, we'll see classical heuristics fail and quantum annealing win.


I'll add that you could cram a bunch of ASICs with annealing accelerators into a machine the size of the D-Wave. You'd see orders-of-magnitude speedup without anything quantum at all. I'm not sure if they looked inside one to see exactly what's running in it, but I'd rule out classical acceleration methods first.


I base my speculation on this comparison: http://bitcoin.stackexchange.com/questions/36412/what-is-the... between GPU and ASIC bitcoin mining and on https://en.bitcoin.it/wiki/Non-specialized_hardware_comparis... for GPU and CPU comparison.

From this, the difference between the GPU and ASIC solutions is about 10^4.

Google tested against a single core of an Intel(R) Xeon(R) CPU E5-1650, which would be roughly 10^3 slower than a GPU.

So one could say that an ASIC solution for bitcoin mining is about 10^7 times faster than a single-core CPU solution.
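A back-of-envelope check of that arithmetic (the factors are rough orders of magnitude taken from the linked comparisons, not measured numbers):

```python
import math

# Rough factors from the linked comparisons (assumptions, order of magnitude only)
gpu_over_cpu_core = 1e3   # GPU vs. single Xeon core
asic_over_gpu = 1e4       # bitcoin-mining ASIC vs. GPU

asic_over_cpu_core = gpu_over_cpu_core * asic_over_gpu
print(f"ASIC vs. single CPU core: ~10^{round(math.log10(asic_over_cpu_core))}")
# prints: ASIC vs. single CPU core: ~10^7
```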

So in this sense the parent's claim checks out.

Now, I do not know if it is possible to generalize these findings to the problem in question.


It looks like the actual difference is 10^6.


The paper mentioned they talked with folks doing special-purpose FPGAs, and those teams seem to disagree with your assessment ;-)

(at least, that's my read, maybe i misunderstood)


The one thing that appears to contradict my claim is the 10^8 speedup. That's a ridiculously huge speedup that even custom hardware won't achieve. I'm not sure whether it's an error or not, though. If it's right, then they have either a groundbreaking accelerated algorithm or a quantum annealing machine.


They say: "Based on the results presented here, one cannot claim a quantum speedup for D-Wave 2X, as this would require that the quantum processor in question outperforms the best known classical algorithm. This is not the case for the weak-strong cluster networks. This is because a variety of heuristic classical algorithms can solve most instances of Chimera structured problems much faster than SA, QMC, and the D-Wave 2X."

But in my opinion that does not mean it is not a quantum annealing computer, just that it does not bring the speedup we are looking for.


That stood out for me too. It might mean something or it might not. D-Wave just acts dodgier than most companies, so I'm a bit more skeptical.


Since I didn't know the naming scheme for D-Wave products: they are testing the "D-Wave 2X", the latest-generation, 1,000-qubit quantum annealer. From the press release when it was launched (which I just noticed was also linked in the Google post):

    In addition to scaling beyond 1000 qubits, the new
    system incorporates other major technological and
    scientific advancements. These include an
    operating temperature below 15 millikelvin, near
    absolute zero and 180 times colder than interstellar
    space. With over 128,000 Josephson tunnel
    junctions, the new processors are believed to be
    the most complex superconductor integrated circuits
    ever successfully used in production systems.
    Increased control circuitry precision and a 50% 
    reduction in noise also contribute to faster
    performance and enhanced reliability. 
http://www.dwavesys.com/blog/2015/08/announcing-d-wave-2x-qu...


Curious what effect a doubling of the D-Wave machine's "qubits" would have on this factor, and how soon that's likely to be achieved. Does the complexity / cost of building such a machine scale linearly with the number of qubits involved?

Are there problems that are reducible to quantum annealing that become attractive with such a performance improvement?


I guess it is a good time to recall this news: http://phys.org/news/2014-11-largest-factored-quantum-device...



