- This particular problem was artificially constructed with the specific goal of demonstrating quantum supremacy. It has little to no practical use. Essentially, they looked at what the easiest thing would be for current quantum computers to do, and formulated a problem based on that
- Because of the above point, this does not mean that the quantum computer can do anything useful (and definitely not factor numbers). It is nowhere near high enough fidelity to run heavily error-corrected algorithms (which Shor's and Grover's algorithms demand)
Since this is a mostly theoretical exercise to begin with, it's odd that they report the computational cost of the classical method in terms of real run times on supercomputers, and barely mention the theoretical bounds. The practical cost doesn't matter much as far as demonstrating quantum supremacy is concerned.
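To see why the classical cost is discussed at all: the obvious brute-force classical approach is full state-vector simulation, whose memory and time grow as 2^n in the number of qubits. Here is a minimal, hypothetical sketch (not the method used in the actual supremacy experiment or its classical baselines) that stores all 2^n amplitudes, applies Hadamard gates, and samples one bitstring from the output distribution:

```python
import math
import random

def apply_hadamard(state, target):
    """Apply a Hadamard gate to the `target` qubit of a state vector
    stored as a flat list of 2**n complex amplitudes."""
    s = 1 / math.sqrt(2)
    step = 1 << target
    new = state[:]  # read from the old state, write into a copy
    for i in range(len(state)):
        if i & step == 0:  # pair each index with its partner differing in bit `target`
            a, b = state[i], state[i | step]
            new[i] = s * (a + b)
            new[i | step] = s * (a - b)
    return new

n = 12
state = [0j] * (1 << n)  # 2**n amplitudes: storage is exponential in n
state[0] = 1 + 0j        # start in |00...0>
for q in range(n):
    state = apply_hadamard(state, q)

# Sampling one bitstring from the circuit's output distribution:
probs = [abs(a) ** 2 for a in state]
sample = random.choices(range(1 << n), weights=probs, k=1)[0]
```

At n = 12 this is trivial, but each extra qubit doubles the list, so ~50 qubits already needs petabytes of RAM, which is why the classical comparison points are supercomputers and cleverer (but still exponential-time) algorithms rather than this direct approach.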
As far as I know, the key to claiming true quantum supremacy in this case is actually the proof of the theoretical complexity of the classical algorithm. The quantum computer is obviously efficient at solving the problem, considering that the problem was constructed with what a QC is good at in mind. And the hardness for classical algorithms had already been demonstrated to some extent by Aaronson and Arkhipov in 2011. They managed to show that if there were a polynomial-time classical algorithm capable of solving this sampling problem, then the polynomial hierarchy would collapse, which is seen as extremely unlikely (on the same level as showing P = NP).
PS: Aaronson recently gave a three-part lecture at ETH as part of the annual Paul Bernays Lectures. Links to recordings and slides here: https://www.scottaaronson.com/blog/?p=4301. Part 3 is specifically about this topic and gives a good high-level overview of the current state.
Edit: link to more info on the Aaronson and Arkhipov result, including link to the original paper: https://gilkalai.wordpress.com/2010/11/17/aaronson-and-arkhi...