The quantum ENIAC and Schrödinger’s glass

Language is wonderfully subtle: two sentences can mean the same thing yet leave the reader with completely different impressions. Everyone can appreciate that “The glass is half empty” and “The glass is half full” describe exactly the same glass, but they colour the reader’s mood in opposite ways.

Sabine Hossenfelder’s recent Guardian column on quantum computation is titled

Quantum supremacy is coming. It won’t change the world

and it muses on the impending announcement of quantum supremacy. Soon, someone (possibly Google) is going to announce that they have used a quantum computer to perform a particular calculation faster than our fastest conventional supercomputers! Sabine’s spin is that Schrödinger’s glass is very much half empty. Even worse, it is half empty and what remains is gone-off milk.

Should you believe her?

We are in the midst of a surge of optimism, investment and frequent hype around quantum computation. Sabine’s column is a refreshing change from the worst of the hyperbole currently circulating. It has the same acerbic tone she takes in her critiques of particle physics. It is fun to read, and I’ve enjoyed many of her blog articles on the disconnect between experiment and theory in the high-energy physics community. Of course, a balanced perspective sits somewhere between the extremes of Sabine’s pessimism and those overpromising on quantum computation.

Let’s start with the headline. If we tweak it slightly to

Quantum supremacy is coming. It won’t change the world tomorrow

then she is absolutely right. The day after, you won’t notice any difference.

But she insinuates that quantum computers will never change the world, which is unwarranted and absurd given the parallels she draws with early conventional computers. She compares the coming generation of quantum computers with ENIAC, an early computer that was publicly announced in 1946. It did take about 35 years from ENIAC until people could have their own personal computer (PC) at home, and 60 years until most people started carrying smartphones in their pockets! This is arguably the most rapid and significant change in human history, so the comparison is a bit bizarre.

It is difficult to see why Sabine makes the comparison with ENIAC; maybe the point is that many people are promising useful quantum computers in less than 35 years. But long before you got your first PC or smartphone, computers were having a huge (though often invisible) effect on our lives. Sabine suggests that ENIAC had no impact at all. However, before the public announcement of ENIAC’s existence, it was used to compute artillery firing tables and to run early calculations for the hydrogen bomb. Around this time, simple computing devices also played a crucial role in the growing telephone network. If 2019 was the year of the quantum-ENIAC and history repeats itself, then this would be a monumental event. As many others have pointed out in the comments section of her article, the analogy with ENIAC is both flawed and fails to support her headline.

Let’s set aside historical analogies. What else does Sabine say?

Quantum supremacy will be a remarkable achievement for science. But it won’t change the world any time soon. The fact is that algorithms that can run on today’s quantum computers aren’t much use. One of the main algorithms, for example, makes the quantum computer churn out random numbers. That’s great for demonstrating quantum supremacy, but for anything else? Forget it.

Here Sabine is referring to a specific task that many companies plan to use as their first test of how a quantum computer performs against a conventional one. She makes a strong point: the so-called cross-entropy benchmarking tests are just as dull and technical as the name sounds.
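
To give a flavour of what the test involves: you run a random circuit on the device, sample bitstrings, and check whether the samples favour the outputs that an ideal classical emulation says should be likely. Below is a minimal sketch of the “linear” variant of the score. It is my own toy illustration, with made-up distributions and numbers, not anyone’s production benchmark.

```python
import numpy as np

def linear_xeb(ideal_probs, samples):
    """Linear cross-entropy benchmark score.

    ideal_probs: ideal output probabilities of the circuit, length 2**n
                 (computed on a classical emulator).
    samples: bitstrings sampled from the device, encoded as integers.

    Scores ~1 for a perfect device and ~0 for one that outputs
    uniformly random noise.
    """
    return len(ideal_probs) * np.mean(ideal_probs[samples]) - 1.0

rng = np.random.default_rng(seed=1)
n_states = 2**10  # stand-in for a 10-qubit circuit

# Random circuits produce (approximately) exponentially distributed
# output probabilities -- the "Porter-Thomas" distribution.
probs = rng.exponential(size=n_states)
probs /= probs.sum()

perfect_device = rng.choice(n_states, size=100_000, p=probs)
broken_device = rng.integers(n_states, size=100_000)  # pure noise

print(linear_xeb(probs, perfect_device))  # close to 1
print(linear_xeb(probs, broken_device))   # close to 0
```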

To me, this is a benchmarking test. When a quantum device passes the test, it tells you that the technology has reached a new level of maturity. Sabine is also right that cross-entropy benchmarking does not have any practical applications, and no-one would pay a billion pounds for a device that could only perform this task. However,…

What the cross-entropy benchmarking test really shows is that the technology is sufficiently mature that it is worth running other programmes on the hardware. Much more exciting is to use a quantum computer to solve problems in chemistry and materials science. Computing power is a definite bottleneck in the design of drugs and materials, so we’d like to try and solve these problems on a quantum computer. In the near future (soon after passing the cross-entropy benchmarking tests) we can start our first attempts using a heuristic algorithm called VQE (the variational quantum eigensolver). Possibly, the 50-100 qubit devices coming soon could provide commercially useful solutions in this sector.
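
To make VQE concrete, here is a deliberately tiny sketch of the idea, with a made-up one-qubit Hamiltonian standing in for a real molecular one and a classical emulator standing in for the quantum device. The real thing uses many qubits and estimates each energy from repeated measurements, but the loop structure is the same: prepare a parameterised trial state, measure its energy, and let a classical optimiser update the parameters.

```python
import numpy as np
from scipy.optimize import minimize_scalar

X = np.array([[0.0, 1.0], [1.0, 0.0]])   # Pauli X
Z = np.array([[1.0, 0.0], [0.0, -1.0]])  # Pauli Z

# Toy one-qubit "molecule" (coefficients made up for illustration).
H = 0.5 * Z + 0.3 * X

def ansatz(theta):
    """Trial state |psi(theta)> = Ry(theta)|0>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    """<psi|H|psi>. On hardware this number would be estimated from
    many repeated measurements; here we emulate it exactly."""
    psi = ansatz(theta)
    return psi @ H @ psi

# The classical optimiser proposes parameters, the (emulated) quantum
# device reports energies, and the loop walks downhill.
result = minimize_scalar(energy, bounds=(0.0, 2 * np.pi), method="bounded")
print("VQE estimate:", result.fun)
print("Exact ground energy:", np.linalg.eigvalsh(H)[0])
```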

Why am I hedging with words like “possibly”, and what does “heuristic algorithm” mean? I want to give a balanced perspective, so let me explain. First, let’s check the Wikipedia page for heuristic:

The objective of a heuristic is to produce a solution in a reasonable time frame that is good enough for solving the problem at hand. This solution may not be the best of all the solutions to this problem, or it may simply approximate the exact solution. But it is still valuable because finding it does not require a prohibitively long time.

This doesn’t sound too bad. Indeed, the vast majority of conventional computing software is built on a mountain of heuristics, so they are pretty useful. However, it is often hard to predict the performance of a heuristic until you’ve run it on an actual device. Fortunately, we can design heuristics and test them for 20-30 qubits on our laptops using an emulation of a quantum computer. For these small sizes, your laptop can predict exactly what the quantum computer would do when you run the heuristic, which gives you a decent idea of what will happen when you run it on a 50 qubit quantum computer.
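
The reason 20-30 qubits is laptop territory while ~50 is not comes down to memory: a brute-force emulation stores one complex amplitude per basis state, and there are 2^n of them. A quick back-of-the-envelope check:

```python
def statevector_gib(n_qubits, bytes_per_amplitude=16):
    """Memory for a full statevector: 2**n complex128 amplitudes."""
    return 2**n_qubits * bytes_per_amplitude / 2**30  # in GiB

for n in (20, 30, 40, 50):
    print(f"{n} qubits: {statevector_gib(n):,.2f} GiB")
# 20 qubits: 0.02 GiB          -- trivial
# 30 qubits: 16.00 GiB         -- a well-equipped laptop
# 40 qubits: 16,384.00 GiB     -- a large cluster
# 50 qubits: 16,777,216.00 GiB -- beyond any machine on Earth
```

This exponential wall is the reason a roughly 50 qubit device is where brute-force emulation stops being an option.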

This design-and-test process is happening now, for instance at RIVERLANE, the company I work for part-time. But there is some uncertainty in the extrapolation from 20-30 qubits to 50-100. This all tells us that there is a chance to solve some exciting chemistry problems in the near term, but the technology might need to be more mature before we see any big payoffs.

Why use heuristics? In the near future, quantum computers won’t be perfect, and as Sabine notes, “The trouble is that quantum systems are exceptionally fragile”. The longer the computation, the more noise accumulates. Consequently, we are currently limited to short, heuristic computations.
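
A crude model shows why computations must stay short. Suppose every gate works with probability p; then a circuit of g gates runs correctly with probability roughly p^g. The numbers below are hypothetical, but the exponential decay is the point:

```python
# Hypothetical per-gate fidelity; real devices vary by gate type and qubit.
p = 0.999  # i.e. a 0.1% chance of error per gate

for g in (100, 1_000, 10_000, 100_000):
    print(f"{g:>7} gates: success probability ~ {p**g:.4f}")
# Roughly 0.90 at 100 gates, 0.37 at 1,000, and effectively zero
# by 10,000 -- hence short heuristic circuits, or error correction.
```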

There are many quantum algorithms where we can give a mathematical proof of how the quantum computer will perform! This sounds better than using a heuristic. But these algorithms are much bigger computations than the heuristics I mentioned above. In these larger-scale algorithms, the noise accumulation problem can be avoided by continually correcting errors. However, this quantum error correction procedure uses up some of our qubits, further adding to the required scale of the device.

Okay, so how many good-quality qubits would I need before I could say with 100% certainty that the quantum computer will change the world? Here Sabine says:

To compute something useful – say, the chemical properties of a novel substance – would take a few million qubits. But it’s not easy to scale up from 50 qubits to several million.

A few million qubits is indeed what is estimated in some of the recent literature (see for example). There are two issues with this quote. First, it ignores the heuristic methods I mentioned earlier. Second, it suggests that it would be impossible to solve this problem with fewer than a million qubits! The number of qubits needed will depend on the quality of the hardware and the design of the algorithms and error correction methods. Come up with better designs of hardware or software and the numbers go down and the technology progresses. Only a few years ago, our best estimates for solving the same chemistry problem would have been closer to a billion qubits!
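
To see where “a few million” comes from, and why it isn’t a law of nature, here is a back-of-the-envelope version of a standard surface-code estimate. The scaling formula and constants below are rough rules of thumb from the error correction literature (a threshold near 1%, with the logical error rate falling off as a power of the physical error rate), so treat the outputs as illustrative orders of magnitude only:

```python
def surface_code_overhead(p_phys, p_logical_target, p_threshold=1e-2):
    """Rough surface-code estimate: the logical error rate falls off as
    ~0.1 * (p_phys/p_threshold)**((d+1)/2) for a distance-d code, and a
    distance-d patch occupies roughly 2*d**2 physical qubits per logical
    qubit. Coarse rules of thumb, not precise resource counts."""
    d = 3
    while 0.1 * (p_phys / p_threshold) ** ((d + 1) / 2) > p_logical_target:
        d += 2  # surface-code distances are odd
    return d, 2 * d * d

for p in (1e-3, 1e-4):  # hypothetical physical error rates
    d, q = surface_code_overhead(p, p_logical_target=1e-15)
    print(f"p_phys={p:.0e}: distance {d}, ~{q} physical qubits per logical qubit")
```

With a physical error rate of 10^-3 you land well over a thousand physical qubits per logical qubit, so a computation needing a few thousand logical qubits really does reach millions of physical ones. Improve the hardware tenfold and the overhead drops sharply, which is exactly the point: the “few million” figure is a moving target, not a fixed toll.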

In the one-million-qubit blueprint, many of the qubits are busy removing noise. This is what would allow the quantum computer to run long enough to compute reliably. Sabine does not seem aware of this point, as she says:

Even when housed in far-flung shelters and accessed through the cloud, it is not clear that a million qubit machine would ever remain stable for long enough to function.

She is missing the point that a million qubits are budgeted precisely to guarantee that the machine will remain stable for long enough. On the other hand, for the smaller 50-100 qubit computers, the stability issue is more nuanced, because everything is heuristic and runs without error correction.

The technological and algorithmic obstacles are tough. But as we work on these problems, we see two things happen: hardware developers release better devices with more qubits and less noise, and theorists come up with better designs that need fewer qubits. This is happening now; we are not stuck on any insurmountable problems.

Towards the conclusion of her column, Sabine ponders:

Researchers need to move on. They must take a good, hard look at the quantum computers they are building and ask how will they make them useful.

Again, Sabine states something factually correct, but in a glass-half-empty tone. However, I am happy to report that this is exactly what we are doing. I have just got home from QEC2019, a one-week conference in London (more info here), where we focused on how to make the quantum error correction components more efficient. The conference had theorists reporting new techniques and ideas for how to perform error correction, and experimentalists describing the latest hardware progress. Most people at QEC are pretty realistic and can see that the glass definitely contains 50% water / 50% air, and the tap is still running.
