Thank you, QIP audience! On Tuesday, I gave a presentation on this paper:
Phys. Rev. A 95, 042306 (2017)
https://arxiv.org/abs/1612.02689
I had some great questions, but in retrospect I don’t think my answers were the best. Many questions focused on how to interpret results showing that random circuits improve on purely unitary circuits. I often get this question and so tried to pre-empt it in the middle of my talk, but clearly failed to convey my point. I am still getting this question every coffee break, so let me try again. Another interesting point is how the efficiency of an optimal compiler scales with the number of qubits (see Part 2). In what follows I have to credit Andrew Doherty, Robin Blume-Kohout, Scott Aaronson and Adam Bouland for their insights and questions. Thanks!
First, let’s recap. The setting is that we have some gate set $\mathcal{G}$ where each gate in the set has a cost. If the gate set is universal then for any target unitary $V$ and any $\epsilon > 0$ we can find some circuit $U$ built from gates in $\mathcal{G}$ such that the distance between the unitaries is less than $\epsilon$. For the distance measure we take the diamond norm distance because it has nice composition properties. Typically, compilers come with a promise that the cost of the circuit is upper bounded by some function $f(\epsilon) = a \log^c(1/\epsilon)$ for some constants $a$ and $c$ depending on the details (see Part 2 for details).
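To make the cost promise concrete, here is a tiny throwaway snippet (mine, not from the paper) that just evaluates the bound $f(\epsilon) = a \log^c(1/\epsilon)$ at a few target precisions; the constants $a$ and $c$ below are made up, since the real values depend on the gate set and the compiler.

```python
import numpy as np

# Illustrative only: a and c are hypothetical -- the true constants depend on
# the gate set and the compiler (see Part 2).  This just evaluates the promised
# gate-count bound f(eps) = a * log(1/eps)^c.
def cost_bound(eps, a=4.0, c=3.0):
    return a * np.log(1.0 / eps) ** c

for eps in [1e-2, 1e-4, 1e-8]:
    print(f"eps = {eps:.0e}  ->  gate-count bound ~ {cost_bound(eps):.0f}")
```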
The main result I presented was that we can find a probability distribution of circuits $\{ (p_i, U_i) \}$ such that the channel $\mathcal{E}(\rho) = \sum_i p_i U_i \rho U_i^\dagger$ is $\epsilon^2$ close to the target unitary $V$, even though the individual circuits have cost upper bounded by $f(\epsilon)$. So using random circuits gets you free quadratic error suppression!
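To see the quadratic suppression in a very small toy example of my own devising (not taken from the paper), take the target to be a single-qubit rotation $R_z(\theta)$ and imagine a "compiler" that returns the over- and under-rotations $R_z(\theta \pm \epsilon)$ with probability 1/2 each. The snippet below compares a single compiled circuit against the mixture, using the trace distance between Choi matrices only as a cheap stand-in for the diamond distance; the $\epsilon$ versus $\epsilon^2$ scaling is the point.

```python
import numpy as np

# Toy illustration: target V = Rz(theta); the "compiler" returns the over/under
# rotations Rz(theta +/- eps) with probability 1/2 each.  The trace distance
# between Choi matrices is a cheap proxy for the diamond distance.

def rz(angle):
    return np.diag([np.exp(-1j * angle / 2), np.exp(1j * angle / 2)])

def choi(kraus_ops, probs):
    """Choi state of the channel rho -> sum_i p_i K_i rho K_i^dagger."""
    d = kraus_ops[0].shape[0]
    J = np.zeros((d * d, d * d), dtype=complex)
    for K, p in zip(kraus_ops, probs):
        v = K.reshape(d * d, 1)          # proportional to (K tensor I)|Omega>
        J += p * (v @ v.conj().T)
    return J / d

def trace_dist(A, B):
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(A - B)))

theta = 0.7
V = rz(theta)
for eps in [1e-1, 1e-2, 1e-3]:
    U_plus, U_minus = rz(theta + eps), rz(theta - eps)
    err_single = trace_dist(choi([U_plus], [1.0]), choi([V], [1.0]))
    err_mixed = trace_dist(choi([U_plus, U_minus], [0.5, 0.5]), choi([V], [1.0]))
    print(f"eps = {eps:.0e}:  single circuit ~ {err_single:.2e},  mixed channel ~ {err_mixed:.2e}")
```

Shrinking $\epsilon$ by a factor of 10 shrinks the single-circuit error by a factor of 10, but the mixed-channel error by a factor of 100.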
But what the heck is going on here!? Surely, each individual run of the compiler gives a particular circuit and the experimentalist knows that this unitary has been performed. But this particular instance has an error no more than $\epsilon$, rather than $\epsilon^2$. Is it that each circuit is upper bounded by $\epsilon$ noise, but that somehow the typical or average circuit has $\epsilon^2$ noise? No! Because the theorem holds even when every unitary has exactly $\epsilon$ error. However, typicality does resolve the mystery, but only when we think about the quantum computation as a whole.
Each time we use a random compiler we get some circuit $U = e^{i \delta} V$, where $\delta$ is a coherent noise term with small norm $\| \delta \| \leq \epsilon$. However, these are just subcircuits of a larger computation. Therefore, we really want to implement some large computation $V_T \cdots V_2 V_1$.
For each subcircuit, compiling is reasonable (e.g. it acts nontrivially on only a few qubits), but the whole computation acts on too many qubits to optimally compile or even compute the matrix representation. Then using random compiling we implement some sequence $U_T \cdots U_2 U_1$ with some probability $p(U_1) p(U_2) \cdots p(U_T)$.
OK, now let’s see what happens with the coherent noise terms. For the $t$-th subcircuit we have $U_t = e^{i \delta_t} V_t$, so the whole computation we implement is

$$ U_T \cdots U_2 U_1 = e^{i \delta_T} V_T \cdots e^{i \delta_2} V_2 \, e^{i \delta_1} V_1 . $$

We can conjugate the noise terms through the circuits. For instance,

$$ e^{i \delta_2} V_2 \, e^{i \delta_1} V_1 = e^{i \delta_2} e^{i \delta_1'} V_2 V_1 , $$

where $\delta_1' = V_2 \delta_1 V_2^\dagger$. Since norms are unitarily invariant we still have $\| \delta_1' \| = \| \delta_1 \| \leq \epsilon$. Repeating this conjugation process we can collect all the coherent noise terms together:

$$ U_T \cdots U_2 U_1 = e^{i \delta_T'} \cdots e^{i \delta_2'} e^{i \delta_1'} V_T \cdots V_2 V_1 . $$

Using that the noise terms are small, we can use

$$ e^{i \delta_T'} \cdots e^{i \delta_2'} e^{i \delta_1'} \approx e^{i \Delta} , $$

where $\Delta = \delta_1' + \delta_2' + \cdots + \delta_T'$. Using the triangle inequality one has $\| \Delta \| \leq \sum_t \| \delta_t' \| \leq T \epsilon$.
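If you like to sanity-check algebra numerically, here is a rough sketch (my own toy code, with randomly generated subcircuits and noise terms, not anything from the paper) of exactly the manipulation above: it builds the noisy product $U_T \cdots U_1$, conjugates each $\delta_t$ through the later subcircuits to form $\Delta = \sum_t \delta_t'$, and checks both that $\| \Delta \| \leq T \epsilon$ and that $e^{i \Delta} V_T \cdots V_1$ reproduces the noisy product up to second-order corrections.

```python
import numpy as np

rng = np.random.default_rng(0)

def expi(H):
    """exp(iH) for Hermitian H, via the eigendecomposition."""
    w, S = np.linalg.eigh(H)
    return (S * np.exp(1j * w)) @ S.conj().T

def random_unitary(d):
    """A random d x d unitary (QR of a complex Gaussian matrix)."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    Q, R = np.linalg.qr(A)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

def random_hermitian(d, norm):
    """A random Hermitian 'coherent noise' term with the given spectral norm."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    H = (A + A.conj().T) / 2
    return norm * H / np.linalg.norm(H, 2)

d, T, eps = 2, 20, 1e-3
Vs = [random_unitary(d) for _ in range(T)]
deltas = [random_hermitian(d, eps) for _ in range(T)]

# Noisy product U_T ... U_1 with U_t = exp(i delta_t) V_t, and the ideal V_T ... V_1
noisy, ideal = np.eye(d), np.eye(d)
for V, delta in zip(Vs, deltas):
    noisy = expi(delta) @ V @ noisy
    ideal = V @ ideal

# Conjugate each delta_t through the later subcircuits and collect them into Delta
Delta = np.zeros((d, d), dtype=complex)
for t, delta in enumerate(deltas):
    later = np.eye(d)
    for V in Vs[t + 1:]:
        later = V @ later                      # product of the later subcircuits
    Delta += later @ delta @ later.conj().T    # delta_t' = (later) delta_t (later)^dag

print("||Delta||              :", np.linalg.norm(Delta, 2))
print("T * eps bound          :", T * eps)
print("approx error (2nd order):", np.linalg.norm(noisy - expi(Delta) @ ideal, 2))
```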
But this noise term could be much, much smaller than this bound implies. Indeed, one would only get close to equality when the noise terms coherently add up. In some sense, our circuits must conspire to align their coherent noise terms to all point in the same direction. Conversely, one might find that the coherent noise terms cancel out, and one could possibly even have that $\Delta = 0$. This would be the ideal situation. But we are talking about a large unitary, too large to compute; otherwise we would have simulated the whole quantum computation. For a fixed $\Delta$, we can’t say much more. But if we remember that $\Delta$ comes from a random ensemble, we can make probabilistic arguments about its size. A key point in the paper is that we choose the probabilities such that the expectation of each random term is zero: $\mathbb{E}[\delta_t] = 0$, and since conjugation is linear this carries over to $\mathbb{E}[\delta_t'] = 0$.
Furthermore, we are summing a series of such terms (sampled independently). A sum of independent random variables is going to converge (via a central limit theorem) to some Gaussian distribution that is centred around the mean (which is zero). Of course, there will be some variance about the mean, but the typical size will be $O(\sqrt{T} \epsilon)$ rather than the $T \epsilon$ bound above that limits the tails of the distribution. This gives us a rough intuition that $\Delta$ will (with high probability) have quadratically smaller size. Indeed, this is how Hastings frames the problem in his related paper arXiv:1612.01011. Based on this intuition one could imagine trying to directly upper bound $\| \Delta \|$ with high probability and make the above discussion rigorous. Indeed, this might be an interesting exercise to work through. However, both Hastings and I instead tackled the problem by showing bounds on the diamond distance of the channel, which implicitly entails that the coherent errors are composing (with high probability) in an incoherent way!
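Here is a quick Monte Carlo of that intuition (again a toy model of my own, not the paper’s proof): every subcircuit contributes a noise term of norm $\epsilon$ along the same fixed direction, which is the worst case for a deterministic compiler, but the random compiler flips its sign with probability 1/2 so each term has zero mean. The typical $\| \Delta \|$ then sits near $\sqrt{T} \epsilon$ rather than the triangle-inequality bound $T \epsilon$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: every subcircuit's coherent noise has spectral norm eps along the
# SAME fixed direction H (worst case for a deterministic compiler).  The random
# compiler flips the sign of each term with probability 1/2, so each term has
# zero mean and the sum mostly cancels.

eps, samples = 1e-3, 1000
H = np.array([[1.0, 0.0], [0.0, -1.0]])      # fixed noise direction (Pauli Z), ||H|| = 1

for T in [10, 100, 1000]:
    coherent = np.linalg.norm(T * eps * H, 2)              # all terms aligned: T * eps
    draws = rng.choice([-1, 1], size=(samples, T)).sum(axis=1)
    randomized = np.mean(np.abs(draws)) * eps              # mean of || sum_t s_t * eps * H ||
    print(f"T = {T:5d}:  aligned ||Delta|| = {coherent:.2e},  "
          f"random-sign mean ||Delta|| ~ {randomized:.2e},  sqrt(T)*eps = {np.sqrt(T) * eps:.2e}")
```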
More in Part 2.