Here I will follow on from **my previous post**. Circuit compilers usually come with a promise that the cost is upper bounded by some function

$$N = O(\log^c(1/\epsilon)),$$

where $N$ is the gate count, $\epsilon$ is the approximation error, and the constants depend on the details. In my talk, I asserted that: (1) for single-qubit gate sets, the Solovay-Kitaev algorithm achieves $c \approx 3.97$; and (2) modern tools can efficiently solve the single-qubit problem using Clifford+T gates and achieve $c = 1$.
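To get a feel for the difference between these two exponents, here is a minimal sketch comparing the two gate-count estimates. The prefactors (set to 1 here) are illustrative placeholders, not the real constants, and the exponent 3.97 is the Dawson-Nielsen value for Solovay-Kitaev:

```python
import math

def sk_gate_count(eps, c=3.97, prefactor=1.0):
    """Gate-count estimate O(log^c(1/eps)) for Solovay-Kitaev.

    The exponent c ~ 3.97 is the Dawson-Nielsen value; the prefactor
    is a placeholder chosen purely for illustration.
    """
    return prefactor * math.log(1.0 / eps) ** c

def clifford_t_gate_count(eps, prefactor=1.0):
    """Gate-count estimate O(log(1/eps)), i.e. c = 1, for modern
    single-qubit Clifford+T synthesis. Prefactor again illustrative."""
    return prefactor * math.log(1.0 / eps)

# The gap between c = 3.97 and c = 1 widens rapidly as eps shrinks.
for eps in (1e-2, 1e-4, 1e-8):
    print(f"eps={eps:.0e}  SK ~ {sk_gate_count(eps):12.0f}"
          f"   Clifford+T ~ {clifford_t_gate_count(eps):6.1f}")
```

Even at modest precisions the $c \approx 3.97$ curve is orders of magnitude above the $c = 1$ line, which is why the exponent matters so much in practice.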

I further speculated that for multiqubit compiling problems, the optimal scaling (and hence the value of $c$) could be much worse! But a few results have been pointed out to me. Firstly, the **famous Dawson-Nielsen paper** actually shows that $c \approx 3.97$ is achieved by Solovay-Kitaev for all compiling problems, and so $c$ is independent of the number of qubits. Secondly, this **neat paper by Harrow-Recht-Chuang** showed that an optimal compiler will always achieve $c = 1$ scaling, again independent of the number of qubits.

However, the Harrow-Recht-Chuang result is non-constructive, so it doesn't give us a compiler; it just says an optimal one exists. Nor does it say anything about how the classical runtime scales with the number of qubits. Therefore, if we restrict to compilers with polynomial runtime (polynomial in the number of qubits), all we can say is that the optimal scaling sits somewhere in the interval $1 \leq c \leq 3.97$. Finding where the optimal $c$ rests in this interval (and writing the software for a few-qubit compiler) is clearly one of the most important open problems in the field.

The above discussion seems to entail that multiqubit compiling isn't very different from single-qubit compiling in terms of overhead scaling. However, there is also a prefactor in front of $\log^c(1/\epsilon)$, which is constant with respect to $\epsilon$ but could increase with the number of qubits. Indeed, we know from counting arguments that there are classical circuits that need an exponential number of gates, which tells us that the prefactor for a quantum compiler should also scale exponentially with qubit number.
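The counting argument behind that last claim can be sketched numerically. A circuit of $g$ two-bit gates on $n$ wires, drawn from a gate set of size $k$, can be described in roughly $g \log_2(k n^2)$ bits, but there are $2^{2^n}$ Boolean functions on $n$ bits, so some function needs $g \gtrsim 2^n / \log_2(k n^2)$ gates. The gate-set size below is an arbitrary illustrative choice:

```python
import math

def min_gates_lower_bound(n, gate_set_size=3):
    """Counting-argument lower bound (a sketch).

    A circuit with g two-bit gates on n wires has at most
    (gate_set_size * n^2)^g distinct descriptions, while there are
    2^(2^n) Boolean functions on n bits. Hence some function requires
    g >= 2^n / log2(gate_set_size * n^2) gates.
    """
    return 2 ** n / math.log2(gate_set_size * n ** 2)

# The bound grows exponentially in n, dwarfing any log(1/eps) factor.
for n in (4, 8, 16):
    print(f"n={n:2d}  gates needed >= {min_gates_lower_bound(n):.2e}")
```

Since the bound is exponential in $n$ while independent of $\epsilon$, it is naturally absorbed into the prefactor of the $\log^c(1/\epsilon)$ cost rather than the exponent.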