Are quantum computers more powerful than classical computers? The stabiliser rank story (part 1)

Of course, quantum computers are more powerful than classical computers, but how do you prove it? Until recently, not much was known beyond some oracle separation results. In 2017, we got the now famous Bravyi-Gosset-Koenig result, which proved that constant-depth quantum circuits can solve a problem that classical circuits need logarithmic depth to solve. But we really want to know that quantum computers offer an exponential speedup, as Shor’s algorithm suggests.

Asking for a proof of an exponential speedup is a lot to ask! We would need to show that every possible classical algorithm is incapable of efficiently simulating a quantum computer. This is really hard. But if we have a concrete family of classical algorithms, then surely we should try to understand whether it could possibly offer an efficient simulation!

This brings us to stabiliser rank and related simulation methods. Let’s define some terms. Given a state \psi , we define the stabiliser rank \chi(\psi) to be the smallest number of terms needed to write \psi as a linear combination of stabiliser states. More formally:

Definition of stabiliser rank:
Suppose \psi is a pure n -qubit state. The exact stabiliser rank \chi(\psi) is the smallest integer k such that \psi can be written as

\vert  \psi \rangle  = \sum_{\alpha=1}^{k} c_\alpha \vert \phi_\alpha \rangle

for some n -qubit stabiliser states \phi_\alpha and some complex coefficients c_\alpha .
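For a single qubit the picture is simple: there are only six stabiliser states, and any two linearly independent ones span the whole space, so every single-qubit non-stabiliser state has \chi = 2 . Here is a quick numerical sanity check of that fact (a sketch using numpy; the T state and the six stabiliser states are hard-coded by hand):

```python
import numpy as np

# The six single-qubit stabiliser states: |0>, |1>, |+>, |->, |+i>, |-i>.
s = 1 / np.sqrt(2)
stab_states = [
    np.array([1, 0]), np.array([0, 1]),
    np.array([s, s]), np.array([s, -s]),
    np.array([s, 1j * s]), np.array([s, -1j * s]),
]

# A non-stabiliser state: the T state (|0> + e^{i pi/4} |1>) / sqrt(2).
t_state = np.array([s, s * np.exp(1j * np.pi / 4)])

# chi = 1 would require |psi> to be proportional to some stabiliser state,
# i.e. to have overlap of modulus 1 with it.
rank_one = any(
    np.isclose(abs(np.vdot(phi, t_state)), 1.0) for phi in stab_states
)
print("T state has stabiliser rank 1:", rank_one)  # False

# But |0> and |1> span C^2, so chi = 2: solve c0|0> + c1|1> = |T>.
c = np.linalg.solve(np.column_stack([stab_states[0], stab_states[1]]), t_state)
recon = c[0] * stab_states[0] + c[1] * stab_states[1]
print("chi = 2 decomposition works:", np.allclose(recon, t_state))  # True
```

The interesting question, of course, is how \chi grows with the number of copies, where no such brute-force shortcut exists.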

These decompositions were first studied by Garcia-Markov-Cross, and their applications to simulation were explored in more depth by Bravyi-Smith-Smolin. The Gottesman-Knill theorem allows us to efficiently simulate a Clifford circuit acting on a stabiliser state. This extends to simulation of an arbitrary state \Psi undergoing a Clifford circuit, with runtime proportional to \chi(\Psi) (assuming we know the decomposition). We also know that any universal quantum computation can be gadgetized into a Clifford circuit acting on a supply of single-qubit magic states \Psi = \psi^{\otimes t} , where t scales polynomially with the number of gates in the original quantum computation.

So if quantum computers are more powerful than classical computers, then we expect \chi( \psi^{\otimes t}) to scale exponentially with t . If \chi( \psi^{\otimes t}) instead scales polynomially with t , then BQP=BPP (quantum computers can be efficiently simulated classically). But the classical simulation method also allows for postselection, so polynomial scaling would actually entail postBQP=BPP, which entails NP=P.

Surely, this is a simple statement that the quantum community should be able to prove. Surely, stabiliser rank simulators are not going to show BQP=BPP. Surely, surely we can prove the following conjecture:

The stabiliser rank conjecture:
For every nonstabiliser state \psi , there exists an \alpha > 0 such that

e^{\alpha t} \leq \chi(  \psi^{\otimes t} )

holds for all positive integers t .

Clearly, one has \chi( \psi^{\otimes t}) \leq \chi( \psi)^t , but the problem is that \chi( \psi^{\otimes t}) can be far smaller than \chi( \psi)^t , and we have almost no lower bounds on the stabiliser rank. Consider a single-qubit non-stabiliser state

\vert \psi \rangle = \alpha \vert 0 \rangle + \beta \vert 1 \rangle

Any such state has \chi(\psi) = 2 , so naively we might expect two copies to need 2^2 = 4 terms. But when we consider two copies of the state, we have

\vert \psi^{\otimes 2} \rangle = \alpha^2 \vert 0 0 \rangle + \alpha \beta \sqrt{2} \vert \Psi^+ \rangle + \beta^2 \vert 1 1 \rangle


where \vert \Psi^+ \rangle = ( \vert 0 1 \rangle + \vert 1 0 \rangle ) / \sqrt{2} .

This is a decomposition with 3 stabiliser terms and so

\chi( \psi^{\otimes 2} ) \leq 3

for all single-qubit \psi . OK, what about more copies? Amongst the many results in our recent Bravyi-Browne-Calpin-Campbell-Gosset-Howard paper (herein BBCCGH) is a proof that

\chi( \psi^{\otimes t} ) \leq t+1 for t \leq 5

(see Sec V.A of arXiv_V1). The stabiliser rank grows linearly! Not exponentially, but linearly! OK, this scaling only holds up to five copies, but I find this remarkable. Incidentally, the proof has a nice connection to the fact that the Clifford group forms a 3-design. Beyond five copies the numerics look exponential, but the numerics are heuristic, inconclusive and only go slightly further than five copies!
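The three-term decomposition of two copies given above is easy to check numerically. Here is a minimal numpy sketch (the values of \alpha and \beta are arbitrary illustrative choices):

```python
import numpy as np

# Pick an arbitrary single-qubit state  |psi> = alpha|0> + beta|1>.
alpha, beta = 0.6, 0.8j
psi = np.array([alpha, beta])

# Direct two-copy state |psi>^{otimes 2}.
psi2 = np.kron(psi, psi)

# Three-term stabiliser decomposition from the post:
# alpha^2 |00> + sqrt(2) alpha beta |Psi^+> + beta^2 |11>.
ket00 = np.array([1, 0, 0, 0])
ket11 = np.array([0, 0, 0, 1])
bell = np.array([0, 1, 1, 0]) / np.sqrt(2)  # |Psi^+>

decomp = alpha**2 * ket00 + np.sqrt(2) * alpha * beta * bell + beta**2 * ket11
print(np.allclose(psi2, decomp))  # True
```

The cross terms \alpha\beta \vert 01 \rangle + \alpha\beta \vert 10 \rangle collapse into the single stabiliser state \vert \Psi^+ \rangle , which is exactly where the saving over the naive four-term expansion comes from.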

Do we have any lower bounds on stabiliser rank? Deep in Bravyi-Smith-Smolin (Appendix C) is tucked away a neat result showing that for the \vert T \rangle state (an especially important single-qubit state) the stabiliser rank must scale at least as fast as \Omega( \sqrt{t} ) for t copies. We are hoping for an exponential lower bound, and yet the best we have is a square root. Not even a linear lower bound! I find this shocking. And I assure you, it is not for lack of many smart people spending many, many hours trying to get a better lower bound. In Sec V.D of BBCCGH, we (I say “we” but this part was all Sergey) show an exponential lower bound, but with an additional (and quite strong) constraint on the allowed stabiliser states in the decomposition. The proof technique is really interesting and hints at routes forward to solving the full problem.

There are also a whole bunch of cool open problems revolving around the approximate stabiliser rank (introduced by Bravyi-Gosset) and which BBCCGH show is connected to a quantity called “the extent” (see Def 3 of BBCCGH). I want to say more about these open problems but this post has already gone on too long. More to follow in part 2!

Joint PhD position between Sheffield and Oxford

Circuit compilers for near-term quantum computers

Summary: The number of elementary gates in a quantum computation determines the runtime of the quantum computer. It is clearly advantageous to have faster computations that use fewer gates and “circuit compilation” is the art of optimizing and automating this process. For near-term quantum computers (without error correction) effective compilation is especially important because these devices will be noisy and this imposes a practical limit on the number of gates before an error becomes inevitable. Therefore, compilation protocols and software are crucial to whether we will be able to demonstrate a quantum advantage before full-blown error-corrected devices are available. This PhD project will develop compilation methods exploring random compilers and simulation of fermionic and bosonic interacting systems. This is a joint project between the University of Sheffield and Oxford University.

The studentship is fully funded for UK students, with funding provided by NQIT (Networked Quantum Information Technologies), the UK national hub for quantum computing. As such, the student will have the opportunity to collaborate and contribute towards the UK’s largest quantum computing effort.

The project is a joint collaboration between the groups of Simon Benjamin in Oxford, and Earl Campbell in Sheffield. The project and student could be based at either university, according to the preference of the successful candidate. The studentship will be held at the chosen university and the student will be registered for a doctoral degree of the chosen university.

We are looking for an enthusiastic student with a physics, mathematics or computer science degree, at a minimum a UK upper second class honours degree (2:1). The student should also have some of the following desirable skills: a good undergraduate-level understanding of quantum mechanics, a strong mathematical background and/or experience programming and running numerical simulations (e.g. in C).

Informal inquiries can be made to: or
Formal applications can be made either to Sheffield via
or to Oxford University

QEC ’19 call for submissions.

Hi folks,

Here is a message from Dan about QEC ’19. It is coming to the UK!!! Very excited and pleased to be helping out with chairing the programme.


Dear colleagues,

I’m pleased to announce that the 5th International Conference on Quantum Error Correction (QEC’19) is now open for the submission of contributed talks and posters.

QEC ’19 will be taking place at Senate House, London between 29th July and 2nd August 2019. It aims to bring together experts from academia and industry to discuss theoretical, experimental and technological research towards robust quantum computation. Topics will include control, error correction and fault tolerance, and their interface with physics, computer science and technology research.

Please see our web-page for further information about the conference, including the list of invited speakers, and information on how to submit your one-page abstract:

The deadline for submissions is April 1st.

The scope of QEC is broad. Here is a non-exhaustive list of topics covered by the conference. Theoretical and experimental research in: quantum error correcting codes; error correction and decoders; architecture and design of fault-tolerant quantum computers; error mitigation in NISQ computations; error mitigation in quantum annealing; reliable quantum control and dynamical decoupling; error quantification, tomography and benchmarking; quantum repeaters; active or self-correcting quantum memory; applications of quantum error correction in physics, CS and other disciplines; classical error correction relevant to the QEC community; compilers for fault-tolerant gate sets; error correction and mitigation in other quantum technologies (sensing, cryptography, etc.); QEC-inspired quantum computing theory.

If you have a query about whether your contribution might be in scope, please contact me directly.

Key dates

Abstract submission deadline: 1 April 2019
Registration opens: TBC (late February)
Early registration deadline: 10 June 2019
Registration deadline: 15 July 2019

A conference poster is available at the following link:

I would be grateful if you would print this poster and display it in a place where interested participants may see it.

I look forward to your submission to QEC’19 and to seeing you in London in the summer.


Dan Browne – Conference Chair

The folly of fidelity and how I learned to love randomness

Suppose a quantum device should ideally prepare a state \psi , but instead prepares some other approximate state \rho . How should the error be quantified?

For many people, the gold standard is to use the fidelity. This is defined as

F =   \langle \psi \vert \rho \vert \psi \rangle

One of the main goals of this blog post is to explain why the fidelity is a terrible, terrible error metric. You should be using the “trace norm distance” instead. I will not appeal to mathematical beauty (though the trace norm wins on those grounds too), but simply to what is experimentally and observationally meaningful. What do I mean by this? We never observe quantum states. We observe measurement results. Ultimately, we care that the observed statistics of experiments are a good approximation of the exact, target statistics. In particular, every measurement outcome is governed by a projector \Pi , and if we have two quantum states \psi and \rho then the difference in outcome probabilities is

\epsilon_\Pi = | \mathrm{Tr} [ \Pi \rho ] - \langle \psi \vert \Pi \vert \psi \rangle |

Error in outcome probabilities is the only physically meaningful kind of error. Anything else is an abstraction. The above error depends on the measurement we perform, so to get a single number we need to consider either the average or maximum over all possible projectors.

The second goal of this post is to convince you that randomness is really, super useful! This links into the fidelity vs trace norm story. Several of my papers use randomness in a subtle way to outperform deterministic protocols (see here, here and most recently here). I’ve also written another blog post on the subject (here). Nevertheless, I still find many people are wary of randomness. Consider a protocol for approximating \psi that is probabilistic and prepares states \phi_k with probability p_k . What we will see is that the error (when correctly quantified) of a probabilistic protocol can be considerably less than that of any of the pure states \phi_k in the ensemble. Strange but true. Remember, I only care about measurement probabilities. And you should only care about measurement probabilities! Given a random protocol, the probability of a certain outcome will be
\sum_k  p_k \langle \phi_k  \vert \Pi \vert \phi_k  \rangle = \mathrm{Tr}[ \Pi \rho ]
where \rho = \sum_k  p_k \vert \phi_k  \rangle \langle \phi_k  \vert ,
so our measure of error must depend on the averaged density matrix. Even though we might know after the event which pure state we prepared, the measurement statistics are entirely governed by the density matrix that averages over the ensemble of pure states.

When I toss a coin, I know after the event with 100% certainty whether it landed heads or tails, but this does not prevent it from being an excellent 50/50 (pseudo) random number generator. Say I want to build a random number generator with a 25/75 split of outcomes. I could toss a coin once: if I get heads then I output heads; otherwise I toss a second time and output the second toss result. This clearly does the job. We do not hear people object, “ah ha, but after the first toss it is now more likely to give tails, so your algorithm is broken”. Similarly, quantum computers are random number generators and are allowed a “first coin toss” that determines what they do next.
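The identity above, that ensemble-averaged outcome probabilities equal \mathrm{Tr}[\Pi \rho] , is easy to verify numerically. A minimal sketch with numpy, using a randomly chosen two-state ensemble and a randomly chosen projector:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random two-state ensemble: prepare |phi_k> with probability p_k.
p = [0.25, 0.75]
phis = []
for _ in range(2):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    phis.append(v / np.linalg.norm(v))

# A random rank-1 projector Pi = |v><v|.
v = rng.normal(size=2) + 1j * rng.normal(size=2)
v /= np.linalg.norm(v)
Pi = np.outer(v, v.conj())

# Outcome probability computed two ways: averaging over the ensemble,
# and directly from the averaged density matrix rho.
ensemble_prob = sum(pk * np.vdot(phi, Pi @ phi).real for pk, phi in zip(p, phis))
rho = sum(pk * np.outer(phi, phi.conj()) for pk, phi in zip(p, phis))
density_prob = np.trace(Pi @ rho).real
print(np.isclose(ensemble_prob, density_prob))  # True
```

No measurement can tell the ensemble apart from its density matrix, which is exactly why the error metric must be a function of \rho alone.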

Let’s think about the simplest possible example: a single qubit. Imagine the ideal, or target, state is \vert \psi \rangle = \vert 0 \rangle . We consider states of the form \vert \theta \rangle = \cos( \theta / 2 )  \vert 0 \rangle + \sin( \theta / 2 )  \vert 1 \rangle . The states \vert \theta \rangle and \vert -\theta \rangle both have the same fidelity with respect to the target state. So too does the mixed state
\rho(\theta) = \frac{1}{2} ( \vert \theta \rangle \langle \theta \vert + \vert -\theta \rangle \langle -\theta \vert  ) .
Performing a measurement in the X-Z plane, we get the following measurement statistics.
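These statistics are straightforward to compute for yourself. Here is a minimal numpy sketch (no plotting; it reports the worst-case error over all X-Z measurement angles, with \theta = 0.1 as an assumed illustrative value):

```python
import numpy as np

theta = 0.1  # small rotation away from the target |0> (illustrative choice)

def ket(t):
    return np.array([np.cos(t / 2), np.sin(t / 2)])

target = ket(0.0)
rho_mix = 0.5 * (np.outer(ket(theta), ket(theta)) + np.outer(ket(-theta), ket(-theta)))

# Projector for a measurement axis at angle varphi in the X-Z plane.
def proj(phi):
    X = np.array([[0, 1], [1, 0]])
    Z = np.array([[1, 0], [0, -1]])
    return 0.5 * (np.eye(2) + np.cos(phi) * Z + np.sin(phi) * X)

phis = np.linspace(0, 2 * np.pi, 1000)
ideal = [target @ proj(phi) @ target for phi in phis]
# Worst-case error for the pure state |theta> (|-theta> is symmetric)
# and for the mixed state rho(theta).
err_pure = max(abs(ket(theta) @ proj(phi) @ ket(theta) - p) for phi, p in zip(phis, ideal))
err_mix = max(abs(np.trace(proj(phi) @ rho_mix) - p) for phi, p in zip(phis, ideal))
infidelity = 1 - abs(np.vdot(target, ket(theta))) ** 2

print(f"infidelity 1-F   : {infidelity:.5f}")
print(f"worst pure error : {err_pure:.5f}")  # ~ sin(theta/2), far larger than 1-F
print(f"worst mixed error: {err_mix:.5f}")   # equals 1-F
```

The worst-case error of the pure state scales as \sin(\theta/2) while the infidelity scales as \sin^2(\theta/2) , which is where the large discrepancy for small \theta comes from.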

We have three different lines for three states that all have the same fidelity, but the measurement errors behave very differently. Two things jump out.

Firstly, the fidelity is not a reliable measure of measurement error. For the pure states, in the worst case, the measurement error is 20 times higher than the error as quantified by the fidelity (the infidelity 1-F ). Indeed, for almost all measurement angles the infidelity considerably underestimates the measurement error of the pure states. Just about the only thing the fidelity tells you is that there is one measurement (e.g. with \varphi=0 ) for which the infidelity equals the measurement error. But there is no reason the quantum computer will perform precisely this measurement. This closes the case on fidelity.

Secondly, the plot highlights how the mixed state performs considerably better than the pure states with the same fidelity! Clearly, we should choose the preparation of the mixed state over either of the pure states. For each pure state, there are very small windows where the measurement error is less than for the mixed state. But when you perform a quantum computation, you will not have prior access to a plot like this to help you decide which states to prepare. Rather, you need to design your computation to provide a promise: if your error is \delta (correctly quantified), then all outcome probabilities are correct up to an error not exceeding \delta . The mixed state gives the best promise of this kind.

Now we can see that the “right” metric for measuring errors should be something like:
\delta =   \mathrm{max}_{\Pi} | \mathrm{Tr} [ \Pi \rho ] - \langle \psi \vert \Pi \vert \psi \rangle |
Nothing could be more grounded in experiment. This is precisely the trace norm (also called the 1-norm) measure of error, though it needs a bit of manipulation to get it into its most recognisable form. One might initially worry that the optimisation over projectors is tricky to compute, but it is not! We rearrange as follows

\delta =   \mathrm{max}_{\Pi} | \mathrm{Tr} [ \Pi (\rho- \vert \psi \rangle\langle \psi \vert) ]  |


M =   \rho- \vert \psi \rangle\langle \psi \vert = \sum_j \lambda_j  \vert \Phi_j \rangle\langle \Phi_j  \vert

where we have given the eigenvalue/eigenvector decomposition of this operator. The eigenvalues \lambda_j are real numbers because M is Hermitian.

It takes a little bit of thinking to realise that the maximum is achieved by a projection onto the subspace with positive eigenvalues
\delta =   \mathrm{max}_{\Pi} \vert \mathrm{Tr} [ \Pi M ] \vert = \sum_{j : \lambda_j > 0} \lambda_j
Because M is traceless, the eigenvalues sum to zero, so the positive eigenvalues account for exactly half the absolute sum and we can simplify as follows
\delta =  \frac{1}{2}\sum_{j }  | \lambda_j |
If we calculated this \delta   quantity and added it to the plots above, we would indeed see that it gives a strict upper bound on the measurement error for all measurements.
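The whole derivation fits in a few lines of numpy. Here is a sketch (the noisy state is a hypothetical example: a slightly rotated qubit with some depolarising noise mixed in) that computes \delta from the eigenvalues of M and checks it really does upper-bound the error for every projector:

```python
import numpy as np

rng = np.random.default_rng(1)

# Target pure state |psi> and a nearby noisy state rho (illustrative choice).
psi = np.array([1.0, 0.0])
theta = 0.2
phi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
rho = 0.9 * np.outer(phi, phi) + 0.1 * np.eye(2) / 2  # rotated + depolarised

M = rho - np.outer(psi, psi)
evals = np.linalg.eigvalsh(M)  # real, since M is Hermitian

# delta two ways: half the absolute eigenvalue sum, and the positive part.
delta = 0.5 * np.abs(evals).sum()
pos = evals[evals > 0].sum()
print(np.isclose(delta, pos))  # True: M is traceless

# delta upper-bounds |Tr[Pi rho] - <psi|Pi|psi>| for any projector Pi.
for _ in range(1000):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    v /= np.linalg.norm(v)
    Pi = np.outer(v, v.conj())
    assert abs(np.trace(Pi @ M)) <= delta + 1e-12
print("all random projectors respected the bound")
```

The random-projector loop is of course no proof, but it makes the "strict upper bound for all measurements" claim tangible.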

We have just been talking about measurement errors, but have actually derived the trace norm error. Let us connect this to the usual mathematical way of introducing the trace norm. For an operator M , we write || M ||_1 or || M ||_{\mathrm{tr}} for the trace norm, which for Hermitian M simply means the absolute sum of its eigenvalues. Given two states, the trace norm distance is defined as
\frac{1}{2} || \rho -\sigma ||_{\mathrm{tr}}
Hopefully, the reader can see that this is exactly what we have found above. In other words, we have the equivalence
\frac{1}{2} || \rho -\sigma ||_{\mathrm{tr}} = \mathrm{max}_{\Pi} \vert \mathrm{Tr} [ \Pi ( \rho -\sigma) ] \vert

Note that we need the density matrix to evaluate this error metric. If we have a simulation that works with pure states and we perform statistical sampling over some probability distribution to find the average trace norm error
\sum_{k} p_k \frac{1}{2} \big|\big| \vert \phi_k \rangle  \langle \phi_k \vert  - \vert \psi \rangle  \langle \psi \vert  \big|\big|_{\mathrm{tr}}
then this will massively overestimate the true error of the density matrix
\rho = \sum_{k} p_k \vert \phi_k \rangle  \langle \phi_k \vert
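A quick numpy sketch of this overestimate, reusing the \pm\theta ensemble from earlier (with \theta = 0.3 as an assumed illustrative value):

```python
import numpy as np

def trace_dist(A, B):
    # Half the absolute eigenvalue sum of A - B (both Hermitian).
    evals = np.linalg.eigvalsh(A - B)
    return 0.5 * np.abs(evals).sum()

theta = 0.3
ket = lambda t: np.array([np.cos(t / 2), np.sin(t / 2)])
target = np.outer(ket(0.0), ket(0.0))

# 50/50 ensemble of |theta> and |-theta>.
projs = [np.outer(ket(s * theta), ket(s * theta)) for s in (+1, -1)]

# Averaging per-state trace norm errors vs the error of the averaged rho.
avg_of_dists = 0.5 * sum(trace_dist(P, target) for P in projs)
dist_of_avg = trace_dist(0.5 * (projs[0] + projs[1]), target)

print(f"average of per-state errors: {avg_of_dists:.5f}")
print(f"error of averaged rho      : {dist_of_avg:.5f}")  # much smaller
```

For pure states the per-copy distance scales as \sin(\theta/2) while the averaged state achieves \sin^2(\theta/2) , so the gap widens as \theta shrinks.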

I hope this convinces you all to use the trace norm in future error quantification.

I want to finish by tying this back into my recent preprint:
That preprint looks not at state preparation but at the synthesis of a unitary. There I make use of the diamond norm distance for quantum channels. The reason for using it is essentially the same as above: if a channel has error \delta in diamond norm distance, then for any state preparation routine the final state will have error at most \delta in trace norm distance. Therefore, it correctly upper bounds the error in measurement statistics.

PhD applicant F.A.Q.

Many of the emails I receive regarding PhD projects under my supervision ask similar questions. To reduce the number of emails sent and received, I address the most common questions below.

  1. When should I apply? Applications are best submitted in late December or early January. Funding decisions regarding university studentships are typically made around February, so you want to apply before then. The university accepts applications all year round, and sometimes funding becomes available at other times of year, but your chances are significantly reduced if you apply later than mid-January.
  2. How do I apply? All applications must be made by completing the online application form, here. For this you will need: a CV, a transcript of your exam results and two referees who will provide letters of recommendation. You are very welcome to also email me informally, to introduce yourself or ask further questions about the project. But please be aware that an email is not considered a formal application.
  3. When is the start date? A successful January applicant would typically start in September the same year.
  4. Do you have funding for a PhD studentship? The department has a limited number of EPSRC funded studentships. If you have read an advert for a PhD position in my group, then (unless otherwise stated) it will be for an EPSRC funded studentship. These studentships are allocated competitively, which means the allocation is partly decided by the track record of the applicant. Therefore, I first shortlist and interview applicants, and then decide whether to put them forward for a studentship award. UK citizens are eligible for EPSRC studentships. EU (but non-UK) citizens are also eligible and encouraged to apply, but currently (as of 2018) only 10 percent of EPSRC studentships can be allocated to EU (non-UK) citizens.
  5. What is the Sheffield University Prize scholarship? This is a more prestigious award that is very competitive, and you can read more by following this link. It has the advantage that all nationalities are eligible. However, since it is extremely competitive, it is only worth applying if you are an exceptional candidate: for instance, if you scored top of the year in your exams and already have a published piece of research.
  6. The job advert asks for a degree in physics, computer science or mathematics but I only have one of these degrees, will I struggle? I only expect applicants to have one degree. Quantum computing and information is an interdisciplinary topic and so draws on tools and techniques from different degree courses. This makes it an exciting and rewarding area of research that makes you more aware of the connections between different disciplines. Indeed, I now consider the division of universities into departments a historical accident rather than a reflection of a fundamental division. Nevertheless, undergraduates often feel nervous about choosing an interdisciplinary research topic. As a supportive tale, I often point out that I started my PhD in quantum computing with an undergraduate degree in Physics and Philosophy, so I only had half a relevant degree. However, what you lack in experience must be replaced with enthusiasm. I expect PhD students to be fascinated by different aspects of physics, computer science and mathematics.
  7. Can I do my PhD remotely? An important part of a PhD is becoming part of a research group and interacting with your peers on a daily basis. As such, I expect students to be based in Sheffield and only permit remote supervision in exceptional circumstances. I might be more flexible on this point if you have your own external funding.
  8. Can you recommend any other PhD programmes? In the UK, I highly recommend the quantum themed CDTs (centres for doctoral training) run by University College London, Imperial College and Bristol. They all have one extra year of training at the beginning, after which you choose a three year project (so four years in total). An important point is that after the first year these CDTs (usually) provide the option of doing your research project at another university, giving you a lot of options. There are also a lot of great places internationally, too many to list.

QIP 2018 talks

You can find all the QIP 2018 talks here:

I gave a talk on random compiling that you can watch here:
The relevant paper is published here in Phys. Rev. A. After giving the talk, I wrote a pair of blog posts (post 1 and post 2) to address the most common questions that arose during QIP coffee breaks.