Analyzing the Schwinger Model Using Quantum Circuits

The popular image of quantum computing seems to revolve around problems in cryptography and theoretical computer science. However, quantum computation can also greatly improve our ability to simulate quantum systems. In fact, it was this application that Richard Feynman had in mind when he lamented the fact that a classical computer cannot efficiently simulate the evolution of a quantum system and proposed one of the first models for a quantum computer [1].

The aim of this post is to work through an example of how quantum circuits can be used to better understand a quantum system of interest. In particular, I will describe how one would compute the time evolution of a quantum state in the Schwinger model, a popular toy model in physics. This post follows the excellent paper “Quantum-Classical Computation of Schwinger Model Dynamics using Quantum Computers” (Klco et al., 2018) [2], although we will gloss over many details in the name of simplicity and time.

The Schwinger model, devised by American theoretical physicist Julian Schwinger (1918–1994), is a toy model for quantum chromodynamics (QCD), the branch of physics concerning quarks, gluons, and the strong nuclear force. It serves as a toy model because it exhibits properties similar to those of QCD, such as charge confinement and chiral symmetry breaking, while being substantially easier to work with: first, it is defined in just one spatial dimension as opposed to three, and second, it is exactly solvable in certain cases [3].

A picture of Julian Schwinger [4]. Among many other contributions to physics, Schwinger shared the 1965 Nobel Prize in Physics with Richard Feynman and Shin’ichirō Tomonaga for his work in quantum electrodynamics (QED).

For the sake of completeness, here is the Lagrangian density for the Schwinger model, which ultimately determines the equations of motion of the system. However, you are by no means expected to find this illuminating:
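In one common sign convention (the version in [2] may differ in conventions and notation), it reads

$$\mathcal{L} = \bar{\psi}\,\big(i\gamma^\mu(\partial_\mu + i e A_\mu) - m\big)\,\psi \;-\; \tfrac{1}{4}F_{\mu\nu}F^{\mu\nu},$$

where $\psi$ is the charged fermion field, $A_\mu$ is the gauge field, and $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$.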

The important things to know are that there are charged particles that can move on a line, and that there will be an electric field everywhere on the line (if you are familiar with electromagnetism and special relativity, you will recognize F as the Faraday tensor).

So how do we understand this model? We’re going to have to start with a fair bit of pre-processing. First off, if we want to use a finite number of qubits to simulate the system, our Hilbert space needs to be finite-dimensional, and right now it is not, because our particles can be anywhere on the line. To fix this, we discretize space into a lattice and say that a particle can only sit at one of the lattice sites. The lattice has a parameter describing the spacing between adjacent lattice points, and in the limit as this spacing goes to zero, we should recover the true physics of our system. An infinite line would still require infinitely many lattice sites, though, so we also restrict ourselves to a finite number of sites and enforce periodic boundary conditions, which means that the lattice wraps around on itself. Such a finite lattice has a finite-dimensional Hilbert space (at least for the particles; the electric field will need one more truncation, described below), and in the limit of large lattices, we should recover the physics of our original model.
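As a tiny illustration of what “wrapping around” means in practice, here is a sketch of a periodic 1-D lattice with an arbitrarily chosen number of sites and spacing (these values are not taken from [2]); the only real content is that neighbor indices are taken modulo the number of sites.

```python
# A minimal sketch of a 1-D periodic lattice: num_sites points separated by
# spacing a, where the neighbor of the last site wraps around to the first.
num_sites = 8      # illustrative value
spacing = 0.5      # illustrative lattice spacing a

positions = [n * spacing for n in range(num_sites)]

def right_neighbor(n):
    """Index of the site to the right of site n, with periodic wrap-around."""
    return (n + 1) % num_sites

print(positions)
print(right_neighbor(num_sites - 1))  # 0: the lattice closes on itself
```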

In the case of the Schwinger model, we will also need one extra step known as the Jordan–Wigner transformation. Ultimately, this converts our system of fermions and antifermions into a system of spin-1/2 particles, where spin-up and spin-down correspond to the (anti)fermion being present or absent in a particular way. We will not attempt to describe the formal procedure in detail.
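For the curious, here is a minimal sketch of the Jordan–Wigner idea on a generic chain of sites: a fermionic annihilation operator becomes a string of Z operators followed by a lowering operator, and the resulting matrices satisfy the fermionic anticommutation relations. The convention that |1⟩ means “occupied” is an illustrative choice here, and this is not the specific staggered-fermion bookkeeping used in [2].

```python
import numpy as np

# Pauli matrices and the 2x2 identity
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    """Tensor product of a list of 2x2 operators."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def jw_annihilation(n, num_sites):
    """Jordan-Wigner image of the fermionic annihilation operator at site n:
    a string of Z's on sites 0..n-1, then sigma^- = (X + iY)/2 at site n."""
    sigma_minus = (X + 1j * Y) / 2
    ops = [Z] * n + [sigma_minus] + [I2] * (num_sites - n - 1)
    return kron_all(ops)

# Sanity check on 2 sites: the mapped operators obey {a_m, a_n^dagger} = delta_{mn}.
a0, a1 = jw_annihilation(0, 2), jw_annihilation(1, 2)
print(np.allclose(a0 @ a1.conj().T + a1.conj().T @ a0, np.zeros((4, 4))))  # True
print(np.allclose(a0 @ a0.conj().T + a0.conj().T @ a0, np.eye(4)))         # True
```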

At the end, we finally get something like what you see below. Essentially, you will have a lattice with N “spatial sites”. Each spatial site will have both a fermion and an antifermion site, where the (anti)fermion can be present or absent. Then, each pair of adjacent particle sites will have a link between them that carries an integer electric flux value. Furthermore, we stipulate that the electric flux on each link must be at most a particular cutoff in absolute value; this is yet another approximation, and the true model would again be recovered by taking the cutoff to infinity. Physically, this amounts to cutting out states that have too much energy in the electric field. In the paper and here, we will let this cutoff be 1, so the electric flux values can only be +1, 0, or -1. In total, there will be 2N particle sites (N fermion, N antifermion, alternating) and 2N links. Shown below is a lattice with two spatial sites:

A diagram of a lattice with 2 spatial sites for the Schwinger model [2]. Note that the first spatial site consists of spots 0 and 1, while the second spatial site consists of spots 2 and 3. The system shown here is one where there are no particles, i.e. spots 0, 1, 2, and 3 are all in the “vac” state.

Here is the Hamiltonian of our lattice system. Recall that the expectation of the Hamiltonian for a particular state is that state’s energy, and that the Hamiltonian governs the time evolution of a quantum state through the Schrödinger equation:

The Hamiltonian for this lattice formulation of the Schwinger model [2]. Here, N_(fs) is the number of (anti)fermion sites, so N_(fs) = 2N, where N is the number of spatial sites.

Furthermore, we have another condition on our system, one enforced by a 1-D version of Gauss’s Law. The condition is that the electric flux on the link after a particle site, minus the electric flux on the link before that site, must equal the electric charge sitting at that site. For example, if an antifermion site is full, then since an antifermion has +1 charge, the electric flux value on the subsequent link must be 1 higher than the electric flux value on the previous link. This is a huge constraint. Without it, there would be 2*3*2*3 = 36 possibilities for each spatial site (two two-valued particle sites and two three-valued links), so our Hilbert space would have dimension 36^N; once we enforce the Gauss’s Law constraint, the dimension drops to roughly (3.25)^N [2].
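To make this concrete, here is a minimal brute-force sketch that enumerates the states of the lattice just described (occupations of the 2N particle sites and flux values on the 2N links with cutoff 1) and keeps only those satisfying the Gauss’s Law condition. The charge convention (even sites are fermion sites carrying charge -1 when occupied, odd sites are antifermion sites carrying charge +1) is an illustrative assumption, not necessarily the exact convention of [2].

```python
import itertools

def count_physical_states(num_spatial_sites, cutoff=1):
    """Brute-force count of lattice states satisfying the 1-D Gauss's law
    with periodic boundary conditions and an electric-flux cutoff."""
    n_particle_sites = 2 * num_spatial_sites
    flux_values = range(-cutoff, cutoff + 1)
    count = 0
    for occupation in itertools.product([0, 1], repeat=n_particle_sites):
        # Illustrative convention: even sites hold fermions (charge -1 when
        # occupied), odd sites hold antifermions (charge +1 when occupied).
        charges = [(-1 if site % 2 == 0 else +1) * occ
                   for site, occ in enumerate(occupation)]
        for fluxes in itertools.product(flux_values, repeat=n_particle_sites):
            # Gauss's law: flux after site n minus flux before site n equals
            # the charge at site n; the links wrap around periodically.
            if all(fluxes[n] - fluxes[n - 1] == charges[n]
                   for n in range(n_particle_sites)):
                count += 1
    return count

for N in (1, 2, 3):
    print(f"N = {N}: {count_physical_states(N)} physical states "
          f"(compare 3.25^N ≈ {3.25**N:.1f})")
```

Under these conventions the script finds 5, 13, and 38 physical states for N = 1, 2, 3, counts that grow at roughly the (3.25)^N rate quoted above.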

Putting all of that together, here is an example of a valid state on 4 spatial sites. Try to piece together how all of the aforementioned rules are followed here:

We already made our lattice finite-dimensional, but we can do even better. If we can find operators that commute with the Hamiltonian, we can block-diagonalize the Hamiltonian based on the eigenvalues of those operators. Via the Schrödinger equation, this also implies an important fact: the expectation of any operator commuting with the Hamiltonian stays constant in time. In the case of this model, there happen to be several such operators that we can point out.

First, there is the translation operator, which rotates the entire state by one spatial site:

Second, there is the charge conjugation operator, which replaces fermions with antifermions and vice versa. For this to make sense, it must also negate the electric flux values and rotate the system by half a spatial site:

Finally, there is the parity operator, which reflects the state across a line passing through a pair of opposite particle sites. For this to make sense, it must negate the electric flux values. Also notice that, unlike the charge conjugation and translation operators, there are multiple parity operators: there are N of them, one for each of the N lines passing through a pair of opposite particle sites. The operator P_{1-} shown below refers to reflecting about the diameter passing through the fermion slot of the first spatial site.
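As a small illustration of how such operators act, here is a sketch of the translation operator applied to the lattice data, using the same illustrative conventions as the enumeration above (a state is a tuple of occupations plus a tuple of link fluxes); it also checks that translation maps states respecting Gauss’s Law to states respecting Gauss’s Law.

```python
def translate(occupations, fluxes):
    """Rotate the lattice state by one spatial site, i.e. by two particle
    sites and two links (periodic boundary conditions)."""
    shift = 2
    return (occupations[-shift:] + occupations[:-shift],
            fluxes[-shift:] + fluxes[:-shift])

def satisfies_gauss_law(occupations, fluxes):
    """Check flux_after - flux_before = charge at every particle site."""
    n = len(occupations)
    charges = [(-1 if site % 2 == 0 else +1) * occ
               for site, occ in enumerate(occupations)]
    return all(fluxes[k] - fluxes[k - 1] == charges[k] for k in range(n))

# Example on 2 spatial sites: a fermion at site 0 and an antifermion at
# site 1, with one unit of (negative) flux on the link between them.
occ = (1, 1, 0, 0)
flux = (-1, 0, 0, 0)
print(satisfies_gauss_law(occ, flux))              # True
print(satisfies_gauss_law(*translate(occ, flux)))  # True: translation preserves it
```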

This allows us to reduce the dimension of the Hilbert space we need to consider at any one moment, since we can consider different blocks separately. For example, if we consider the states that are invariant under all of the above operators (we will call those CPT-invariant states), we get an asymptotic factor reduction of about 2N, so instead of worrying about (3.25)^N possible states, we have ~(3.25)^N / (2N) states. That may not look all that much better for large N, but while we’re working on noisy quantum computers that can’t handle more than tens of qubits, every little (qu)bit helps (see what I did there?).

Now let’s set up one concrete example. If we use two spatial sites and consider only the CPT-invariant states, there are just 5 possible states. Using one last trick that we will not explain here, we discard one of these to get down to 4, which means we can use 2 qubits to represent our system. Our Hamiltonian within this sector looks as follows:

All we did above was subtract a multiple of the identity operator to get a traceless operator, which will be more convenient to handle (the constant multiple of the identity operator is superfluous in the same way that it doesn’t matter where you define your zero point for potential energy). We then write the new Hamiltonian as a linear combination of a collection of operators O_i:

The operators O_i are the d²-1 generators of SU(d), where d is the dimension of the Hilbert space (d = 2^b, where b is the number of qubits). For two qubits (d = 4), they are all products A⊗B, where A and B are either Pauli matrices or the identity, with the exception of I⊗I; one example is shown above. The reason we do this is that exponentiating the Hamiltonian, which is what we need to do to solve the Schrödinger equation, can be reduced to exponentiating the generators. This is due to the mathematical structure of SU(d) (it is what is known as a Lie group), which we will not attempt to explain further. The reason we care about SU(d) at all is that it represents the set of all valid transformations on these qubits: any such transformation must be unitary, and since a global phase does not matter, we can restrict ourselves to transformations with determinant 1.
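As a sketch of what this decomposition looks like in practice, the snippet below subtracts the trace part from a Hermitian matrix and extracts the coefficients θ_i in the Pauli-product basis, using the fact that the Pauli products are orthogonal under the trace inner product. The matrix here is a random placeholder, not the actual sector Hamiltonian from [2].

```python
import numpy as np
from itertools import product

paulis = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_decompose(H):
    """Write a 4x4 Hermitian matrix as sum_i theta_i * (A_i ⊗ B_i), with
    A_i, B_i in {I, X, Y, Z}. Since Tr[(A⊗B)(A'⊗B')] = 4 only when the
    products match, the coefficients are theta_i = Tr[(A_i ⊗ B_i) H] / 4."""
    coeffs = {}
    for (na, A), (nb, B) in product(paulis.items(), repeat=2):
        theta = np.trace(np.kron(A, B) @ H).real / 4
        if abs(theta) > 1e-12:
            coeffs[na + nb] = theta
    return coeffs

# A random Hermitian placeholder standing in for the 2-qubit sector Hamiltonian.
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
H = (M + M.T) / 2

# Drop the multiple of the identity (the "II" term) to make H traceless.
H_traceless = H - np.trace(H) / 4 * np.eye(4)

coeffs = pauli_decompose(H_traceless)
reconstructed = sum(theta * np.kron(paulis[name[0]], paulis[name[1]])
                    for name, theta in coeffs.items())
print(np.allclose(reconstructed, H_traceless))  # True
print(coeffs)  # the theta_i that feed into the operator exponential
```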

Because the generators do not commute with one another, the exponential of the sum is not simply the product of the individual exponentials, so you have to use approximation techniques (e.g. Trotterization) to compute this operator exponential. Shown below is an example of the type of circuit one can use to compute an exponential of a linear combination of several SU(4) generators.

An example of a circuit used to compute an exponential of a linear combination of some standard SU(4) generators given by Pauli matrices [2].
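Before moving on to the full circuit, here is a quick numerical illustration of the Trotterization idea mentioned above: the exponential of a sum of non-commuting terms is approximated by alternating the exponentials of the individual terms, with the error shrinking as the number of steps grows. The two terms below are arbitrary stand-ins, not the actual Schwinger-model pieces.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Two non-commuting terms standing in for pieces of the Hamiltonian.
A = np.kron(X, X)
B = np.kron(Z, I2)
t = 1.0

exact = expm(-1j * t * (A + B))

# First-order Trotter: exp(-i(A+B)t) ≈ [exp(-iAt/n) exp(-iBt/n)]^n
for n in (1, 4, 16, 64):
    step = expm(-1j * t / n * A) @ expm(-1j * t / n * B)
    approx = np.linalg.matrix_power(step, n)
    print(n, np.linalg.norm(approx - exact))  # error shrinks roughly like 1/n
```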

Putting together a bunch of these circuit segments is what ultimately gives you the full circuit that computes the time evolution. We will not offer any more explanation than this, but you are definitely encouraged to read the original paper if you wish to learn more. In particular, once we have our initial state and the amount of time for which we want to evolve it, we can use the latter to compute the θ_i values that go into the operator exponential. Then we can build the circuit out of segments like the one above and feed our initial state into it. The output of the circuit will then be the time-evolved state!
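The circuits in [2] handle general combinations of SU(4) generators; as the simplest standard example of one such segment, here is the textbook construction of exp(-iθ Z⊗Z) from two CNOTs and a single-qubit z-rotation, checked against the direct matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def rz(phi):
    """Single-qubit z-rotation Rz(phi) = exp(-i phi Z / 2)."""
    return np.array([[np.exp(-1j * phi / 2), 0],
                     [0, np.exp(1j * phi / 2)]])

theta = 0.7  # an arbitrary angle for the check

# Standard identity: exp(-i theta Z⊗Z) = CNOT · (I ⊗ Rz(2 theta)) · CNOT
circuit = CNOT @ np.kron(I2, rz(2 * theta)) @ CNOT
target = expm(-1j * theta * np.kron(Z, Z))
print(np.allclose(circuit, target))  # True
```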

One interesting thing I see when I look back on this, and I hope you see this as well, is the sheer amount of physics knowledge that has to go into this workflow. Maybe one day this process will be more mechanical, but for now, we need a ton of “pre-processing” based on our physics expertise before we can benefit from the use of quantum computing.

I hope this post gave you an overview of how quantum computation can be used to perform calculations on quantum systems much faster than we can do classically. This has the potential to lead to major advances in condensed matter physics, chemistry, medicine, and more; but I also hope that you came to enjoy this fascinating subject just a bit more in its own right as well.

Sources:

[1] Feynman, Richard (1981). “Simulating physics with computers”. https://web.archive.org/web/20190830190404/https://people.eecs.berkeley.edu/~christos/classics/Feynman.pdf

[2] Klco, Natalie et al. (2018). “Quantum-Classical Computation of Schwinger Model Dynamics using Quantum Computers”. Phys. Rev. A 98, 032331 (2018). doi:10.1103/PhysRevA.98.032331.

[3] Schwinger, Julian (1962). “Gauge Invariance and Mass. II”. Physical Review 128 (5): 2425–2429. Bibcode:1962PhRv..128.2425S. doi:10.1103/PhysRev.128.2425.

[4] “Schwinger, Julian, 1918–1994”. American Institute of Physics. https://history.aip.org/phn/11601024.html
