# From a state of light to state of the art: the photonic path to millions of qubits

## A blueprint for a universal, fault-tolerant, and scalable photonic quantum computer

*By Ilan Tzitrin and Ish Dhand*

What is the best computer we can make, constrained only by the known laws of physics? Vying for the title is a *universal, fault-tolerant, and scalable* quantum computer: a machine founded in quantum theory that can run any quantum algorithm, detect and correct any errors that could jeopardize the computation, and accommodate a vast number of qubits.

Of course, there are constraints beyond the physical — namely, the practical — that prompt us to search for the best platform for such a device. At Xanadu, we believe that photonics holds the key: basing the computer on quantum states of light is the best and fastest route toward a modular, easy-to-network, room temperature device with millions of qubits.


In our recent paper, we give a detailed blueprint for a quantum computer with all these properties. Let us show you how our proposed architecture works.

## Seeing is computing: having measurements perform the gates

At its core, a quantum computer is a machine that creates qubits in specific states, transforms them with quantum gates, and finally measures them. So far, this sounds a lot like how your computer or phone operates: preparation, computation, and readout. So what makes a quantum computer special? The answer is complicated, but it comes down to this: quantum gates acting on quantum states can create *entanglement*, a uniquely quantum property. Entanglement melds qubits in a way that makes it impossible to describe the state of each individual qubit; it is at the heart of the coveted speedup over classical computers.

Is this the only way a quantum computer can work? No — what we’ve just described is a *gate-based* computer. In a different but equivalent approach, the *measurement-based* model, an entangled state — called a *cluster state* — is prepared at the beginning. We can visualize the cluster state like beads on a string:

The beads here are qubits, and the string represents the entanglement between them. Since the entanglement is already there, we don’t need to create any more; all that’s left to do is to measure the qubits. The measurement isn’t just a readout of the computation; it *is* the computation. A different measurement setting will result in a different gate being applied to the qubit.

For the computer to be universal, the cluster state must at least be two-dimensional — a sheet:

For the computer to be fault-tolerant, we will need a three-dimensional lattice:

Think of it this way: in a classical computer, we can shield each bit of information from errors through repetition or redundancy. For example, to protect a single bit, we can use three bits instead; if any one of them flips by accident, we can still deduce what bit we had:
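The majority-vote idea behind the three-bit repetition code can be sketched in a few lines of code (a minimal illustration with hypothetical helper names, not part of the architecture itself):

```python
def encode(bit):
    """Protect one logical bit by repeating it three times."""
    return [bit, bit, bit]

def decode(bits):
    """Recover the logical bit by majority vote: tolerates any single flip."""
    return 1 if sum(bits) >= 2 else 0

codeword = encode(1)           # [1, 1, 1]
codeword[0] ^= 1               # an accidental flip: [0, 1, 1]
assert decode(codeword) == 1   # the logical bit survives
```

Two simultaneous flips would fool the majority vote, which is why more elaborate codes are needed in practice.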

Because quantum mechanics forbids us from making copies of unknown quantum states, the redundancy must be supplied in a different way. This leads us to the 3D cluster state, whose size and layout allow for what is termed *discrete-variable* (*DV*) topological quantum error correction.

So far, this quantum computer is rather abstract. The devil is in the details: what are those circles and edges *really*? The answers depend on the platform and the architecture. Some common choices are states of electrons, charged atoms, or single photons. At Xanadu, our approach is a little different.

## CV states: another tier of protection

In two earlier blog posts, we saw how light — an oscillation in electric and magnetic fields — can be prepared in *continuous-variable* (*CV*) states. These states can be depicted as patterns on an infinite flat surface called *phase space*, which captures the distributions of the electromagnetic field associated with the light. Heisenberg’s uncertainty principle says that we cannot precisely know both field values at once, limiting the shapes that we can make.

A coherent state, which describes the light emerging from a laser, is a circle in phase space (the symmetry of the circle means there is equal uncertainty about the values of the electric and magnetic fields). Compressing the circle in one direction and stretching it in another creates a *squeezed state* (an increased certainty in the magnetic field comes at the expense of increased uncertainty in the electric field, and vice versa). More complex patterns are also possible. *Gottesman-Kitaev-Preskill* (*GKP*) states, for example, are like little playing pieces arranged in a checkerboard pattern in phase space:
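The squeezing trade-off can be sketched numerically. A minimal illustration, assuming the standard convention where the vacuum has quadrature variance 1/2 (so the Heisenberg bound on the variance product is 1/4):

```python
import math

def squeezed_variances(r):
    """Quadrature variances of a squeezed vacuum state with squeezing
    parameter r: one direction is compressed, the other stretched."""
    var_x = math.exp(-2 * r) / 2  # squeezed quadrature: more certain
    var_p = math.exp(2 * r) / 2   # anti-squeezed quadrature: less certain
    return var_x, var_p

vx, vp = squeezed_variances(r=1.0)
# Heisenberg: the product of variances never drops below 1/4;
# an ideal squeezed state saturates that bound.
assert abs(vx * vp - 0.25) < 1e-12
assert vx < 0.5 < vp  # compressed one way, stretched the other
```

No matter how large r gets, the product stays pinned at the minimum: certainty gained in one quadrature is paid for in the other.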

GKP states make excellent qubits because they have built-in error-correction capabilities. By using the breadth of phase space — the size of the checkerboard and the number of checkers — they can detect any small error, such as bumping the board accidentally and moving all the pieces a little. Then, correcting this error amounts to re-centering all the pieces in their appropriate squares.

Replacing the beads in the cluster state with GKP states of light gives us two tiers of protection: the CV or inner tier, which uses the largeness of phase space, and the DV or outer tier, which uses the abundance of qubits to detect and correct errors.

But there is a snag: GKP states are difficult to make. A promising preparation scheme we devised at Xanadu requires photon-counting detectors, which operate at very low temperatures. And even though the procedure is *heralded*, meaning we know when a GKP state emerges, it is also *probabilistic*, meaning this happens only occasionally.

One way to boost your chances of making a GKP state is through *multiplexing*: running many state generations at the same time so that at least one of them produces our state. But multiplexing poses hefty hardware requirements, which means we also need some theoretical advances in order to pull this off.
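The arithmetic behind multiplexing is simple: if one attempt heralds a GKP state with probability p, then at least one of n parallel attempts succeeds with probability 1 − (1 − p)^n. A small sketch (the numbers are illustrative, not from the paper):

```python
import math

def multiplexed_success(p, n):
    """Chance that at least one of n parallel attempts heralds a GKP state."""
    return 1 - (1 - p) ** n

def attempts_needed(p, target):
    """Smallest number of parallel generators reaching a target success rate."""
    return math.ceil(math.log(1 - target) / math.log(1 - p))

# e.g. a 1% heralding probability per attempt:
p = 0.01
n = attempts_needed(p, target=0.99)   # 459 parallel attempts
assert multiplexed_success(p, n) >= 0.99
```

Hundreds of parallel state generators per cluster-state site is exactly the “hefty hardware requirement” mentioned above — and the motivation for the theoretical advances that follow.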

This is where our new architecture comes in.

## Best of both worlds: a hybrid cluster state

Not all CV states are equally tricky to prepare. Take the squeezed states — phase space ellipses — that we saw earlier. While they make worse qubits than GKP states, they can be generated *deterministically*; in other words, we can guarantee their availability. In light of this, we devised a ‘best-of-both-worlds’ state to underlie our computer: a hybrid cluster state where some beads are squeezed states (blue) and others are GKP qubits (red):

Whenever a red bead isn’t produced (a no-show), a blue bead takes its place. We can’t replace all the red beads (the GKP qubits) entirely — we need them for a fault-tolerant computer — but we can still make good use of the squeezed states in the computation. The blue beads do add some quantum noise to their neighbours, but it’s no issue: we can take care of it with a specialized error-correction procedure that we present. By simulating this procedure, we discovered that about a quarter of the beads can be blue without sacrificing fault tolerance, which simplifies the state generation and reduces resource requirements significantly!
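With multiplexing, the expected fraction of blue beads is just the per-site failure probability (1 − p)^n, which can be checked against the roughly 25% tolerance found in the simulations. A sketch with hypothetical numbers, purely for illustration:

```python
def squeezed_fraction(p, n):
    """Expected fraction of cluster sites falling back to squeezed states,
    given n multiplexed GKP attempts per site, each succeeding with
    probability p."""
    return (1 - p) ** n

# Hypothetical numbers: with a 5% heralding probability and 30 attempts
# per site, only about 21% of the beads end up blue -- under the ~25%
# tolerance found in the simulations described above.
frac = squeezed_fraction(p=0.05, n=30)
assert frac < 0.25
```

Tolerating blue beads thus translates directly into fewer parallel generators per site, which is where the hardware savings come from.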

But how does one produce such a large hybrid state? We give an answer for this too, in the form of a novel modularized hardware scheme.

## A stitch in time: making and measuring the hybrid cluster

Our proposed computer consists of four blocks: the *state-preparation factory*, the *multiplexer*, the *computational module*, and the *photonic Quantum Processing Unit* (*QPU*).

The state-preparation factory produces high-quality GKP qubits, albeit with low probability.

The multiplexer runs many state generations in parallel to boost the likelihood that a GKP state is produced. In the event of a no-show, the module substitutes in a squeezed state.

The computational module stitches (entangles) the CV states to make our hybrid cluster state. Unlike platforms that use single photons, in our case every stitch is deterministic.

Finally, the QPU performs the measurements required to implement any quantum algorithm and corrects for errors. These CV measurements are performed by *homodyne detectors*, which operate at room temperature and can be easily integrated on-chip in very large arrays.
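The division of labour among the four blocks can be sketched as a toy pipeline. All names and probabilities below are illustrative stand-ins, not details from the paper:

```python
import random

def factory(p_success=0.01):
    """State-preparation factory: heralds a GKP qubit only occasionally."""
    return "GKP" if random.random() < p_success else None

def multiplexer(n_attempts=300):
    """Run many factories in parallel; on a no-show, substitute a
    deterministically available squeezed state."""
    for _ in range(n_attempts):
        state = factory()
        if state is not None:
            return state
    return "squeezed"

def computational_module(sites):
    """Stitch (entangle) the CV states into the hybrid cluster state;
    every stitch is deterministic."""
    return list(sites)  # stand-in for the deterministic entangling gates

def qpu(cluster):
    """Homodyne-measure every site; the choice of measurement settings is
    what drives the computation."""
    return [f"measured {s}" for s in cluster]

cluster = computational_module(multiplexer() for _ in range(8))
outcomes = qpu(cluster)
assert len(outcomes) == 8
```

The key structural point the sketch captures: only the factory is probabilistic; everything downstream of the multiplexer runs deterministically.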

These four blocks work in tandem to operate the computer. If you’re curious about the nitty-gritty, have a look at our paper, where we detail how everything fits together.

With the anatomy of our computer in mind, let’s now tell you why our architecture is so promising from a hardware perspective.

## Planar and simple: the benefits of our design

Each of the components of our computer can be manufactured using existing technology, developed and refined over decades in the fabrication of integrated photonic chips for the telecommunications industry. This practicality is reinforced by other technological advantages of our architecture. For one, it is modular: the chips in each piece of the puzzle are specialized to the given task, none given too much work to do. Another perk of this approach is that the chips are planar and moderately sized, further simplifying their design, fabrication, and integration.

Second — thanks in part to the modularity — the cryogenic requirements of our architecture are minimal. The photon-counting detectors make the state-preparation factory the only part of the computer running at low temperatures. Unlike those of other platforms, the cryostats we need are small and commercially available, and this necessity disappears entirely once room-temperature photon counters become available!

Lastly, our proposed computer is fast. The clock speed of the device is set by the homodyne detectors in the QPU, which operate efficiently and quickly — faster than the threshold detectors employed by other platforms.

All these advantages are pivotal to the colossal undertaking that is building a universal fault-tolerant quantum computer. The modularity, speed, and room temperature operation of our architecture will allow photonics to outrun other platforms in the race to the finish.


This is our architecture, in a nutshell. To be sure, there is still a lot of work left to be done. But with this powerful alliance of theory and technology, and with all the strengths of the photonic platform, we at Xanadu are convinced that our design is the best candidate for a universal, fault-tolerant, scalable quantum computer with millions of qubits.