The Quantum Series

Ignacio Cattivelli
Urudata Software
Nov 28, 2018 · 13 min read

Part I — Laying the path towards quantum computing

In this series of posts, we’ll introduce the reader to the world of quantum computing. Starting with the very basics, we’ll build a mathematical framework that allows us to represent classical and quantum computations, and take it from there in order to understand quantum phenomena and how they can be used to our benefit.

Readers of this series need to:

  • Be curious about quantum computing.
  • Have an open mind and be ready to embrace the math, whatever the logical interpretations behind it.
  • Look into the world five to ten years from now and how it could be shaped by this ground-breaking technology.

The concepts explored in these articles will be easier to understand for readers who have a background in information technology, namely how bits and logic gates work, and some linear algebra.

In this post, we’ll begin by walking through the history of quantum mechanics and its main postulates. This will allow us to have an intuition behind the properties that we’ll discuss once we dive into quantum computing.

After providing the background behind quantum mechanics, we’ll give a mathematical model based on linear algebra that will allow us to represent bits and logic gates as vectors and matrices. This will come in very handy once we start to work with quantum bits.

Quantum physics background

Back in 1885, experimental evidence showed that certain gases, when heated, produced an emission spectrum that wasn’t continuous. In fact, the discrete lines could be indexed by natural numbers and were unique to each element.

Figure 1 — Emission spectrum of different gases

In other words, the emission spectrum showed “quantized” energy. This is the origin of quantum physics, which proposes that subatomic particles (in this case electrons) have a discrete set of states, only one of which they occupy at a time, and, given enough energy, can “jump” instantaneously to a higher-energy state. Once in the higher state, they can jump back to the lower-energy state, releasing energy in the form of photons (visible light), which is what we see in the emission spectrum.

This was proposed by Niels Bohr in his famous atomic model based upon orbitals.

Figure 2 — Bohr’s atomic model

This model seems counter-intuitive in the macroscopic world. If, for instance, we were to translate it to planetary orbits, it would be like saying that the Earth suddenly jumps to the orbit of Mars, without traversing the distance between both orbits. It is as if Earth suddenly leaped from one position to the other. This is what is referred to as a “quantum leap”.

The double slit experiment

A known experiment in the study of waves is the so-called “double slit experiment”, where a wave (say, of water) passes through two separated slits, forming two in-phase sub-waves which interact with each other, forming what is known as an interference pattern. That is, at some points the crests or valleys add up, increasing the amplitude (and hence the size of the wave) and producing constructive interference, while at other points they cancel out, producing destructive interference.

Figure 3 — Interference pattern of water

This holds true for other types of waves too, such as sound waves or electromagnetic waves. In the case of sound waves, this is the well-known phenomenon that happens in places with poor acoustics, where at some spots music plays very loud (constructive interference) while at others you can’t hear it at all (destructive interference).

When this experiment was first carried out with electromagnetic waves (light), an interference pattern emerged that was consistent with that of sound waves and water waves. The problem was that, at the time, light was believed to behave as discrete, massless energy particles, named photons; it was his theory of the photoelectric effect, based on this idea, that earned Einstein his Nobel Prize.

Figure 4 — Interference pattern of light

Hence, what is known as the wave-particle duality arose: light was believed to behave both as particles (photons) and as an electromagnetic wave. A few years later, the physicist Louis de Broglie generalized this theory by stating that all matter has wave properties, not only photons. In objects of larger mass, such as the reader, this wave is insignificant, but in the case of subatomic particles it plays an exceptional role in describing their behavior.

When, in 1927, the double slit experiment was carried out on electrons, de Broglie’s theory was experimentally confirmed, since these particles with mass behaved exactly as he expected: a wave presenting interference patterns. Electrons accumulated in certain positions of the detection screen following the same patterns as in the case of water, sound or light.

So far, this may seem like basics to many, or new information to others, but it is definitely within the grasp of logic. Things start to get fuzzier when we carry out the same double-slit experiment firing one electron at a time. In this case, each electron individually just hits the screen at some point, not showing anything of interest. However, once we have fired enough electrons, we start to detect the same interference pattern, showing that they accumulate in the same way they did when sent in groups.

Figure 5 — Double slit experiment applied to electrons

How can this happen? How can one electron interfere with itself? This is where quantum physics explains an otherwise unnatural phenomenon. The theory states that each electron is not in one specific place at a given time, but in a superposition of the possible states which, once measured, collapses to a position on the screen. In other words, the wave function associated with each particle represents the probability of the particle being at a certain position at a given time. When we make the measurement, the electron is found in one of the possible positions represented by its probability wave which, as such, respects wave properties such as interference patterns. This is known as the Copenhagen interpretation.

This probabilistic model is what prompted Einstein’s famous phrase, “God doesn’t play dice with the universe,” to which Niels Bohr replied, “Don’t tell God what to do with his dice.”

As far as pop science goes, readers may be familiar with the thought experiment proposed by Erwin Schrödinger in 1935, which attempts to translate the consequences of superposition to the macroscopic world:

“A cat, a flask of poison, and a radioactive source are placed in a sealed box. If an internal monitor (e.g. a Geiger counter) detects radioactivity (i.e. a single atom decaying), the flask is shattered, releasing the poison, which kills the cat. The Copenhagen interpretation of quantum mechanics implies that after a while, the cat is simultaneously alive and dead. Yet, when one looks in the box, one sees the cat either alive or dead, not both alive and dead. This poses the question of when exactly quantum superposition ends and reality collapses into one possibility or the other.”

Entanglement

As the mathematics behind quantum physics was developed (as we will see in a later post), Einstein discovered that a consequence of this theory is the entanglement of particles, by which the state of one particle cannot be described independently of its entangled partner and, once one of the two particles is measured, yielding a specific state, that of the other is automatically determined. In other words, measuring one particle collapses both the particle we are measuring and the entangled particle.

This can happen both at a local scale and at a remote scale, it being theoretically possible to reproduce this phenomenon across the universe. Let’s suppose we have two entangled particles; we keep one on our planet and ship the other to Alpha Centauri. As long as we make no measurements, both particles will be in an entangled superposition of states, as predicted by their probability wave functions. However, the instant I measure my particle on Earth, its sister in Alpha Centauri will collapse.

Einstein presented this prediction, together with Podolsky and Rosen, as a paradox, known as the EPR paradox, that was meant to show quantum physics was an incomplete theory. The phenomenon could not be possible, as it would violate locality (cause and effect): it would allow an action taken in one place to affect a particle in another part of the universe, coordinating instantaneously and thus exceeding the speed of light. Einstein described this as a “spooky action at a distance”.

As said, the EPR paradox was meant to show that quantum physics was incomplete, not wrong, and therefore other possible explanations were sought, giving birth to the “hidden variable” theory. According to this theory, particles already know beforehand what state they’ll collapse to once measured, so they don’t coordinate across the universe, but are rather found in a specific state that was set from the start.

This would be similar to saying that, in the double slit experiment, particles aren’t in all states at the same time, but at one place at any given time, determined via the wave function. In other words, rather than in a superposition of states, the particle is already in the state read upon measurement.

Hidden variables turn quantum theory from a probabilistic model into a traditional causal one, more coherent with other physical theories such as relativity.

Bell’s inequality

It took some time to find an answer to the EPR paradox, while quantum physics continued to develop and be confirmed in more and more experiments.

In 1964, John Stewart Bell published a paper introducing what is known as Bell’s inequality, which states that, should hidden variable theory be correct, certain inequalities must hold when observing quantum phenomena; in reality, those inequalities are violated. The only way to explain these physical observations is through entanglement and instantaneous coordination, as stated by quantum theory.

In other words, the EPR paradox is just how the universe works and locality is, in fact, broken.

There is a home experiment that can be done to show this behavior with very basic materials: three polarized filters (a.k.a. three sets of polarized sunglasses). Light (photons) can be filtered according to its polarization, which can be seen as the direction in which the wave oscillates. Once we pass light through a polarized lens, only photons with a specific polarization will pass through it, or collapse to it, while others will be rejected.

Figure 6 — Light passing through a single polarized lens

If we were to place a second filter on top of the first and rotate it, we would see that at a certain position all light gets through and then, as we rotate it, fewer photons pass until both polarizations are orthogonal and hence all light is blocked.

Figure 7 — Two orthogonal filters block all light

What does the reader predict would happen if a third filter were introduced between the two? It would seem logical that light would still be blocked. However, the middle filter acts on the photons, modifying the probability that they will pass through the final filter, yielding an unexpected result: light starts to come through.

Figure 8 — Light starts to pass when a third filter is applied in the middle

The intensity of light after these measurements (the probability of photons being accepted or rejected) is not what we would expect according to a classical hidden variable model: the percentages are different from those hidden variables predict. The inequalities such measurements violate are known as Bell’s inequalities and show that “spooky action at a distance” does indeed take place: particles are in a superposition of states, according to the Copenhagen interpretation, and, if entangled, collapse simultaneously to the corresponding state.
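The surprising three-filter behavior can be made quantitative with Malus’s law, a standard optics result not spelled out above: the intensity transmitted by a polarizer falls off as the squared cosine of the angle between the light’s polarization and the filter’s axis. A minimal sketch (the function name and setup are illustrative, not from the original article):

```python
import math

def transmitted(intensity, angles):
    """Fraction of light surviving a sequence of polarizers (Malus's law).

    `angles` are absolute filter orientations in degrees; the light is
    assumed to start out polarized at 0 degrees (i.e. it has already
    passed a first filter oriented at 0 degrees).
    """
    current = 0.0
    for a in angles:
        # Each filter keeps cos^2 of the angle relative to the current polarization
        intensity *= math.cos(math.radians(a - current)) ** 2
        current = a
    return intensity

# Two orthogonal filters (0 and 90 degrees): everything is blocked
print(transmitted(1.0, [90]))      # ~0.0

# Insert a 45-degree filter between them: a quarter of the light gets through
print(transmitted(1.0, [45, 90]))  # 0.25
```

The same cos² rule gives the probability that a single photon passes each filter, which is why inserting the middle filter lets light through instead of blocking it.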

Summarizing

We have introduced a physical intuition to the basic concepts of quantum mechanics: superposition, probability wave functions and entanglement.

Quantum theory is very hard for our mind to grasp. Citing Richard Feynman, “If you think you understand quantum mechanics, you don’t understand quantum mechanics”.

However, this theory is the very basis of all semiconductors and has been critical to the technological developments of the second half of the twentieth century that led to the digital world we live in today. Without quantum mechanics, the semiconductors that led to transistors, logic gates, processors and personal computers wouldn’t exist.

For further reading, please refer to the following Wikipedia entry: https://en.wikipedia.org/wiki/Quantum_entanglement

The maths of bits and logic gates

We’ll now define a mathematical framework that will allow us to work with bits and logic gates by performing operations on matrices and vectors. Why is this important? Quantum mechanics, like quantum computing, is very daunting to the mind, since it describes a world very different from the one we observe. Particles at a subatomic level seem to follow a different set of rules from ours, and these rules are very hard to explain through natural language. This is where mathematics comes to help, because the maths behind these concepts isn’t that daunting, and by following the maths we’ll be able to arrive at the conclusions that will let us make the most out of quantum phenomena. In the words of David Mermin: “If I were forced to sum up in one sentence what the Copenhagen interpretation says to me, it would be ‘Shut up and calculate!’”

Bits are a unit of information that can take two possible values: 0 or 1. Logic gates are transformations that are applied to one or more bits, in order to produce an output.

There are four possible operations that can be performed when working with a single bit:

  • Identity: The input bit is equal to the output bit.
  • Negation: The input bit is negated in the output bit.
  • Set 0: Whatever the input bit, the output bit is zero.
  • Set 1: Whatever the input bit, the output bit is one.

Let’s define the bit states 0 and 1 as the column vectors |0> = [1, 0]ᵀ and |1> = [0, 1]ᵀ.

Note: The notation |0> is called “Dirac Notation” of a vector. It is an abbreviated way of writing the vectors noted above inside what is known as a “ket”. The conjugate transpose of a vector is represented as a “bra” with the notation <0|. This notation is also known as the bra-ket notation.
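The original figures with the explicit vectors are not reproduced in this copy; following the standard convention, |0> and |1> are the two computational-basis column vectors. A minimal sketch in Python with NumPy:

```python
import numpy as np

# Standard convention: |0> and |1> as the computational basis vectors
ket0 = np.array([1, 0])  # |0>
ket1 = np.array([0, 1])  # |1>

# They are orthonormal: <0|0> = 1 and <0|1> = 0
print(ket0 @ ket0)  # 1
print(ket0 @ ket1)  # 0
```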

Now we can define a logic gate as a matrix that, once multiplied with a vector representing the bit’s state, produces a resulting bit state. Let’s define the matrices that represent all four single-bit operations described above.

Identity

We are using basic matrix multiplication rules for the above calculations. If the reader has any doubts with these rules, please review basic matrix operations.
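The identity matrix from the missing figure is the 2×2 matrix with ones on the diagonal; a quick check, assuming the basis vectors |0> = [1, 0]ᵀ and |1> = [0, 1]ᵀ:

```python
import numpy as np

I = np.array([[1, 0],
              [0, 1]])  # identity gate: output equals input

ket0 = np.array([1, 0])  # |0>
ket1 = np.array([0, 1])  # |1>

print(I @ ket0)  # [1 0], i.e. |0>
print(I @ ket1)  # [0 1], i.e. |1>
```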

Negation

As we can see, by flipping the rows of the identity matrix we also flip the resulting vector, hence negating it.
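A sketch of the negation gate, consistent with the description above (the identity matrix with its rows flipped):

```python
import numpy as np

X = np.array([[0, 1],
              [1, 0]])  # negation (NOT) gate: the flipped identity

ket0 = np.array([1, 0])  # |0>
ket1 = np.array([0, 1])  # |1>

print(X @ ket0)  # [0 1], i.e. |1>
print(X @ ket1)  # [1 0], i.e. |0>
```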

Set 0

Set 1
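The matrices for the two constant gates are also missing from this copy; one way to write them, under the same basis convention, is a matrix whose columns all equal the desired output:

```python
import numpy as np

SET0 = np.array([[1, 1],
                 [0, 0]])  # any input maps to |0>
SET1 = np.array([[0, 0],
                 [1, 1]])  # any input maps to |1>

ket0 = np.array([1, 0])  # |0>
ket1 = np.array([0, 1])  # |1>

print(SET0 @ ket1)  # [1 0], i.e. |0>
print(SET1 @ ket0)  # [0 1], i.e. |1>
```

Note that, unlike Identity and Negation, these two matrices are not invertible: the input cannot be recovered from the output.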

Working with multiple bits

So far, we have defined a way to represent a single bit and how logic gates can be applied to it. Now we’re going to extend this in order to represent more complex multi-bit computations as performed by computers. When we have a set of bits, each one will have its own representing vector and, when applying a logic gate to the set of bits, we should combine these in order to be able to perform those transformations on the entire set.

One convenient way to do this is through tensor vectors. A tensor vector is the result of the tensor product of n vectors, which for our purposes can be defined in the following way:

The four possible states two bits can have can be represented as tensor products:
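In NumPy, the tensor product of vectors is computed by the Kronecker product, `np.kron`; the four two-bit basis states come out as the four 4-dimensional basis vectors:

```python
import numpy as np

ket0 = np.array([1, 0])  # |0>
ket1 = np.array([0, 1])  # |1>

# Tensor (Kronecker) products give the four two-bit basis states
ket00 = np.kron(ket0, ket0)  # [1 0 0 0]
ket01 = np.kron(ket0, ket1)  # [0 1 0 0]
ket10 = np.kron(ket1, ket0)  # [0 0 1 0]
ket11 = np.kron(ket1, ket1)  # [0 0 0 1]
```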

We can now define commonly known multi-bit logic gates as matrices applied to these tensors. Since our vectors are larger (of dimension 2ⁿ, where n is the number of bits), we need larger matrices; in this example, a 2 by 4 matrix. For example, the OR gate would be:

If we apply this gate to all four possible two-bit combinations:

Hence, our OR gate.
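The OR matrix itself did not survive in this copy; a 2×4 matrix consistent with the description (only |00> maps to |0>, the other three combinations map to |1>) is sketched below:

```python
import numpy as np

OR = np.array([[1, 0, 0, 0],
               [0, 1, 1, 1]])  # 2x4: two input bits in, one output bit out

ket0 = np.array([1, 0])  # |0>
ket1 = np.array([0, 1])  # |1>

# All four two-bit combinations as tensor products
ket00 = np.kron(ket0, ket0)
ket01 = np.kron(ket0, ket1)
ket10 = np.kron(ket1, ket0)
ket11 = np.kron(ket1, ket1)

print(OR @ ket00)  # [1 0], i.e. |0>
print(OR @ ket01)  # [0 1], i.e. |1>
print(OR @ ket10)  # [0 1], i.e. |1>
print(OR @ ket11)  # [0 1], i.e. |1>
```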

The CNot gate

Finally, to close this article, we’ll introduce the CNot gate. This gate is not very common in classical computing, but can easily be defined, as we’ll see. In quantum computing, however, it will play a very important role.

The CNot gate, or controlled-not gate, has two input bits and two output bits: the control and the target bit. If the control bit is set to zero, the gate behaves as an identity on the target bit. If the control bit is set to one, the gate behaves as a negation gate on the target bit.

This gate can be defined by the matrix:

If we apply this gate to all four possible two-bit combinations:
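The CNot matrix is also missing from this copy; its standard 4×4 form (with the first bit as the control), checked against all four two-bit combinations:

```python
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])  # first bit is the control, second the target

ket0 = np.array([1, 0])  # |0>
ket1 = np.array([0, 1])  # |1>
ket00 = np.kron(ket0, ket0)
ket01 = np.kron(ket0, ket1)
ket10 = np.kron(ket1, ket0)
ket11 = np.kron(ket1, ket1)

print(CNOT @ ket00)  # [1 0 0 0]: control is 0, target unchanged
print(CNOT @ ket01)  # [0 1 0 0]: control is 0, target unchanged
print(CNOT @ ket10)  # [0 0 0 1]: control is 1, target flipped -> |11>
print(CNOT @ ket11)  # [0 0 1 0]: control is 1, target flipped -> |10>
```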

Wrapping up

We have walked through the history of quantum mechanics and its main postulates. We have also defined a mathematical framework for working with computations on single and multiple bits, applying logic gates as transformations represented by matrices.

In the following article, we’ll apply all this knowledge to a new type of bit: the quantum bit, or qubit.

Email us at us@urudata.com for any comments or questions!

Our purpose at Urudata Software is to make our customers more efficient through technology.

We develop BPM (Workflow) and Document Management solutions and implement these with our experienced team, which analyzes our customers’ business processes looking for efficiency gains and automation opportunities. These projects allow for the generation of cost efficiencies, improved quality and predictability, better customer experience and availability of data for analytics.


Software engineer, passionate about technology. Former teacher. Consulting Manager @ Urudata Software.