Quantum Computing, Zero to Hero: Part Two — Computational Models

Gwilym Newton
12 min read · Jul 2, 2019

--

A guide to get you from foundation maths and physics, to writing a quantum algorithm and understanding everything between.

The Series

Part 1

It might be news to some of you that there are lots of different models of computation, or even that something called a “model of computation” exists to begin with.

A model of computation is a description of how memory, communication and computational units are organised so that they can compute a function.

So how we:
1) Remember things
2) Transmit things, and
3) Work things out

The aim is for these to work together to do something useful. It’s not something we talk about very much, because while there are lots of models of computation, one of them won — or has been winning for a long time — the Turing Machine. I am 100% sure that whatever device you are reading this on is a Turing Machine. So we have long ago just assumed that a computer = a Turing Machine.

This was valid, because for a long time they pretty much all were. The problem is that while a quantum Turing Machine is possible, it’s not how we build quantum computers. We use the Circuit Model which, as you will discover, is equivalent in power. So we are going to need to learn to put a few assumptions aside.

Turing Machines

Named, as many of you would guess, after its inventor, the famed Alan Turing, a Turing Machine is a mathematical model of computation.

It defines an abstract machine [The Computer],

which manipulates symbols [Computation]

on a strip of tape [Memory and Communication]

according to a table of rules [Computation].

This is a very abstract definition, approaching computing from a maths point of view. But, most importantly, take any algorithm and a Turing Machine capable of simulating that algorithm’s logic can be constructed.

An “algorithm”, by the way, is computer science speak for “a list of instructions for doing something”. If you wrote down all the steps to make someone a cup of tea, along with the checks you need to make (i.e “how many spoons of sugar”), you would have an “algorithm” for making tea.

Because we can always build a Turing Machine for any algorithm, we call this “Universal Computation”. An important note to take heed of here is that while the Turing Machine can run any algorithm, we don’t always have an algorithm for everything. Some problems have no algorithm at all (deciding whether an arbitrary program will ever finish running, for example), and for others the only algorithms we know are impractically slow; finding the shortest route between a set of cities is famously one of these. So computers and Turing Machines can’t do everything.

Let’s go back to the Turing Machine. It’s a machine that mostly exists on paper, but if we were to build it, the Turing Machine has three main parts:

  1. The Tape: This is an infinite roll of tape. This tape can be written on, and written symbols can be erased or rewritten with different ones.
  2. The Head: This is the thing reading and writing the tape. The head can move up and down the tape and write, erase or re-write symbols.
  3. The Register: This is the memory of the machine. It holds the current state the machine is in.

All a Turing machine does is read and write from a piece of tape. At any step, it can write a symbol and move left or right. But even with this limited set of actions it can do all kinds of things. In fact, it can do all the things that any programming language does; hence “Universal Computation” !
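To make this concrete, here is a minimal sketch of a Turing Machine in Python. The rule table is a toy example of my own invention (it just inverts a binary string), not any standard machine:

```python
# A minimal Turing Machine sketch. The rule table is a toy example:
# it flips every bit on the tape, then halts when it reads a blank.
# Each rule maps (state, symbol) -> (symbol_to_write, move, next_state).
RULES = {
    ("invert", "0"): ("1", +1, "invert"),
    ("invert", "1"): ("0", +1, "invert"),
    ("invert", "_"): ("_", 0, "halt"),  # "_" marks a blank cell
}

def run(tape, state="invert"):
    tape = list(tape) + ["_"]   # the (finitely used) infinite tape
    head = 0                    # the head starts at the left end
    while state != "halt":
        symbol = tape[head]                           # read the tape
        write, move, state = RULES[(state, symbol)]   # look up the rule
        tape[head] = write                            # write a symbol
        head += move                                  # move the head
    return "".join(tape).rstrip("_")

print(run("0110"))  # -> 1001
```

Swap in a different rule table and the same loop computes a different function; that interchangeability is what makes the model so general.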

This is how the internals of your desktop computer work.

  1. The Tape is the Cache/ Ram/ Hard Disk of your computer. They are all just fancy versions of a long tape of memory. Both the data you compute on, and the program that decides what to do next are stored here.
  2. The Head is the CPU. It can do all the maths required to decide which part of the “tape” to move to, and what to read and write.
  3. The Register is the register on your CPU. In practice there are normally a few of these, most notably the “program counter” that remembers where we are in a computer program.

If you’ve noticed that’s a bit vague, it’s because your computer is a particular implementation of a Turing Machine known as the Von Neumann Architecture.


Circuits

As I mentioned earlier, there are many other models besides the Turing Machine. Feel free to Google them, because for today we are only interested in one other type.

Next we are going to talk about circuits, as in the kind that live on a circuit board. While they are often used to implement a Turing Machine, they are a computational model of their own, and we will talk about how they interrelate later. First, we need to define a few things.

“Boolean function”: Boolean means True or False (One or Zero), and a Boolean function is a function that takes one or more Boolean values, and transforms them into one or more different Boolean values. The simplest is NOT. It turns a True into a False, and a False into a True. Another example would be AND, which outputs True only if both its inputs are also True. In other words:

False AND False = False
False AND True = False
True AND False = False
True AND True = True.

“Logic gate”: This is a physical device on a circuit board that realizes a Boolean function. For example, let’s say we have a little box, and run two wires in, and one out. If that box only sends an electrical impulse out (True/One) when both the wires going in also have an electrical impulse, it’s a logic gate for an AND function.

So finally we can get to defining circuits. Logic circuits are straight lines that take a set number of inputs and pass them through logic gates, in order to create a set number of outputs.

There are a couple of important things to note from this.
1) There’s always a set number of inputs and outputs. A circuit made for a certain sized input can’t accept any others.

2) Straight lines mean no branching. The circuit can’t say IF (this is true) THEN (do this).

3) No branching means no looping. If you can’t branch, you can’t decide when to leave a loop.

Despite this, a circuit is equal in power to a Turing Machine. For this, we need to accept two ideas (you can do the further reading if you need):

1) Circuits using just AND and NOT are Universal.

2) Universal Computational models are equivalent to each other [Strong Church Turing Thesis]
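As a quick illustration of the first idea, here is a sketch in Python of building other Boolean functions out of nothing but AND and NOT:

```python
# The only two "primitive" gates we allow ourselves:
def NOT(a): return not a
def AND(a, b): return a and b

# OR via De Morgan's law: a OR b = NOT(NOT(a) AND NOT(b))
def OR(a, b): return NOT(AND(NOT(a), NOT(b)))

# XOR: true when exactly one input is true
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))

# Print the full truth tables of our derived gates
for a in (False, True):
    for b in (False, True):
        print(a, b, "| OR:", OR(a, b), "XOR:", XOR(a, b))
```

The same trick extends to any Boolean function, which is what “Universal” means here.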

Reversibility

Reversibility is the ability to run a computation backwards (I will explain in a little while why this is so important).

It is possible to build reversible logic gates, and this is going to be very important. So let’s quickly look at a classical gate: the AND gate.

Looking at its truth table, you can see we lose information when we go through the gate. If the output is a ‘False’ (Zero), we don’t know if the inputs were False/False or False/True or True/False. Because of this, we can’t reverse the process.

However, there is a family of reversible gate types, which we are going to see later in this series. Instead of an AND, we can use a TOFFOLI gate. It can be a little hard to understand, but when its third input C = 0, it acts as a reversible AND gate. We will go into details on individual gates later.
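Here is a quick classical sketch in Python of what a TOFFOLI gate does, showing both the “reversible AND” behaviour and the fact that applying it twice gets you back where you started:

```python
def toffoli(a, b, c):
    # Flip the target bit c only when both control bits a and b are 1;
    # the controls pass through unchanged.
    return (a, b, c ^ (a & b))

# With the target starting at 0, the third output bit is a AND b:
print(toffoli(1, 1, 0))  # -> (1, 1, 1)
print(toffoli(1, 0, 0))  # -> (1, 0, 0)

# Reversible: applying the gate twice returns the original inputs,
# because no information was thrown away.
print(toffoli(*toffoli(1, 1, 0)))  # -> (1, 1, 0)
```

Notice the gate has as many outputs as inputs; that is what lets it avoid the information loss of a plain AND.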


Entanglement

So I promised I would get to why we need to make the gates reversible, and it all comes back to entanglement.

You see, entanglement is about information. The state of one particle is entangled with another, and when we use a non-reversible gate we lose that entanglement.

Let’s imagine you (Alice) and your friend (Bob) are playing a (weird and boring) game.

You each have a stone with a white side, and a black side.

When you play, Bob shows his stone to Alice. If it’s white side up, Alice has to flip her stone over (for those of you paying attention, our game is a Controlled NOT gate, or a CNOT gate).

So to write it out as a table we would get (Bob’s stone never changes, and Alice’s flips whenever Bob’s is white):

Bob before / Alice before → Bob after / Alice after
White / White → White / Black
White / Black → White / White
Black / White → Black / White
Black / Black → Black / Black

Now, let’s put Bob’s stone in superposition, so its state is White/Black with a 50% chance of being in either state.

You can see they are now entangled.
If you want a quick refresher on entanglement: it means two things are in a superposition of states, such that by manipulating one, it affects the other.

We don’t know what side Alice’s stone is on, because it’s based on Bob’s. Until we measure Bob’s (look to see if the black or the white side is up), we can’t know what happened to Alice’s.

For instance, we could “measure” Alice’s stone. Let’s say when we measure it, it’s white, and we remember it was white before we did anything. We know it hasn’t flipped, and according to the rules of the game that would only happen if Bob’s stone is black, so we now KNOW it’s black.

We could also do it the other way around. If we measured Bob’s stone and it was black, and we know Alice’s was white to begin with, then we KNOW Alice’s is now white without looking at it.

This is entanglement.

There is another thing we can do: “reversibility”. Let’s change the game up a bit, and say Alice and Bob have such terrible memories that as soon as you play each round they forget everything that happened before.

We can get rid of the “before parts” because of their terrible memories.

What we can still do is measure both of them. Let’s say we look at the stones and both Alice’s and Bob’s are white.

Okay, so according to the rules, Bob’s will have stayed the same (so his was white before), and Alice’s flips if Bob’s was white, which it was, so Alice’s must have been black.

This is reversibility, the ability to construct the past from the present, in order to run time backwards.
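The game above is just a classical CNOT, and we can sketch it in Python (encoding white as 1 and black as 0 is my own choice) to show how measuring the present lets us reconstruct the past:

```python
WHITE, BLACK = 1, 0

def play(bob, alice):
    # The rule of the game: if Bob's stone shows white, Alice flips hers.
    # Bob's stone never changes. This is a classical CNOT.
    return bob, alice ^ (1 if bob == WHITE else 0)

# Forward: Bob is white, Alice is black, so Alice flips to white
after = play(WHITE, BLACK)
print(after)  # -> (1, 1): both stones now show white

# Backwards: playing the same rule on the "after" state reconstructs
# the "before" state, because CNOT is its own inverse
print(play(*after))  # -> (1, 0): Bob white, Alice black, as before
```

This matches the story above: seeing both stones white tells us Alice’s must have been black before the round was played.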

So, this is the “reversible” version of the game. Let’s change the rules up again, and give Bob sudden amnesia and an urge to throw rocks.

So after the game is played, he throws his rock away and forgets what side it’s on.

So, we can measure Alice’s.

We can’t measure Bob’s because he forgot, and threw it away.

As a result they can’t be entangled, because while you could measure Alice’s and work out what Bob’s was, you can’t measure Bob’s to work out what Alice’s was.
Also, we can’t reverse it, because we are missing part of the information. If we measured Alice’s and found it was white, we don’t know if it was flipped to white, or was always white.

Hopefully you can see from this:

Entanglement = “Preservation of Information” = Reversibility

This is why we need reversible gates, because lost entanglement is bad. We will explain why in a few more articles as it requires some maths, but you deserve a crumb of truth.

People sometimes say n Qbits (Quantum Bits) are equal to 2^n classical bits.
So 4 Qbits = 16 classical bits,
8 Qbits = 256 classical bits.

16 Qbits = 65,536 classical bits.

This performance improvement is MASSIVE, but what this really means is 16 fully entangled Qbits = 65,536 classical bits. The entanglement is what gives us this power, and because we lose entanglement if we lose information, we have to make quantum computing reversible.
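The numbers quoted above are just powers of two, which is easy to check for yourself:

```python
def equivalent_classical_bits(n_qbits):
    # n fully entangled Qbits correspond to 2^n classical bits
    return 2 ** n_qbits

for n in (4, 8, 16):
    print(n, "Qbits ->", equivalent_classical_bits(n), "classical bits")
# -> 4 Qbits -> 16, 8 Qbits -> 256, 16 Qbits -> 65536
```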

Complexity

Okay, here is the last thing we are going to learn today. It’s about how complex algorithms are, and thus how we can compare them.

First, Big O. This is the way we describe the worst case of an algorithm’s performance (we also have a little o, and a Theta, but we don’t use them as much, since worst cases are the most important the majority of the time).

So the Big-O of an algorithm says: “If you gave the worst possible input to an algorithm, this is how badly it would perform.”

So let’s think about finding data. We are going to lay it out in two ways. You are going to have a number of boxes — “n” — and one of them has your data in it.

The first way we are going to lay out the data is to put all the boxes in a row on the floor, and you can chalk a number from 0 to n in front of each. We will call this THE ARRAY.

We are going to take the same boxes, write a number from 0 to n on the top of each of them, and stack them all up on top of each other at random. We will call this THE STACK.

Now, I will tell you I want the data from box… 15.

If you went to THE ARRAY, you could just walk up to box 15 because it has a chalk number in front of it. We would say this has “Big O 1” or O(1) because it took one attempt to find the box, and no matter what number I gave you, it would always take one attempt even in the worst case.

Next, if you went to THE STACK you would have to check the top box, and if it was not box 15, you would need to pop a box off the top and look at another, and then maybe another. In the worst case box 15 would be at the bottom, and you’d need to take every other box off one at a time. So the worst case is you have to move “n” boxes (the total number of boxes), so we would say this has “Big O n” or O(n).

You can clearly see that for getting to your box, THE STACK is worse than THE ARRAY, but now you can explain why. O(n) is worse than O(1). This is how we compare algorithms; you can see the Big O of more of them at [Big O cheat-sheet — http://www.bigocheatsheet.com/].
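Here is a sketch of both lookups in Python; the 100 boxes and the target box 15 are made-up illustration values:

```python
boxes = list(range(100))   # box n just holds the number n, for illustration
target = 15

# THE ARRAY: one step, no matter which box you ask for -- O(1)
def array_lookup(boxes, target):
    return boxes[target]

# THE STACK: pop boxes off the top until you find it -- O(n) worst case
def stack_lookup(boxes, target):
    stack = list(boxes)        # the boxes piled up, box 99 on top
    steps = 0
    while stack:
        box = stack.pop()      # take the top box off the pile
        steps += 1
        if box == target:
            return box, steps  # found it, and how many boxes we moved
    raise KeyError(target)

print(array_lookup(boxes, target))   # -> 15, in a single step
print(stack_lookup(boxes, target))   # -> (15, 85): 85 pops this time
```

With box 15 buried near the bottom of a 100-box pile, the stack lookup has to move 85 boxes; in the worst case (the very bottom) it would move all 100.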

Okay. The next thing you need to understand is what we computer scientists mean when we say an algorithm is complete.

“A complete algorithm always gives the best answer to the problem it’s solving.”

For example, what if we had an algorithm that stacked all the stuff in your cupboard in the least possible space? If you could prove that every time it stacked everything, it always gave the optimal answer, and there would never be a better way, we would say that algorithm is “Complete”.

The last thing is just to check you know what a polynomial is. It’s just this:

xⁿ + … + x⁴ + x³ + x² + x + c

It’s an equation that matches that pattern.

So you’re ready now to see the biggest and most important question in the whole of computer science. Are you ready for it?

P = NP ?

A bit underwhelming, isn’t it? Let’s explain what it’s trying to say.

There are problems for which there are complete algorithms whose complexity (Big O) is a polynomial, O(xⁿ + … + x³ + x² + x + c). We call these P.

There are problems for which we have NOT found complete algorithms with polynomial complexity. We call these NP.

The question is, have we simply never found the complete algorithms for the NP problems or do they not exist?

Is NP part of P, and we just don’t know it?

P = NP?

Why did you need to know this? Well, the question is: are better quantum algorithms out there? For some problems, are there quantum algorithms with better performance (a smaller Big O)? Yes, actually; this is something we do know.

Are there polynomial-time complete quantum algorithms for the NP problems? Maybe.

Onward to Quantum Computing

Let’s quickly review what we learned.

  1. What a Turing Machine is, and how it models the way existing computers work.
  2. An alternative model; the circuit model of computing.
  3. That reversible computing allows us to go backwards in time when doing computation, and return to past states.
  4. That reversible computing is required to preserve entanglement, and entanglement gives quantum computing its power.
  5. How to compare the complexity of algorithms, and a glimpse at the idea that for some problems, quantum can be better.

--


Gwilym Newton

Experienced Technology Specialist, Skilled in Emerging Technology such as Brain-computer Interfaces, Virtual and Augmented Reality, Blockchain and ML