Quantum Computing and Artificial Intelligence

James Wall
The Quantum Authority
Dec 22, 2017

“Quantum Computing and Artificial Intelligence”. I get the feeling that many readers will see this title and think: two ‘buzz phrases’ right next to each other.

However, artificial intelligence (AI) will end up being one of the most prominent applications of quantum computing as quantum computers continue to be developed.

Why is this? Well, many current artificial intelligence applications have some sort of learning model. Most learning models consist of the following steps:

1. Amass a significant amount of data about the subject

  • For example, if we were interested in training an application to tell whether or not a dog is in a picture, we would want to amass photos.

2. Label the data

  • Following our example from above, we would label each photo as “has dog” or “does not have dog”.

3. Feed the labeled data into the machine learning application

  • By “feeding the data”, we mean that we give the photo and its label to the application. At a high level, the idea is that the program should begin to pick up commonalities in photos labeled “dog” and commonalities in photos labeled “no dog”. These commonalities could be colors, sequences of colors, patterns in the photo, or shapes in the photo (which the program discovers via patterns in the photo’s pixels), to name a few.
  • At the end of this process, the program should have a decent idea of what criteria in a photo constitute “dog”, and consequently what lack of criteria constitutes “no dog”.

4. Train the program

  • At this stage, we give our program photos without labels, and the program attempts to determine on its own whether or not there is a dog in each photo. After the computer comes up with its determination, the human involved in the process indicates whether or not the computer was correct.
  • If the computer was correct, this reinforces its notion of what has to be in a photo for there to be a dog in it. If the computer was wrong, it adjusts its notion of what is or is not a dog and corrects its future determinations accordingly.

5. Use the program

At a high level, that’s all there really is to artificial intelligence/machine learning.

(AI/ML experts, we know there’s more to it than that, and we know there are more methods and learning models out there. This post is intended to introduce the concept at a high level in a relatively easy-to-understand way.)
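To make those five steps a bit more concrete, here is a minimal toy sketch in Python (an illustration written for this post, not a real image classifier): each “photo” is a made-up feature vector, and a simple nearest-centroid rule stands in for a real learning model such as a neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Steps 1-2: amass and label data. Each "photo" is a hypothetical
# 4-number feature vector rather than real pixels.
dog_photos = rng.normal(loc=1.0, scale=0.5, size=(100, 4))      # labeled "has dog"
no_dog_photos = rng.normal(loc=-1.0, scale=0.5, size=(100, 4))  # labeled "no dog"

# Step 3: "feed" the labeled data. A nearest-centroid rule simply
# averages the features it saw for each label.
dog_centroid = dog_photos.mean(axis=0)
no_dog_centroid = no_dog_photos.mean(axis=0)

# Steps 4-5: given a new, unlabeled photo, predict by comparing distances
# to the two centroids.
def predict(photo_features):
    d_dog = np.linalg.norm(photo_features - dog_centroid)
    d_none = np.linalg.norm(photo_features - no_dog_centroid)
    return "has dog" if d_dog < d_none else "no dog"

print(predict(rng.normal(loc=1.0, scale=0.5, size=4)))   # very likely "has dog"
print(predict(rng.normal(loc=-1.0, scale=0.5, size=4)))  # very likely "no dog"
```

A production system would use far richer features and a far more capable model, but the amass-label-feed-train-use flow is the same.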

So what is the major factor that AI relies on in order to successfully make correct determinations? In our case, to determine whether or not there is a dog in a photo?

If you guessed “data”, you get a gold star. It is, indeed, data. But not just any amount of data. Many applications of AI require millions of samples to train on in order to get successful results. And the crazy thing is, “millions of samples” is usually only a small fraction of all the available data in a given area.

Think about how many pictures of dogs there are on the internet! If you have a dog, you’ve probably posted at least a few photos of it.

Let’s try some Fermi analysis (aka guesstimating how many there are of something based on a few reasonable parameters). For argument’s sake, let’s say one in five of the approximately 8 billion people in the global population owns a dog. Let’s say further that, on average, each dog owner has posted 1 photo of his or her dog somewhere on the Internet that is free and accessible to an AI application.

These assumptions imply that there are:

(8 billion people) × (1 dog owner / 5 people) × (1 photo / 1 dog owner) = 1.6 billion photos of dogs available on the internet!

That is a massive amount of data. And I’m sure this is a relatively low estimate of the number of photos of dogs on the Internet. Believe it or not, it takes even computers a little bit of time to process through so many data points.

Let’s assume further that it takes a computer one one-thousandth of a second to process a photo (that is, 0.001 seconds per photo).

At 1.6 billion photos, that gives us:

1.6 billion photos × (1 second / 1,000 photos) = 1.6 million seconds ≈ 26,667 minutes ≈ 444 hours ≈ 18.5 days ≈ 2.6 weeks.
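If you want to check the arithmetic, here is the same back-of-envelope calculation in Python (using the article’s assumed numbers, which are of course just guesses):

```python
# Back-of-envelope check of the two estimates above: ~8 billion people,
# 1 in 5 owns a dog, 1 posted photo per owner, 0.001 s of processing per photo.
population = 8e9
dog_owner_fraction = 1 / 5
photos_per_owner = 1
seconds_per_photo = 0.001

photos = population * dog_owner_fraction * photos_per_owner
total_seconds = photos * seconds_per_photo

print(f"photos:  {photos:,.0f}")                      # 1,600,000,000
print(f"minutes: {total_seconds / 60:,.0f}")          # ~26,667
print(f"hours:   {total_seconds / 3600:,.0f}")        # ~444
print(f"days:    {total_seconds / 86400:.1f}")        # ~18.5
print(f"weeks:   {total_seconds / (86400 * 7):.1f}")  # ~2.6
```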

More than two and a half weeks! Now, there are faster computers than our assumption presumes, but even so, a ballpark estimate of a few days to a few weeks is reasonable for an average consumer PC processing that many images.

That is so slow!

In order to speed this process up, we can do one of three things. We can:

  1. Change the software to be faster,
  2. Change the hardware to be faster, or
  3. Change both to be faster.

Now, the speed of software is dictated by its algorithms, and algorithms are continually being refined to run faster while keeping as much of their accuracy as possible. However, many algorithms have a lower bound on their runtime. In other words, many algorithms have a maximum speed limit. This is for various reasons. To name one, many algorithms have to touch every element available to them: given a certain number of items to look at, categorize, or identify, the algorithm has to perform its work on every single one of those elements in order to get an accurate result.

So in the case of our dog photo problem, say we value accuracy above performance but still want our algorithm to run as fast as possible while staying close to 100% accurate. We would need to scan every photo in our data set, which means that, no matter how fast our per-photo method is, the algorithm still has to process each photo. However long it takes to process one photo, our algorithm has to pay that cost once for every photo in the set.

Now, some of you are probably saying “Can’t we just reduce how much time it takes to process one photo?”

And you would be right. Indeed, that’s what a lot of software engineers do for a living. But that optimization can only take us so far. No matter how quick we make the per-photo processing, we still have to go through every photo, and that takes time.
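Here is a small sketch of that point (a hypothetical per-photo check written for this post, not a real image classifier): no matter how cheap the check inside the loop becomes, it still runs once for every photo, so total time grows in direct proportion to the number of photos.

```python
# Toy linear scan: has_dog() stands in for whatever per-photo processing
# our application does. However fast it is, it runs once per photo.

def count_dog_photos(photos, has_dog):
    count = 0
    for photo in photos:          # every photo must be touched
        if has_dog(photo):
            count += 1
    return count

# Stand-in data: each "photo" is just a number, and the "classifier"
# is a trivial threshold-style check.
photos = list(range(1_000_000))
print(count_dog_photos(photos, has_dog=lambda p: p % 2 == 0))  # 500000
```

Doubling the number of photos doubles the number of calls to the check, no matter how much we optimize the check itself.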

Alright, so it seems that option 1 (optimize software) might work, but that the marginal gains are pretty small at this point.

Ok, so how about option 2? What about hardware? Well, it turns out traditional binary-oriented hardware can only go so fast. Why? Check out our intro to quantum computers post. To summarize, most hardware uses different electrical voltages to represent ones and zeros. Electricity is fast, but it can flip between ones and zeros only so quickly. That means there is a speed limit (which varies from computer to computer) on how fast the hardware can perform.

Enter quantum computing. Quantum computers are designed to go beyond binary hardware and transcend the proverbial speed limit of binary-oriented machines: instead of a bit that is strictly a one or a zero, a qubit can exist in a superposition of the basis states, and in some settings a single qubit can carry more information than a single classical bit.
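As a loose illustration of what “more states than the basis states” means (a toy simulation written for this post, not how any particular quantum machine is actually programmed), here is a tiny NumPy state-vector example: a Hadamard gate takes a qubit that starts in the basis state |0⟩ and puts it into an equal superposition of |0⟩ and |1⟩.

```python
import numpy as np

# Toy state-vector simulation of a single qubit (illustrative only).
ket0 = np.array([1.0, 0.0])            # the basis state |0>
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)   # Hadamard gate

state = H @ ket0                       # equal superposition of |0> and |1>
probabilities = np.abs(state) ** 2

print(state)          # ~[0.707 0.707]
print(probabilities)  # [0.5 0.5] -> measuring gives 0 or 1 with equal odds
```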

Ok, so it seems like option 2 (optimize hardware) is pretty viable.

What about option 3 (optimize hardware and software)? Well, if we assume that we can optimize our hardware significantly via quantum computing, it stands to reason that we could optimize our software on top of that.

There are two steps to execute this option.

The first is the programming language.

Microsoft announced in 2017 that it was developing Q#, one of the first high-level programming languages designed specifically for quantum computers. In other words, it has started work on a language that a programmer can use to develop applications that exploit the unique power quantum computing provides.

So we might soon have a language to code in for quantum computers. Awesome.

The second step is algorithms.

Most modern algorithms were designed around the assumption of binary-oriented hardware, and many of them could be carried over to quantum computers more or less as they are. However, many of these algorithms can be optimized further to take advantage of the different architecture that quantum computers provide.

This includes AI algorithms. Let’s return to the dog photo analysis problem that we posed near the beginning of this article. We estimated that this would take anywhere from a few days to a few weeks to complete.

It is estimated that quantum computers with optimized AI algorithms could complete the same problem in seconds.
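To get a feel for where estimates like that come from (a rough illustration of my own, not a claim about any specific machine), note that some quantum algorithms change how the cost scales with the amount of data. Grover’s search algorithm, for example, needs on the order of √N operations to search N unstructured items rather than N. Plugging our numbers into that scaling, and (unrealistically) assuming the same 0.001 seconds per operation on both machines:

```python
import math

# Scaling comparison only: assumes the same hypothetical 0.001 s per
# operation for both machines, which is NOT realistic for actual quantum
# hardware. The point is how N vs. sqrt(N) changes the picture.
n_photos = 1.6e9
seconds_per_op = 0.001

classical_ops = n_photos                # must touch every photo
grover_like_ops = math.sqrt(n_photos)   # ~sqrt(N) for Grover-style search

print(f"classical: {classical_ops * seconds_per_op / 86400:.1f} days")  # ~18.5 days
print(f"sqrt(N):   {grover_like_ops * seconds_per_op:.0f} seconds")     # ~40 seconds
```

Real quantum hardware has very different per-operation costs and overheads, so treat this purely as intuition about scaling, not as a performance prediction.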

Want to learn more? Shoot us a message with your questions and comments.


James Wall is a tech and travel enthusiast and the founder of The Quantum Authority.