QNN with PennyLane — Overview of the steps

Anjanakrishnan
3 min read · Sep 23, 2023


Day 23 — Quantum30 Challenge 2.0

Today, let’s again look at quantum neural networks, but this time, using PennyLane and TensorFlow. The following article is an overview of everything mentioned in the QNN chapter from PennyLane.

Introduction

![Image from the PennyLane QNN chapter](https://pennylane.ai/static/22b272db610a352a289d1e73d66751a6/40601/qnn_1.png)

We’ll look at an example of a Variational Quantum Algorithm (VQA), which has the advantage of tunable parameters that are adjusted while solving the problem. These algorithms are widely used for optimizing mathematical functions and appear in molecular chemistry, financial modelling, cryptography, and more.

Now, even though VQAs are designed for optimization problems, they pose an optimization challenge of their own: the tunable parameters, which play a central role, must themselves be chosen well. So, in summary, we have two nested optimization procedures: one over the tunable parameters and one over the given problem statement.

The former is tackled using classical machine learning, more specifically, the recurrent neural network (RNN); an algorithm known for its ability to retain past information and use it for decision-making and predictions. The RNN plays a key role in optimising the parameters of a Variational Quantum Circuit (VQC), a central component in quantum computing.

Now, let’s glance at how the RNN is used to optimize the Variational Quantum Circuit’s (VQC) parameters:

  1. Initialization: The VQC evaluates a cost function (denoted y0) at some initial parameters (denoted x0) on a quantum computer. The cost function measures the error between the desired outcome and the quantum circuit’s output.
  2. RNN’s Role: At each iteration i, the RNN takes the latest cost value (yi) and its own hidden state (hi) as inputs. It uses this information to propose new parameters (x(i+1)) and updates its hidden state to (h(i+1)). The RNN is trained to minimise the resulting cost values, effectively learning how to set up the VQC for optimal performance.

This iterative process equips the RNN with the ability to provide initial parameter suggestions for new, unknown graphs, essentially teaching the RNN how to prepare the VQC for optimal results.
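The steps above can be sketched in plain Python. Here `vqc_cost` is a classical stand-in for the quantum cost evaluation, and `rnn_cell` is a hand-written stateful update rather than a trained LSTM; both are illustrative assumptions, meant only to show how the cost yi and hidden state hi flow between the two components.

```python
import numpy as np

def vqc_cost(x):
    # Stand-in for evaluating the VQC's cost on a quantum device;
    # a quadratic bowl with minimum at x = 1.5 (an arbitrary choice).
    return (x - 1.5) ** 2

def rnn_cell(x, y, h, eps=1e-3, lr=0.3):
    # Stand-in for the RNN cell: consumes (y_i, h_i), emits (x_{i+1}, h_{i+1}).
    # A trained RNN would learn this update; here a fixed rule based on a
    # crude finite-difference slope estimate illustrates the information flow.
    grad = (vqc_cost(x + eps) - y) / eps
    h = 0.5 * h + grad          # hidden state accumulates past gradient info
    return x - lr * h, h

x, h = 0.0, 0.0                 # initial parameters and hidden state
y = vqc_cost(x)                 # step 1: initial cost evaluation y0
for _ in range(40):             # step 2: propose, evaluate, repeat
    x, h = rnn_cell(x, y, h)
    y = vqc_cost(x)
```

The loop drives `x` toward the cost minimum; the real scheme replaces the fixed rule with a recurrent network trained across many problem instances, so it can also propose good *initial* parameters for unseen graphs.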

Now that we have understood how the RNN is helpful, the rest of the article focuses on solving the Max-Cut problem using the Quantum Approximate Optimization Algorithm (QAOA) in conjunction with Quantum Neural Networks (QNN). This process is divided into four main stages:

Initialising: Generating the training dataset of graphs and building the accompanying RNN

Circuit building: Using PennyLane’s built-in packages, a QAOA quantum circuit is created.

Training: Training the optimal parameters for the quantum circuit based on a dataset of graphs

Results: Checking how the algorithm performs on a new graph that was not present in the training dataset, plotting the loss function, and comparing with standard SGD (Stochastic Gradient Descent).

For gradient descent, we compute the gradient over the entire dataset; SGD instead divides the dataset into smaller random subsets (mini-batches) and computes the gradient for each mini-batch. Because the subsets are random, SGD can help the optimization escape local minima and explore the parameter space more effectively. SGD may not converge to the absolute minimum, but it often finds a good solution and is particularly useful for training large neural networks because of its computational efficiency.
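The difference can be sketched with a toy least-squares problem in NumPy. The data, learning rate, and batch size are arbitrary illustrative choices; both variants fit the slope of a noisy line, but SGD uses many cheap mini-batch updates instead of one full-dataset gradient per step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 3x + noise, so the true slope is 3.0
X = rng.normal(size=200)
y = 3.0 * X + 0.1 * rng.normal(size=200)

def grad(w, xb, yb):
    # Gradient of the mean squared error 0.5 * mean((w*x - y)^2) w.r.t. w
    return np.mean((w * xb - yb) * xb)

# Full-batch gradient descent: one gradient over the whole dataset per step
w_gd = 0.0
for _ in range(50):
    w_gd -= 0.1 * grad(w_gd, X, y)

# SGD: shuffle each epoch, then update once per mini-batch of 20 samples
w_sgd = 0.0
for epoch in range(5):
    idx = rng.permutation(len(X))
    for start in range(0, len(X), 20):
        batch = idx[start:start + 20]
        w_sgd -= 0.1 * grad(w_sgd, X[batch], y[batch])
```

Both estimates land near the true slope of 3.0, but SGD touched only 20 samples per update; the randomness of each mini-batch is what jiggles the iterate out of shallow local minima in non-convex problems.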

And in this comparison, it is observed that the combination of the RNN with the QNN minimised the cost function in fewer iterations than plain SGD.

Reference

QuantumComputingIndia
