Transfer Learning in QML — A Beginner’s Overview

Anjanakrishnan
3 min read · Sep 26, 2023


Day 26 — Quantum30 Challenge 2.0

Transfer learning in the context of neural networks is a powerful technique that can be likened to leveraging knowledge and skills acquired from one domain to excel in a related but distinct domain. To put it simply, it’s analogous to learning how to cook one type of dish and then using that culinary expertise to prepare a different dish.

In the realm of neural networks, transfer learning entails starting with a pre-trained model. This model, typically a neural network, has already been trained on a substantial dataset for a specific task.

Instead of crafting a new model from scratch for each distinct task, transfer learning allows us to adapt and fine-tune the pre-trained model for the new task at hand. This adaptation involves modifying the pre-trained model (A modified to A’), often by removing some of its final layers, and then connecting it to a new trainable network (B) that aligns with the characteristics of the new dataset.

https://pennylane.ai/_images/transfer_learning_general.png

The different implementations and the selected approach

Depending on the nature of A and B, one can have different implementations of transfer learning:

- Classical-to-classical (CC): both networks are classical.
- Classical-to-quantum (CQ): a pre-trained classical network feeds a trainable quantum circuit.
- Quantum-to-classical (QC): a pre-trained quantum circuit feeds a trainable classical network.
- Quantum-to-quantum (QQ): both networks are quantum.

Let us focus on the CQ model. Here, we use a pre-trained classical model to help solve a new task using a quantum model.

Let’s imagine we have a pre-trained model (A) that has become exceptionally proficient at recognizing diverse objects in images, for example the ResNet18 network trained on the vast ImageNet dataset. To make it suitable for a different but related task, we surgically remove the part of the model responsible for making the final classification decision. What remains is a pre-processing block capable of transforming high-resolution images into abstract features.

Now, to bridge classical and quantum computing approaches, we link this pre-processing block with a quantum circuit. The pre-processing block takes care of feature extraction from the images, while the quantum circuit plays a crucial role in making the final decision based on these extracted features. To ensure the system can effectively recognize new data, it undergoes training using a dataset specific to the new task. For example, if the objective is insect classification, the Hymenoptera dataset, containing images of ants and bees, could be employed for training purposes.

https://pennylane.ai/_images/transfer_learning_c2q.png

This entire process can be visualized as a data processing pipeline, where each step in the pipeline plays a distinct role in the transformation of raw images into precise classifications. Transfer learning, as a whole, enables us to harness knowledge acquired in one domain, adapt it using both classical and quantum components, and efficiently tackle fresh challenges that may emerge in related domains.

Reference

QuantumComputingIndia
