Comments on Qiskit Summer School 2021 — Quantum Machine Learning
The Qiskit Summer School is an IBM-sponsored event to promote quantum computing. Qiskit is IBM's open-source framework for programming quantum computers. This was the second edition of the event, and the theme was Quantum Machine Learning.
A first curiosity: this year there were 5,000 students (last year, 2,000), and the spots ran out within minutes of registration opening. Many people complained that they logged in exactly at the opening time and the spots were already gone. This shows the growing interest in the topic, and the privilege of attending the classes.
There were two weeks of classes and five rounds of labs, each with a variable number of graded exercises.
A quick summary of week 1's content:
- Basic circuits
- Simple algorithms
- Noise in quantum computers
- Classical machine learning
- Quantum classifiers
More than half of week 1 was basic content; it wasn't until the very end that it really started to get fun. The content included classical machine learning (neural networks, backpropagation), but the focus was on Support Vector Machines, a somewhat different classification technique.
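As a rough illustration of what the "basic circuits" part covers, here is a minimal sketch of my own (not course material) that builds the classic Bell state with plain Python instead of Qiskit, tracking the two-qubit statevector directly:

```python
import math

s = math.sqrt(0.5)

def h_on_q0(state):
    # Hadamard on qubit 0: mixes each |q1 0> / |q1 1> amplitude pair.
    # Basis order is |q1 q0>: |00>, |01>, |10>, |11>.
    a, b, c, d = state
    return [s * (a + b), s * (a - b), s * (c + d), s * (c - d)]

def cnot_q0_to_q1(state):
    # CNOT with control q0 and target q1: swaps the |01> and |11> amplitudes.
    a, b, c, d = state
    return [a, d, c, b]

# Start in |00>, apply H on q0, then CNOT: the standard Bell-state recipe.
bell = cnot_q0_to_q1(h_on_q0([1.0, 0.0, 0.0, 0.0]))
# bell is approximately [0.707, 0, 0, 0.707]: the state (|00> + |11>)/sqrt(2)
```

The same two gates in Qiskit are a two-line circuit; the point here is only to show that nothing mysterious happens underneath.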
Finally, an introduction to QAOA, an optimization technique that mixes quantum and classical computing: the problem is represented by a parameterized quantum circuit, but the circuit's parameters are optimized with classical methods.
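That hybrid loop can be sketched in a few lines. The toy below is my own illustration (not from the labs): a one-parameter expectation value stands in for the quantum circuit evaluation, and a classical gradient-descent loop drives it, using the parameter-shift rule to get gradients from two extra "circuit" evaluations:

```python
import math

def expectation(theta):
    # Toy stand-in for a quantum circuit evaluation: <Z> after RY(theta)
    # on |0> is cos(theta). On real hardware this number would come from
    # sampling the circuit many times.
    return math.cos(theta)

def parameter_shift_grad(theta, shift=math.pi / 2):
    # Parameter-shift rule: the exact gradient from two shifted evaluations.
    return 0.5 * (expectation(theta + shift) - expectation(theta - shift))

def optimize(theta=0.1, lr=0.4, steps=100):
    # The classical half of the loop: plain gradient descent on theta.
    for _ in range(steps):
        theta -= lr * parameter_shift_grad(theta)
    return theta

theta_opt = optimize()
# expectation(theta_opt) is close to -1, the minimum, reached at theta = pi
```

In a real QAOA run the circuit has many parameters and the expectation encodes a cost Hamiltonian, but the quantum-evaluate / classically-update structure is exactly this.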
The basic circuits lab was easy; the SVM and QAOA labs took a little more work, but nothing impossible. I cannot divulge my answers at this time; I will wait for IBM to officially release the course content to the general public.
Week 2's content:
- Linear classifiers
- Quantum Kernel
- Quantum Circuit Training
- Hardware and noise
- Advanced capacity and circuits
- Closing and discussions on future directions
The best part of the course was week 2: state-of-the-art knowledge, with content whose first publications are less than three years old.
Support Vector Machines have a limitation: they are linear. There is a trick to work around this: map the same data into a higher-dimensional space with a non-linear feature map. A kernel generalizes this idea: it computes the inner product between two mapped points directly from the raw data, without ever constructing the feature map explicitly.
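To make the feature-map/kernel relationship concrete, here is a small classical sketch of mine (not course code): an explicit degree-2 polynomial feature map on one side, and on the other the equivalent kernel evaluated directly on the raw inputs:

```python
import math

def feature_map(x):
    # Explicit degree-2 polynomial feature map for 2-d input.
    x1, x2 = x
    return (x1 * x1, math.sqrt(2) * x1 * x2, x2 * x2)

def kernel(x, y):
    # Equivalent kernel: the same inner product in feature space,
    # computed from the raw inputs without building the map.
    return (x[0] * y[0] + x[1] * y[1]) ** 2

x, y = (1.0, 2.0), (3.0, 0.5)
explicit = sum(a * b for a, b in zip(feature_map(x), feature_map(y)))
# explicit equals kernel(x, y): the "kernel trick"
```

A quantum kernel replaces `feature_map` with a quantum circuit that encodes the data into a state, and the inner product is estimated on the quantum computer.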
There may be an exponential quantum advantage if the kernel function is hard to compute classically but simple to compute on a quantum computer, though this is not always guaranteed either.
Loading classical data into quantum states can have exponential cost, which can wipe out any quantum advantage. One possible solution is to use an approximate data-loading scheme, sacrificing a small part of the information.
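A toy example of where the cost comes from, under the common amplitude-encoding assumption (my sketch, not course material): n qubits hold 2^n amplitudes, so even writing down the target state already involves exponentially many classical values, and preparing it exactly can require circuits whose size grows with 2^n:

```python
import math

def amplitude_encode(data):
    # Amplitude encoding: a classical vector of length 2^n becomes the
    # normalized statevector of n qubits. The normalization is cheap;
    # the exact state-preparation circuit, in general, is not.
    norm = math.sqrt(sum(x * x for x in data))
    return [x / norm for x in data]

# 2 qubits hold 4 values; n qubits would hold 2**n.
state = amplitude_encode([3.0, 0.0, 0.0, 4.0])
# state == [0.6, 0.0, 0.0, 0.8], a valid (unit-norm) statevector
```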
A researcher who really impressed me was Amira Abbas. She taught three classes or more. Very didactic, young (like everyone there), and with real mastery of the content; for those interested in the topic, she is worth following.
A very interesting discussion was that of capacity. In classical machine learning, we always have the underfitting (too weak a model) versus overfitting (too many parameters, which causes loss of generalization) dilemma. I always thought this depended only on the size of the model, but it has been shown, through a random-label experiment, that the data also play a role: a large network can perfectly fit labels assigned at random, yet it generalizes no better than chance. Since then, there have been several proposals to measure capacity not just in terms of model size, but in terms of specific characteristics of the data as well.
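The spirit of the random-label experiment can be mimicked classically in a few lines (my sketch, not the original experiment): give a model as many parameters as data points, here a Lagrange interpolating polynomial, and it drives training error on purely random labels to zero, which is exactly why capacity cannot be read off model size alone:

```python
import random

random.seed(0)
xs = [i / 10 for i in range(8)]
labels = [random.choice([0.0, 1.0]) for _ in xs]  # random labels: pure noise

def lagrange_fit(xs, ys):
    # A model with one parameter per data point can fit ANY labels exactly.
    def predict(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return predict

model = lagrange_fit(xs, labels)
train_error = max(abs(model(x) - y) for x, y in zip(xs, labels))
# train_error is ~0: perfect memorization of noise, worthless generalization
```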
About the labs: in general, they weren't difficult. It was basically a matter of following the script given by the teachers; a few lines of code, and most were done. I have the impression that the Summer School focuses more on the lectures and theoretical content, and less on the labs themselves. The really difficult code IBM saves for its separate Quantum Challenge events.
I thank IBM for the initiative and the highest-level content made available, and congratulate it on its leadership in the topic.
In the feedback form, they asked for a photo to help publicize the event, and this was the one I sent.