Hands-on Machine Learning Training
HaMLeT (Hands-on Machine Learning Training) is a machine learning course offered in the Electrical Engineering M.Sc. study course at RWTH Aachen University, Germany. In 2020, we offered it completely online for the first time and decided to make it publicly available to the machine learning community!
The course starts at the basics of Machine Learning (ML) and progresses towards the state of the art in Deep Learning. After completing the course, students will understand both the theoretical concepts of ML and how they translate to code. To this end, we use publicly available datasets like the Iris dataset and MNIST. Our course mostly covers Computer Vision-related ML. However, concepts learnt in the course can easily be applied to other fields as well.
All materials are available in our GitHub repository. The course is divided into nine sessions with a theoretical phase and a practical phase each. For the theoretical phase, preparatory material can be found for individual sessions in the PreparationSheets folder. It covers a theoretical introduction of the respective topic and further suggestions for deeper research into individual topics. For the practical sessions, Jupyter Notebooks are available for download in the Notebooks folder. All coding exercises can be imported to Google Colab or similar services, which allow executing code without specialized hardware. The Jupyter Notebooks can of course also be run locally, if a GPU is available!
This course is designed for students with various backgrounds at the beginning of a STEM master’s degree. Thus, while a basic understanding of mathematical analysis and linear algebra is recommended, no prior knowledge of ML concepts is required. Further, although the first session offers a jump start into scientific programming in Python, prior programming experience is advantageous.
The course offers nine topics that can be pursued either in order, for those unfamiliar with the contents, or individually, for those who already know the material of the respective previous sessions. Starting with the basics of programming for computer vision and machine learning, the sessions progress towards state-of-the-art Deep Learning methods.
Session 1: Basics The first session covers the basics, including an introduction to programming in Python. Students learn to work with images and perform simple image manipulations like normalization and standardization, which comprise typical pre-processing routines for ML algorithms. They test their understanding of the session by implementing the algorithm for Principal Component Analysis (PCA) step by step with the help of the Iris dataset, which includes interpretable features. In the process, they get a better understanding of some ML concepts as well as PCA.
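The standardization and step-by-step PCA covered in this session might be sketched in NumPy as follows (the random 150×4 matrix stands in for the Iris feature matrix; variable names are illustrative, not taken from the course notebooks):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 4))  # stand-in for the 150x4 Iris feature matrix

# Standardization: zero mean, unit variance per feature
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# PCA step by step: covariance matrix -> eigendecomposition -> projection
cov = np.cov(X_std, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]       # sort descending by explained variance
components = eigvecs[:, order[:2]]      # keep the 2 principal components
X_proj = X_std @ components             # projected data, shape (150, 2)
```

Working through these steps by hand, rather than calling a library routine, is what gives the interpretable-feature Iris data its pedagogical value here.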
Session 2: Classification In the second session, students build on their understanding of the Iris dataset from Session 1 and learn classification. Here, they implement their own classifier from scratch to grasp the concept. The session further focuses on supervised learning techniques like Support Vector Machines and k-nearest neighbors for classification. It also includes simple clustering approaches such as k-means clustering. By the end of the session, the students are expected to have a clear understanding of the concepts of supervised and unsupervised ML techniques together with common classification approaches.
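The library equivalents of the techniques named above might look like this in scikit-learn (the course has students implement a classifier from scratch first; this sketch only shows the off-the-shelf counterparts on Iris):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised: labels are used during training
svm = SVC().fit(X_train, y_train)                            # Support Vector Machine
knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)  # k-nearest neighbors

# Unsupervised: k-means groups the samples without seeing any labels
kmeans = KMeans(n_clusters=3, n_init=10).fit(X)
```

The contrast between the fitted classifiers (which need `y`) and k-means (which never sees it) is exactly the supervised/unsupervised distinction the session targets.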
Session 3: Features The third session covers classical approaches for machine learning on images. The focus is on understanding properties of feature vectors and how to apply them in image classification. At the beginning, students implement the Local Binary Pattern feature extractor. Later, the focus shifts towards understanding the difference between various feature descriptors based on common image-processing libraries.
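A basic 3×3 Local Binary Pattern extractor, of the kind implemented in this session, can be sketched as follows (this is an illustrative minimal version, not the course's implementation):

```python
import numpy as np

def lbp(image):
    """Basic 3x3 Local Binary Pattern: each interior pixel gets an 8-bit
    code built from comparisons with its eight neighbours."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    center = img[1:-1, 1:-1]
    # neighbour offsets, walked clockwise starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # set this bit wherever the neighbour is at least as bright
        codes |= (neighbour >= center).astype(np.uint8) << bit
    return codes
```

Histograms of these codes over an image (or image patches) form the feature vectors whose properties the session then compares against other descriptors.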
Session 4: Evaluation This session aims at teaching the students several important aspects when working on ML and DL algorithms. They will understand the importance of proper data handling, so that they can firstly, avoid biases while training, and secondly, make the most of the available data, using methods like cross-validation. Furthermore, the session also introduces hyperparameter optimization and lastly, methods to evaluate trained models in a fair manner with various methods for assessing classification performance.
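The cross-validation and hyperparameter-optimization ideas from this session might be sketched with scikit-learn like this (an illustrative example, not taken from the course notebooks):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score, GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# 5-fold cross-validation: every sample serves for both training and
# validation, making the most of a limited dataset
scores = cross_val_score(SVC(C=1.0), X, y, cv=5)

# Hyperparameter optimization: exhaustive search over a small grid,
# with each candidate scored by cross-validation
search = GridSearchCV(SVC(), param_grid={"C": [0.1, 1.0, 10.0]}, cv=5).fit(X, y)
```

Keeping the final test set out of both loops is the "proper data handling" point: hyperparameters tuned on the test set would bias the reported performance.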
Session 5: Neural Networks Here, students implement a neural network from scratch. During the session, the basic ideas behind deep learning, such as hidden layers, neural activations and error functions, are introduced.
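The forward pass of such a from-scratch network might look like this in NumPy (layer sizes and names are illustrative, not those of the course notebook):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # hidden layer: 4 -> 8 units
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)   # output layer: 8 -> 3 classes

def relu(x):
    return np.maximum(x, 0.0)                   # a common neural activation

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(p, y):
    # error function: negative log-likelihood of the correct class
    return -np.log(p[np.arange(len(y)), y]).mean()

x = rng.normal(size=(5, 4))     # batch of 5 samples, 4 features each
h = relu(x @ W1 + b1)           # hidden-layer activations
p = softmax(h @ W2 + b2)        # per-class probabilities
```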
Session 6: Back-Propagation After the introduction of the basic ideas of neural networks, the students will program the backward pass of neural networks on their own. Further, practical concepts such as modularity of layers and the structure of 4D-tensors are established. Together with Session 5, the students will have implemented a simple backbone of a modular, high-level deep learning framework by the end of Session 6. We believe that this leads to a better understanding of current deep learning frameworks such as PyTorch.
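The modular layer interface can be sketched as follows: each layer caches its input in `forward()` and returns the gradient with respect to its input from `backward()`, so layers can be chained just like in PyTorch (a minimal illustrative sketch, not the course's implementation):

```python
import numpy as np

class Linear:
    """A fully connected layer with a modular forward/backward interface."""

    def __init__(self, n_in, n_out, rng):
        self.W = rng.normal(scale=0.1, size=(n_in, n_out))
        self.b = np.zeros(n_out)

    def forward(self, x):
        self.x = x                      # cache input for the backward pass
        return x @ self.W + self.b

    def backward(self, grad_out):
        self.dW = self.x.T @ grad_out   # gradient w.r.t. the weights
        self.db = grad_out.sum(axis=0)  # gradient w.r.t. the bias
        return grad_out @ self.W.T      # gradient passed to the previous layer

rng = np.random.default_rng(0)
layer = Linear(4, 2, rng)
out = layer.forward(rng.normal(size=(3, 4)))
grad_in = layer.backward(np.ones_like(out))   # upstream gradient flows back
```

Because every layer exposes the same two methods, a whole network reduces to calling `forward` left to right and `backward` right to left, which is precisely the backbone the two sessions build up.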
Session 7: Deep Learning Going through session seven, students will get an introduction to PyTorch. They will implement a slightly adapted version of VGG16 on their own, and train it on the large PascalVOC dataset. This is the first computationally intensive notebook, and the use of high-performance hardware is recommended. If you do not have access to a GPU, a service such as Google Colab can be used.
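In PyTorch, a VGG-style model is a stack of convolution/ReLU blocks with pooling, followed by a classifier head; a heavily downsized illustrative sketch (this `TinyVGG` is a stand-in for demonstration, not the adapted VGG16 from the notebook) might look like:

```python
import torch
import torch.nn as nn

class TinyVGG(nn.Module):
    """A downsized VGG-style network: conv/ReLU blocks, pooling, linear head."""

    def __init__(self, num_classes=20):  # PascalVOC has 20 object classes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                      # halve spatial resolution
        )
        self.classifier = nn.Linear(16 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = TinyVGG()
out = model(torch.randn(2, 3, 32, 32))  # batch of two 32x32 RGB images
```

The full VGG16 follows the same pattern, just with many more such blocks, which is why training it is computationally intensive.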
Session 8: Segmentation The next session continues towards image segmentation. Different concepts and state-of-the-art neural architectures for semantic segmentation with deep learning are implemented and discussed.
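Semantic segmentation can be viewed as per-pixel classification; a minimal fully convolutional sketch in PyTorch (purely illustrative, not one of the state-of-the-art architectures from the session) makes the idea concrete:

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                    # encoder: downsample by 2
    nn.Conv2d(8, 5, kernel_size=1),     # 1x1 conv: per-pixel class scores
    nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
)
logits = net(torch.randn(1, 3, 64, 64))  # one score map per class, per pixel
```

The output has one channel per class at full image resolution; real architectures refine this encoder-decoder pattern with skip connections and deeper feature extractors.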
Session 9: GANs Finally, the last session covers Generative Adversarial Networks (GANs). The classical “Goodfellow-like” GAN is treated, and deep convolutional GANs (DCGANs) are introduced as well. At the end, students can generate photorealistic images from random vectors! With this, the students will have understood the most important topics of Computer Vision with Deep Learning, and are ready to start their own research project.
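The adversarial setup can be sketched compactly in PyTorch: a generator maps random vectors to images, and a discriminator scores them as real or fake (sizes and names below are illustrative, not from the course notebook):

```python
import torch
import torch.nn as nn

latent_dim = 16
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                  nn.Linear(64, 28 * 28), nn.Tanh())       # generator
D = nn.Sequential(nn.Linear(28 * 28, 64), nn.LeakyReLU(0.2),
                  nn.Linear(64, 1), nn.Sigmoid())          # discriminator

z = torch.randn(4, latent_dim)   # batch of random latent vectors
fake = G(z)                      # generated 28x28 "images" (flattened)
score = D(fake)                  # discriminator's probability of "real"
# Training alternates: D learns to tell real images from G(z),
# while G learns to produce samples that fool D.
```

A DCGAN replaces these linear layers with (transposed) convolutions, which is what makes the generated images spatially coherent.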
How to get started
- Download the materials from our public GitHub repository and go to the session you want to start with.
- Take a look at the preparation sheet, read and understand the literature provided there.
- Start the Jupyter notebook (either locally on your computer, on Google Colab or with a similar service) and you’re ready to go!
- How much time do I need to go through one of the topics? Based on a survey from our students, the typical preparation time is 5 to 7 hours, and it takes about 3 hours to complete the practical training.
- How do I use Google Colab in combination with your materials? Select the notebook you want to work on, (e.g., https://github.com/rwth-lfb/MLL/blob/master/Notebooks/7_DeepLearning/7_DeepLearning_students.ipynb for the Deep Learning notebook), go to https://colab.research.google.com/, click on File, then Upload notebook, then GitHub, paste the link and click on the search button. You’re done!
More questions? Reach out to the HaMLeT-Team at firstname.lastname@example.org!