Boost image classifier performance (Part 1) — mixup augmentation with code

Devi Prasad Khatua
2 min read · May 26, 2020


This is part of a series of extremely short tips and tricks to boost the performance of any image classifier using a plug-and-play approach.

Introduction

With the increase in the complexity of deep neural network architectures, it is often very easy to induce memorization (which leads to overfitting) and high sensitivity to adversarial examples.

Simple data augmentations like rotation, flipping, distortion, and obstruction have been shown to improve generalization performance (post coming soon). The intuition is simple: adding more perspective variations of the same image makes it harder for the model to memorize.

Different augmentations

Mixup augmentation is another technique that, in simple terms, "blends two images". It reduces memorization of corrupt labels, increases robustness to adversarial examples, and stabilizes the training of generative adversarial networks.

Mixup is quite simple and data-agnostic, so it can be used as a plug-and-play routine in most cases.

Algorithm/Pseudo-code

blended_img = alpha*img_a + (1 − alpha)*img_b
blended_label = alpha*y_a + (1 - alpha)*y_b
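The pseudo-code above uses a fixed blending coefficient, but in the original mixup paper the coefficient is drawn fresh from a Beta distribution for every pair of examples. A minimal sketch of that sampling step, assuming NumPy (the variable names here mirror the Python code further below and are illustrative):

import numpy as np

# In the mixup paper, the blending coefficient is drawn from Beta(alpha, alpha)
# for every pair; alpha is a hyperparameter (small values like 0.1-0.4 are common).
alpha = 0.2
lambda_ = np.random.beta(alpha, alpha)
print(f"blend {lambda_:.2f} of image A with {1 - lambda_:.2f} of image B")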

Mixup in Action

Mixup works by blending two images: a fraction alpha is taken from image_1 and (1 − alpha) from image_2.

Python Code

Code for mixup of two images x1 and x2 with one-hot encoded labels y1 and y2, respectively:

def mixup(x1, x2, y1, y2, lambda_=0.5):
    # Blend the two images and their one-hot labels with the same coefficient.
    x = lambda_ * x1 + (1 - lambda_) * x2
    y = lambda_ * y1 + (1 - lambda_) * y2
    return x, y
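
As a quick sanity check, here is a small usage example with dummy NumPy arrays standing in for two 32x32 RGB images and their one-hot labels; the shapes and class indices are illustrative assumptions, not from the original post:

import numpy as np

# Dummy images and one-hot labels (10 classes), purely for illustration.
x1, x2 = np.random.rand(32, 32, 3), np.random.rand(32, 32, 3)
y1 = np.eye(10)[3]   # class 3
y2 = np.eye(10)[7]   # class 7

x, y = mixup(x1, x2, y1, y2, lambda_=0.3)
print(x.shape)  # (32, 32, 3)
print(y)        # 0.3 at index 3, 0.7 at index 7, zeros elsewhere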

Demo notebook

PyTorch pipeline

Below, I have compiled the code to train an end-to-end image classifier on the CIFAR-10 dataset using mixup augmentation, and you can run it directly on Colab!
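
The notebook itself is not reproduced here, but a minimal sketch of what a batch-level mixup training step on CIFAR-10 could look like in PyTorch follows; the model choice, optimizer, and hyperparameters are illustrative assumptions, not the exact settings from the notebook:

import numpy as np
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

# Illustrative setup: CIFAR-10 loader and an off-the-shelf model.
transform = T.Compose([T.ToTensor()])
train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                         download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)

model = torchvision.models.resnet18(num_classes=10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def mixup_batch(x, y, alpha=0.2):
    """Blend a batch with a shuffled copy of itself; coefficient ~ Beta(alpha, alpha)."""
    lam = np.random.beta(alpha, alpha)
    index = torch.randperm(x.size(0))
    mixed_x = lam * x + (1 - lam) * x[index]
    return mixed_x, y, y[index], lam

for x, y in train_loader:
    mixed_x, y_a, y_b, lam = mixup_batch(x, y)
    optimizer.zero_grad()
    outputs = model(mixed_x)
    # Equivalent to cross-entropy against the blended one-hot label.
    loss = lam * criterion(outputs, y_a) + (1 - lam) * criterion(outputs, y_b)
    loss.backward()
    optimizer.step()
    break  # one step shown for brevity

Blending each batch with a shuffled copy of itself avoids loading image pairs twice, and combining the two cross-entropy terms with lam is equivalent to training against the blended label from the pseudo-code above.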
