Barlow Twins: Self-Supervised Learning Model Explained

Konstantinos Poulinakis
8 min read · Nov 14, 2022

An explanation of the paper “Barlow Twins: Self-Supervised Learning via Redundancy Reduction”. Barlow Twins is an architecture for self-supervised learning (SSL). The information presented in this article is derived from a thorough study of the Barlow Twins paper and the official Barlow Twins GitHub repository code.

Photo by Lijo Joseph on Unsplash

Contents

  1. Introduction
  2. The intuition behind Barlow Twins
  3. Barlow Twins Loss Function ( Intuition + Math )
  4. Implementation of Barlow Twins
  5. Python Code for Barlow Twins
  6. Advantages of Barlow Twins

Introduction

The Barlow Twins architecture was first presented in the paper “Barlow Twins: Self-Supervised Learning via Redundancy Reduction” in June 2021. Barlow Twins takes its name from the neuroscientist H. Barlow, whose 1961 work on redundancy reduction inspired the architecture. It is a state-of-the-art SSL method that relies on an intuitive, simple-to-implement idea.

The goal of recent self-supervised learning research is to learn embeddings that are invariant to distortions of the input sample. This is achieved by applying different augmentations to an input sample and “driving” the representations of the resulting views as close together as possible. But a recurrent issue is that there are…
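The two-view setup described above can be sketched as follows. This is a minimal toy illustration, not the paper's pipeline: it uses only a random crop and a horizontal flip on a NumPy array, whereas the actual Barlow Twins augmentations also include color jitter, grayscale conversion, Gaussian blur, and solarization.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, crop=24):
    # Random crop to crop x crop, then a random horizontal flip.
    # A toy stand-in for the heavier augmentation pipeline used
    # in the actual paper (jitter, blur, solarization, ...).
    h, w = img.shape[:2]
    y = rng.integers(0, h - crop + 1)
    x = rng.integers(0, w - crop + 1)
    view = img[y:y + crop, x:x + crop]
    if rng.random() < 0.5:
        view = view[:, ::-1]  # horizontal flip
    return view

image = rng.random((32, 32, 3))                  # toy "input sample"
view_a, view_b = augment(image), augment(image)  # two distorted views
# A joint-embedding method then drives the network's representations
# of view_a and view_b toward each other.
print(view_a.shape, view_b.shape)  # both (24, 24, 3)
```

Each call to `augment` samples its own crop position and flip, so the two views are different distortions of the same underlying image, which is exactly the input pair an SSL objective like Barlow Twins operates on.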
