Multimodal Mixture of Experts

Google’s LIMoE scales efficiently while achieving state-of-the-art performance in image classification.

Sparsely-activated Mixture of Experts (MoE) in multimodal contrastive learning

Introduction

LIMoE is a sparsely-activated, multimodal image classifier:

  • 5.6B parameters in total, but only about 675M parameters applied per token
  • Sparsely-activated Mixture of Experts (MoE) model (the routing idea is sketched just after this list)
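
To make “sparsely-activated” concrete, here is a minimal NumPy sketch of token-level top-1 expert routing, the mechanism that lets a model hold billions of parameters while applying only a small slice of them to any single token. This is an illustrative toy under assumed names (`sparse_moe_layer`, `gate_w`, the ReLU experts), not Google’s LIMoE code.

```python
import numpy as np

def sparse_moe_layer(tokens, gate_w, experts, k=1):
    """Route each token to its top-k experts (top-1 here), so only a
    fraction of the layer's parameters is used per token."""
    # tokens: (num_tokens, d_model); gate_w: (d_model, num_experts)
    logits = tokens @ gate_w                      # router score per expert
    chosen = np.argsort(-logits, axis=-1)[:, :k]  # indices of the top-k experts
    out = np.zeros_like(tokens)
    for e, expert in enumerate(experts):
        mask = (chosen == e).any(axis=-1)         # tokens routed to expert e
        if mask.any():
            out[mask] += expert(tokens[mask])     # only these tokens touch expert e
    return out

# Toy example: 4 tokens, 8-dim embeddings, 4 feed-forward experts
rng = np.random.default_rng(0)
d, n_experts = 8, 4
tokens = rng.normal(size=(4, d))
gate_w = rng.normal(size=(d, n_experts))
experts = [
    (lambda w: (lambda x: np.maximum(x @ w, 0.0)))(rng.normal(size=(d, d)))
    for _ in range(n_experts)
]
print(sparse_moe_layer(tokens, gate_w, experts).shape)  # (4, 8)
```

In LIMoE, sparse layers of this kind replace some dense feed-forward blocks inside a single Transformer that is shared by image and text tokens, which is how the full model can grow to 5.6B parameters while keeping the per-token compute close to that of a much smaller dense model.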
