ModeLIB — Handy CNN Library for Image Classification

Neil Wu
Published in LSC PSD
Apr 3, 2020


All you need is a proper library

Everyone who is interested in machine learning must have walked through all these state-of-the-art papers: LeNet, AlexNet, GoogLeNet, VGG16, NIN, Inception, ResNet, DenseNet, Xception, ResNeXt, MobileNet, MnasNet, EfficientNet, et cetera.

In my own experience, none of them rings a bell on a first read of the paper. The next step I took was to look for source code that I could interpret and reproduce. Although deep learning frameworks such as PyTorch or Keras provide default models, the source code of those models favors compactness over legibility. That's when I realized there was no single repo containing all those models in the same coding style with high legibility.

We present ModeLIB, a single library for every SoTA convolutional neural network model, written in the same coding style with high legibility.

ModeLIB: https://github.com/lsc-psd/modelib-classification

Please give us a star if you like it. It means a lot to us.

General Info (2020/4/2)

Framework: Keras / PyTorch
Available Models:
VGG16 / InceptionV3 / ResNet / DenseNet / Xception / ResNeXt / MobileNet

ModeLIB is designed as a service and as a tool at the same time. You can use train.py / test.py for instant service, or easily import the models as a tool. We also try hard to simplify the code structure of every model to maximize flexibility, so that everyone can easily adjust the structure within.
Feel free to grab any code inside; it's WTFPL-licensed.

ModeLIB is currently ready for image classification tasks only. Other tasks, such as a ModeLIB for object detection, are in the plan; we will update when they are done.

Usage

  1. Clone or download the repo from https://github.com/lsc-psd/modelib-classification
  2. Pip install the required packages
  3. Prepare the data
  4. Set up the config file
  5. Train / Valid / Test

Data preparation

When ModeLIB is used as a service, we want it to be as easy as possible. So we abandoned the common way of preparing labeled data, which is tons of images plus an annotation file usually named labels.txt. The structure we adopt is simply folders and images, as shown below:

Structure of how ModeLIB loads data
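The original figure is not reproduced here, but a minimal sketch of that class-per-folder layout, the same convention that torchvision's ImageFolder and Keras's flow_from_directory expect, might look like this (the folder and class names below are hypothetical, not ModeLIB's required ones):

# Hypothetical layout: one sub-folder per class, images inside.
#
# data/
#   train/
#     cats/  cat_0001.jpg, cat_0002.jpg, ...
#     dogs/  dog_0001.jpg, dog_0002.jpg, ...
#   val/
#     cats/  ...
#     dogs/  ...
#
# A layout like this can be loaded directly, e.g. with torchvision:
from torchvision import datasets, transforms

train_set = datasets.ImageFolder(
    "data/train",
    transform=transforms.Compose([
        transforms.Resize((224, 224)),  # match img_height / img_width
        transforms.ToTensor(),
    ]),
)
print(train_set.classes)  # class names inferred from the folder names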

Config setting

Instead of adding endless arguments on the command line (which is somewhat annoying), we adopt a config parser for a better parameter-setting experience.
The config file includes model_name, the training folder path, the validation/test folder path, checkpoint_path, img_height/width, batch_size, num_epochs, and learning_rates.

Sample config file
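The sample image is not reproduced here; as a sketch only, an INI-style file built from the fields listed above (the section name, keys, and values are my assumptions, not the repo's actual config) could be written and parsed like this:

import configparser

# Hypothetical config assembled from the fields named above; the real
# section and key names are defined in the ModeLIB repo.
SAMPLE = """
[settings]
model_name = ResNet
train_dir = ./data/train
val_dir = ./data/val
test_dir = ./data/test
checkpoint_path = ./checkpoints
img_height = 224
img_width = 224
batch_size = 32
num_epochs = 50
learning_rates = 0.001
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE)
print(config["settings"]["model_name"])         # -> ResNet
print(config["settings"].getint("batch_size"))  # -> 32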

As a SERVICE

Simply run python train.py or python test.py. It is as easy as it seems!

As a TOOL

Import them with from .models.SOME_MODEL import SOME_MODEL.
If you are looking for a reference to copy & paste, you know better than I do.
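As a sketch, importing and building one of the available models from the repo root might look like the following; the module path follows the pattern quoted above, but the constructor arguments are an assumption, so check the model file for the real signature:

# Hypothetical tool usage; ResNet is one of the available models listed
# above, but its constructor arguments here are assumptions.
from models.ResNet import ResNet

model = ResNet()  # build the network, then plug it into your own
                  # training loop or reuse ModeLIB's train.py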

We're WTFPL-licensed, so feel free to take anything; just give us a star in return.

Feel free to open any issue; we'll be glad to answer or improve.

if you_like(this_article):
    please(CLAPS)
    follow(LSC_PSD)
# Thanks :)
