

Introducing UpStride’s Open-Source Image-Classification API

Production-grade image classification made easy


Hello World,

We’re introducing an open-source repository with production-grade code for image classification.

We’re starting off our blog series by sharing with you our first open-source repository: an image-classification API.

Image classification is the best-known task in computer vision, and a necessary step in creating model backbones that can be reused for almost any other task. We thought we could help the community on this subject by sharing some of our internal work.

Why We Created This API

Following the rapid development of deep learning and computer vision, a wealth of information and open-source code is now available. Using resources from GitHub, TensorFlow, Keras, and more, any deep learning practitioner can kick-start an image-classification project without much effort.

However, while developing our own models, we often found ourselves spending too many hours writing Python code: we still had to redevelop some pieces, and sometimes reinvent the wheel, when creating data pipelines, setting up data-augmentation strategies, fitting a neural-net architecture to a specific data set, or optimizing our scripts for training on multiple GPUs.

In fact, we even realized that each of us had internal repositories for the different models and data sets. This meant we didn't have a single reference definition for the different neural-network architectures (e.g., MobileNet, ResNet, VGGNet, NASNet, etc.).

After discussing with other developers and deep learning teams, we quickly understood we weren't the only ones feeling these inefficiencies.

Introducing Our Simple and Modular API for Image Classification

We created a modular and simple open-source repository for image classification to allow deep learning practitioners to save a lot of time while benefiting from state-of-the-art and production-ready pieces of code.

More specifically, we had the following objectives in mind:

  • Provide production-grade code for the main neural-network architectures that could be used as a reference within an engineering team
  • Provide the right tools to create an efficient data pipeline to make the best use of memory
  • Provide a list of off-the-shelf data-augmentation strategies, depending on one’s need
  • Provide several strategies to train models on multiple GPUs

Ultimately, we hope that more people can reach state-of-the-art models without much effort.

How Can You Use It?

Everything is pure TensorFlow, in its latest version. It's stable and reliable, and it can scale up to a data center of GPUs. We provide many preprocessing functions that you can configure. The model-training part is written in TensorFlow 2, where everything is neat and clean. The export part is the same: you can export a model in a few lines of code.

You only have to write a simple configuration file to leverage the API. Then you can specify which model architecture you want, point to your data folder, list the different data-augmentation strategies, and set the hyperparameters.
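As a rough illustration of what such a configuration could contain, here is a minimal sketch. The keys, values, and the `validate` helper below are all hypothetical; the API's actual schema may differ, so treat this as a shape, not a spec.

```python
# Hypothetical configuration sketch -- not the API's actual schema.
config = {
    "model": "ResNet50",                 # which architecture to train
    "data_dir": "/my/dataset/folder",    # where the images live
    "augmentations": ["random_flip", "color_jitter"],
    "hyperparameters": {
        "batch_size": 64,
        "learning_rate": 1e-3,
        "epochs": 90,
    },
}

REQUIRED_KEYS = {"model", "data_dir", "augmentations", "hyperparameters"}

def validate(cfg: dict) -> None:
    """Fail fast if a required top-level key is missing."""
    missing = REQUIRED_KEYS - cfg.keys()
    if missing:
        raise ValueError(f"missing config keys: {sorted(missing)}")

validate(config)
```

Validating the file up front, before any training starts, is what makes a config-driven workflow pleasant: a typo fails in seconds rather than hours into a run.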

On top of this, you can run it with a simple command:

Make run starts the environment

Clean neural-net architectures

Start your project with neural-net architectures such as MobileNet, ResNet, VGGNet, EfficientNet, NASNet, and more, all of which are verified and tested internally.
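One common way to keep a single reference definition per architecture is a name-to-factory registry, so every project builds models through the same entry point. The sketch below is ours only in spirit: the decorator, the names, and the stand-in factories are hypothetical (in a real setting the factories would return `tf.keras.Model` instances).

```python
# Hypothetical architecture registry -- a sketch, not the repository's code.
from typing import Callable, Dict

ARCHITECTURES: Dict[str, Callable[..., dict]] = {}

def register(name: str):
    """Decorator recording one canonical constructor per architecture name."""
    def wrapper(factory: Callable[..., dict]):
        ARCHITECTURES[name.lower()] = factory
        return factory
    return wrapper

@register("mobilenet")
def build_mobilenet(num_classes: int = 1000) -> dict:
    # Stand-in: the real factory would return a tf.keras.Model.
    return {"name": "MobileNet", "num_classes": num_classes}

@register("resnet50")
def build_resnet50(num_classes: int = 1000) -> dict:
    return {"name": "ResNet50", "num_classes": num_classes}

def build(name: str, **kwargs) -> dict:
    """Look up an architecture by name; unknown names fail loudly."""
    try:
        return ARCHITECTURES[name.lower()](**kwargs)
    except KeyError:
        raise ValueError(f"unknown architecture: {name!r}") from None
```

With a registry like this, the configuration file's `model` field becomes a simple lookup key, and adding a new architecture never touches the training code.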

Data-augmentation strategies

Choose from a list of options — ranging from random geometric transformations to color jittering.
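Conceptually, an augmentation strategy is just a chain of transforms applied one after another. The sketch below illustrates that composition pattern with a toy "image" (a list of pixel rows); the function names are hypothetical, and the real API operates on TensorFlow tensors rather than Python lists.

```python
# Sketch of composing augmentation strategies as a pipeline of callables.
import random
from typing import Callable, List

Image = List[List[int]]

def horizontal_flip(image: Image) -> Image:
    """Random geometric transform: mirror each row left-to-right."""
    return [row[::-1] for row in image]

def brightness_jitter(image: Image, delta: int = 10) -> Image:
    """Color jitter: shift every pixel by one random offset in [-delta, delta]."""
    shift = random.randint(-delta, delta)
    return [[p + shift for p in row] for row in image]

def compose(*transforms: Callable[[Image], Image]) -> Callable[[Image], Image]:
    """Chain transforms left to right into a single augmentation function."""
    def pipeline(image: Image) -> Image:
        for t in transforms:
            image = t(image)
        return image
    return pipeline

augment = compose(horizontal_flip, brightness_jitter)
```

Because each strategy is just a callable, the list of augmentations in the configuration file can map directly onto such a chain.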

Data-pipeline creation

Use the latest optimizations from the TensorFlow data pipeline, with TFRecord processing, cache management, and data parallelism already set up.
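To make the two core ideas concrete, here is a stdlib-only sketch of caching (decode once, reuse across epochs) and prefetching (a background thread keeps the next samples ready while the GPU trains). This is purely illustrative; the actual API relies on `tf.data` (`.cache()`, `.prefetch()`, parallel TFRecord readers) rather than hand-rolled threads.

```python
# Stdlib sketch of caching and background prefetching in a data pipeline.
import queue
import threading
from typing import Iterable, Iterator

def cached(samples: Iterable) -> list:
    """Materialize the dataset once so later epochs skip re-decoding."""
    return list(samples)

def prefetch(samples: Iterable, buffer_size: int = 2) -> Iterator:
    """Yield samples while a worker thread keeps a small buffer filled."""
    buf: queue.Queue = queue.Queue(maxsize=buffer_size)
    sentinel = object()  # marks the end of the stream

    def producer():
        for s in samples:
            buf.put(s)   # blocks when the buffer is full
        buf.put(sentinel)

    threading.Thread(target=producer, daemon=True).start()
    while (item := buf.get()) is not sentinel:
        yield item
```

The payoff is overlap: while the consumer (the training step) processes one sample, the producer is already loading the next, so the accelerator is never left waiting on disk I/O.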

Training on multiple GPUs

Make the best use of your computing resources by choosing from a series of options, such as TensorFlow distribution strategies (e.g., the mirrored strategy), asynchronous training, Horovod, and more.
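Conceptually, a synchronous (mirrored-style) strategy splits each global batch across replicas, has every replica compute gradients on its shard, and then averages the gradients before applying one identical weight update everywhere. The sketch below simulates that flow with plain Python standing in for real GPU work; every name in it is a toy.

```python
# Toy simulation of a synchronous, mirrored-style multi-GPU step.
from typing import List

def shard(batch: List[float], num_replicas: int) -> List[List[float]]:
    """Split one global batch into one shard per replica."""
    return [batch[i::num_replicas] for i in range(num_replicas)]

def fake_gradient(samples: List[float]) -> float:
    """Stand-in for a per-replica backward pass (here: mean of the shard)."""
    return sum(samples) / len(samples)

def all_reduce_mean(grads: List[float]) -> float:
    """Average the per-replica gradients, as an all-reduce would."""
    return sum(grads) / len(grads)

batch = [1.0, 2.0, 3.0, 4.0]
grads = [fake_gradient(s) for s in shard(batch, num_replicas=2)]
global_grad = all_reduce_mean(grads)  # every replica applies the same update
```

Asynchronous strategies drop the all-reduce step (each worker updates a parameter server independently), trading gradient staleness for throughput; Horovod implements the synchronous variant efficiently across machines.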


Model export

Quickly export or save your trained model, and start deploying efficiently using TensorRT.


Monitoring and visualization

Quickly collect training logs for TensorBoard, and choose from a list of visualization options.

Our code is also intended to be very modular. If you want to implement new strategies or training methods, such as architecture search, it's relatively easy: the only part that changes is the training part. Instead of a single loop that fits the model, you can run two loops, one that fits the model and one that drives the architecture search. If you want to implement a new training paradigm, that's also possible.
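The "two loops" idea above can be sketched as an outer loop proposing candidate architectures and an inner loop fitting each one. Everything here is a toy stand-in (the `train` function fakes a loss); in the real API the inner loop would actually fit a model.

```python
# Toy sketch of a two-loop setup: inner loop trains, outer loop searches.
def train(architecture: str, epochs: int = 3) -> float:
    """Inner loop: pretend to fit a model and return its final 'loss'."""
    loss = float(len(architecture))  # toy rule: shorter names start lower
    for _ in range(epochs):
        loss *= 0.5                  # each 'epoch' halves the loss
    return loss

def architecture_search(candidates: list) -> str:
    """Outer loop: fit every candidate and keep the one with the best loss."""
    return min(candidates, key=train)

best = architecture_search(["resnet50", "mobilenet", "vgg16"])
```

Because the search is just another loop around the same training entry point, swapping in a smarter search (random, evolutionary, gradient-based) changes only the outer loop.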

Within UpStride, we use the API to gather all of our classification projects. It helps us avoid long-lived Git branches, makes experiments easy to reproduce, and ensures our benchmarks all run on the same source code, so any improvement to a code feature benefits everyone.

Where to Find It

Here: UpStride Classification API.

We hope this API can be helpful to you. Don’t hesitate to share it with your friends and colleagues, leave a star on GitHub, or send us some feedback at

Oh, one more thing: We’ll soon be sharing a second repository for segmentation and object detection — stay tuned!


