When thinking of data science and machine learning, two programming languages immediately come to mind: Python and R. Both languages support every common machine learning algorithm, preprocessing technique, and much more, and can therefore be used for almost any machine learning problem.

However, sometimes an individual or company can’t or doesn’t want to use Python or R. There are many possible reasons for this, including already having a codebase in another language or having no experience with Python or R. One of the most popular languages today is C#, which is used for many applications…


Increase your accuracy by combining model outputs

Photo by Pankaj Patel on Unsplash

Nobody can know everything, but with help, we can overcome every obstacle. That’s exactly the idea behind ensemble learning. Even though individual models might produce weak results, combined they might be unbeatable.

And ensemble models are exactly that: models consisting of a combination of base models. The only difference between them is the way they combine their base models, which can range from simple methods like averaging or max voting to more complex ones like boosting or stacking.
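
To make the idea concrete, here is a minimal max-voting sketch using scikit-learn’s VotingClassifier; the dataset and base models are just placeholders for illustration, not necessarily what the full article uses.

```python
# Minimal sketch of max voting with scikit-learn (illustrative placeholders).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Three base models whose predictions are combined by majority vote.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=100)),
        ("knn", KNeighborsClassifier()),
    ],
    voting="hard",  # "hard" = max voting; "soft" averages predicted probabilities
)
ensemble.fit(X_train, y_train)
print(ensemble.score(X_test, y_test))
```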

Ensemble learning techniques have seen a huge jump in popularity in recent years. This is because they can help you build a really robust…


This article is a continuation of my series of articles on Model Interpretability and Explainable Artificial Intelligence. If you haven’t read the first two articles, I highly recommend you do so.

The first article of the series, ‘Introduction to Machine Learning Model Interpretation’, covers the basics of Model Interpretation. The second article, ‘Hands-on Global Model Interpretation’, goes over the details of global model interpretation and how to apply it to a real-world problem using Python.

In this article, we will pick up where we left off by diving into local model interpretation. First, we will take a look…
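
As a small preview, here is a hedged sketch of one popular local interpretation technique, LIME, applied to a placeholder model and dataset; the article itself may use different tools or examples.

```python
# Hedged sketch of local interpretation with LIME (pip install lime).
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier().fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain a single prediction: which features pushed the model towards its output?
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(explanation.as_list())
```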


What features are important and why

Photo by Bram Naus on Unsplash

This article is a continuation of my series of articles on Model Interpretability and Explainable Artificial Intelligence. If you haven’t already, I highly recommend checking out the first article of this series, ‘Introduction to Machine Learning Model Interpretation’, which covers the basics of model interpretability, ranging from what model interpretability is and why we need it to the underlying distinctions of model interpretation.

In this article, we will pick up where we left off by diving deeper into the ins and outs of global model interpretation. First, we will quickly recap what global model interpretation is and why…
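
As a quick taste of what global model interpretation looks like in code, here is a minimal permutation feature importance sketch with scikit-learn; the model and dataset are placeholders, not necessarily the article’s exact example.

```python
# Minimal sketch of permutation feature importance (a global interpretation method).
from sklearn.datasets import load_wine
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_wine()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=42)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Shuffle each feature on held-out data and measure how much the score drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=42)
for name, importance in sorted(zip(data.feature_names, result.importances_mean),
                               key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {importance:.4f}")
```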


Figure 1: ML Studio Example (Link)

In the last weeks, months, and even years, a lot of tools have arisen that promise to make the field of data science more accessible. This isn’t an easy task, considering the complexity of most parts of the data science and machine learning pipeline. Nonetheless, many libraries and tools, including Keras, FastAI, and Weka, have made it significantly easier to create a data science project by providing an easy-to-use, high-level interface and a lot of prebuilt components.

In the last few days, I tried a new cloud product promising to let non-coders create their own machine…


Speeding up machine learning models in a small form factor

Coral USB Accelerator

Last year at the Google Next conference, Google announced that it was building two new hardware products around its Edge TPUs. Their purpose is to allow edge devices like the Raspberry Pi or other microcontrollers to exploit the power of artificial intelligence applications, such as image classification and object detection, by running inference of pre-trained Tensorflow Lite models locally on their own hardware. This is not only more secure than having a cloud server that serves machine learning requests, but it can also reduce latency quite a bit.
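
As an illustration of that workflow, here is a hedged sketch of running an Edge-TPU-compiled TensorFlow Lite model with the tflite_runtime package; the model path and input are placeholders, and the Edge TPU runtime must be installed for the delegate to load.

```python
# Hedged sketch: TensorFlow Lite inference on the Edge TPU via tflite_runtime.
import numpy as np
import tflite_runtime.interpreter as tflite

# Load the model and attach the Edge TPU delegate so inference runs on the accelerator.
interpreter = tflite.Interpreter(
    model_path="model_edgetpu.tflite",  # hypothetical path to a compiled model
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input matching the model's expected shape and dtype (e.g. a 224x224 RGB image).
dummy_input = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]))
```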

The Coral USB Accelerator

The Coral USB Accelerator comes in at 65x30x8mm, making…


Figure 1: Photo by Christopher Gower on Unsplash

Regardless of what problem you are solving, an interpretable model will always be preferred, because both the end user and your boss or co-workers can understand what your model is really doing. Model interpretability also helps you debug your model by giving you a chance to see what the model really thinks is important.

Furthermore, you can use interpretable models to combat the common belief that machine learning algorithms are black boxes and that we humans aren’t capable of gaining any insight into how they work.

This article is the first in my series of articles aimed at explaining the different methods of…


Productionizing your ML model using the Flask web framework

Figure 1: Photo by Jared Brashier on Unsplash

Even though pushing your machine learning model to production is one of the most important steps of building a machine learning application, there aren’t many tutorials out there showing how to do so, especially not for smaller machine learning or deep learning libraries like Uber’s Ludwig.

Therefore, in this article, I will go over how to productionize a Ludwig model by building a REST API as well as a normal website using the Flask micro web framework. The same process can be applied to almost any other machine learning framework with only a few small changes.
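
To give a rough idea of the approach, here is a minimal Flask endpoint sketch. The Ludwig calls follow the documented LudwigModel.load/predict pattern, but the exact predict signature differs between Ludwig versions, and the model directory is a placeholder, so treat this as an outline rather than the article’s exact code.

```python
# Hedged sketch of serving a Ludwig model behind a Flask REST endpoint.
import pandas as pd
from flask import Flask, request
from ludwig.api import LudwigModel

app = Flask(__name__)
# Hypothetical directory of a previously trained Ludwig model.
model = LudwigModel.load("results/experiment_run/model")

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON object whose keys match the model's input features.
    input_df = pd.DataFrame([request.get_json()])
    # Note: predict(data_df=...) matches older Ludwig releases; newer versions
    # use predict(dataset=...) and return a (predictions, output_dir) tuple.
    predictions = model.predict(data_df=input_df)
    return predictions.to_json(orient="records")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```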

Both the website and the RESTful API we will…


Create deep learning models without writing code

Figure 1: Ludwig Logo (Source)

Uber’s AI Lab continues its open-sourcing of deep learning frameworks with its newest release, Ludwig, a toolbox built on top of TensorFlow that allows users to create and train models without writing code.

Finding the right architecture and hyperparameters for your model is a difficult aspect of the deep learning pipeline. As a data scientist, you can spend hours experimenting with different hyperparameters and architectures to find the perfect fit for your specific problem. …


Creating your own object detector using the Tensorflow Object Detection API

Figure 1: Detecting microcontrollers

Update 27.07.2020: The Tensorflow Object Detection API now officially supports Tensorflow 2. You can find a Tensorflow 2 version of this article here.

Object detection is the craft of detecting instances of a certain class, like animals, humans, and many more, in an image or video.

The Tensorflow Object Detection API makes it easy to detect objects by using pretrained object detection models, as explained in my last article.

In this article, we will go through the process of training your own object detector for whichever objects you like. …
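
As a small preview of what inference with a trained detector looks like, here is a hedged TensorFlow 2 SavedModel sketch; the export path and the input image are placeholders, and the full article covers the training process itself.

```python
# Hedged sketch: running a detection model exported as a TF2 SavedModel.
import numpy as np
import tensorflow as tf

detect_fn = tf.saved_model.load("exported_model/saved_model")  # hypothetical export path

# In practice you would load a real image; here a dummy uint8 RGB image is used.
image_np = np.zeros((480, 640, 3), dtype=np.uint8)
input_tensor = tf.convert_to_tensor(image_np)[tf.newaxis, ...]

detections = detect_fn(input_tensor)
print(int(detections["num_detections"][0]), "detections")
print(detections["detection_scores"][0][:5].numpy())  # top confidence scores
```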
