Productionizing Object Detection Models with Dash Enterprise
Self-driving cars are widely expected to be the future of personal transportation. Many companies, from startups to tech giants, have been investing in their R&D teams in hopes of becoming the first to achieve full, Level 5 self-driving automation. One capability self-driving cars already depend on is using Artificial Intelligence and Machine Learning to detect, localize, and categorize their surroundings, so that they can make decisions in real time based on the most accurate information available. For this reason, the Udacity team released a dataset of driving scenes in which each object is annotated with a label (pedestrian, car, traffic light, etc.) and a bounding box designating its precise location in the image.
Real life doesn't come with annotations, so an object detection system is crucial for predicting a vehicle's surroundings and ensuring safe Level 5 navigation. Just as deep learning models can learn to assign labels to whole images, models like YOLO v3 have been developed and trained to accurately draw bounding boxes and assign labels to over 80 different types of objects.
However, when ML engineers want to adopt models like YOLO v3 for their own projects, they need a way to visually inspect the output. Additionally, they may need to share such applications with researchers, managers, and executives with varying levels of access. Dash provides accessible, concise, and scalable building blocks for creating this type of tool in native, idiomatic Python, with no JavaScript required. In the next sections, we describe the process of building an app that displays the output of YOLO v3 on publicly available images and compares it with human annotations.
Translating your ideas into a Dash app
As a first step, we built self-contained functions for displaying images, loading YOLO v3, and formatting bounding boxes from metadata files. Once we verified that these functions worked, we created a simple layout with the components we wanted to display, wired together by a single callback. In a few hours, we had built a YOLO Dash app in fewer than 350 lines of code, ready to deploy and share with the world. This app is hosted on Dash Gallery, and you can take it for a spin here.
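To make that structure concrete, here is a minimal sketch of what such an app looks like. The helpers `load_frame`, `run_yolo`, and `add_bounding_boxes` are hypothetical stand-ins for the self-contained ML functions described above, not the app's actual code:

```python
import dash
from dash import dcc, html
from dash.dependencies import Input, Output
import plotly.graph_objects as go

# Hypothetical stand-ins for the self-contained ML helpers described above.
from model import load_frame, run_yolo, add_bounding_boxes

app = dash.Dash(__name__)

# Declarative layout: one control, one figure.
app.layout = html.Div([
    dcc.Slider(id="confidence", min=0, max=1, step=0.05, value=0.5),
    dcc.Graph(id="detection-output"),
])

@app.callback(
    Output("detection-output", "figure"),
    Input("confidence", "value"),
)
def update_figure(confidence):
    # The single callback: run the model on demand and rebuild the figure.
    frame = load_frame()
    boxes = run_yolo(frame, min_confidence=confidence)
    fig = go.Figure(go.Image(z=frame))
    add_bounding_boxes(fig, boxes)
    return fig

if __name__ == "__main__":
    app.run_server(debug=True)
```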
The image, duplicated on both sides, shows a frame from the Udacity dataset. With the controls on the left, you can select an object you want to see, then move to another frame sequentially or at random. In the left-hand image, the human annotations are overlaid using filled scatter traces, whereas the right side shows the bounding boxes predicted by YOLO v3, computed in real time. We chose this model specifically because it can process up to 45 frames per second, which makes it well suited to real-time detection. Using the sliders on the right side, you can adjust the output of the model by changing the minimum confidence needed to display a box, as well as how much overlap you allow between two boxes of the same class. In this app, nothing is pre-computed: the YOLO v3 model runs on demand, on each frame and in response to every parameter change.
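Those two sliders map to two standard post-processing steps for detection models: a confidence cutoff and a non-maximum-suppression (NMS) overlap threshold. Here is a minimal sketch of how such filtering typically works; this is the textbook algorithm (applied per class in practice), not necessarily the app's exact code:

```python
import numpy as np

def filter_boxes(boxes, scores, min_confidence=0.5, iou_threshold=0.4):
    """Keep boxes above the confidence cutoff, then apply non-maximum
    suppression so strongly overlapping boxes collapse into one.
    boxes: (N, 4) array of [x0, y0, x1, y1]; scores: (N,) confidences."""
    keep = scores >= min_confidence
    boxes, scores = boxes[keep], scores[keep]
    order = np.argsort(scores)[::-1]  # highest confidence first
    selected = []
    while order.size > 0:
        i = order[0]
        selected.append(i)
        # Intersection-over-union between the kept box and the rest.
        x0 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y0 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x1 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y1 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(x1 - x0, 0, None) * np.clip(y1 - y0, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = ((boxes[order[1:], 2] - boxes[order[1:], 0])
                 * (boxes[order[1:], 3] - boxes[order[1:], 1]))
        iou = inter / (area_i + areas - inter)
        # Discard boxes that overlap the kept box more than allowed.
        order = order[1:][iou <= iou_threshold]
    return boxes[selected], scores[selected]
```

Raising the confidence slider hides uncertain boxes; raising the overlap slider allows more overlapping boxes of the same class to appear side by side.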
Building modular and reusable code
In a typical notebook-based data science workflow, for example in Jupyter, when you want to parameterize the output of your notebooks, you might use IPython widgets like FloatRangeSlider and Dropdown, which are useful for quick prototyping. However, when you are looking to deploy and scale your code, widget calls inside an otherwise self-contained model function are not always convenient, since you want your code to be as modular as possible in order to reuse it in other settings.
Because Dash layouts are declarative, there is a clear distinction between where a Dash-powered UI ends, and where your ML models start. This modularity makes it easy to reuse your ML functions for other purposes (e.g. to build a command-line interface) or to reuse your layout with a different dataset or model.
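As an illustration of that separation (the file layout, function names, and stub body below are hypothetical): because the detection function only takes arrays and numbers and returns plain tuples, the same code can back a Dash callback or a command-line tool without modification.

```python
# detection.py -- ML-only code, with no Dash imports.
def detect_objects(image, min_confidence=0.5, iou_threshold=0.4):
    """Return a list of (label, score, (x0, y0, x1, y1)) detections.
    Stub body so this sketch runs; the real version would call YOLO v3."""
    return [("car", 0.87, (10, 20, 110, 90))]

# cli.py -- reusing the exact same function from the command line.
import argparse
import imageio.v3 as iio

def main():
    parser = argparse.ArgumentParser(description="Detect objects in an image")
    parser.add_argument("image_path")
    parser.add_argument("--min-confidence", type=float, default=0.5)
    args = parser.parse_args()
    image = iio.imread(args.image_path)
    for label, score, box in detect_objects(image, args.min_confidence):
        print(f"{label}\t{score:.2f}\t{box}")

if __name__ == "__main__":
    main()
```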
Each callback in Dash encapsulates a separate idea. In this self-driving object detection app, the core idea is to display the comparative output of YOLO v3 with a single callback. When you want to add more content to your app (e.g. a model that predicts the 3D structure of the image), all you need to do is update the layout and create a new callback. Of the 350 lines of code in this Dash app, two thirds are totally self-contained, ML-only code, and the remaining third is pure Dash layout and callback code, all nicely separated and unit-testable.
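Continuing the earlier sketch, adding such a 3D-structure view would mean one new component and one new, independent callback; the existing YOLO callback stays untouched. Here `predict_depth` and the component ids are made up for illustration:

```python
import plotly.express as px
from dash import dcc
from dash.dependencies import Input, Output

# One new component in the layout...
app.layout.children.append(dcc.Graph(id="depth-output"))

# ...and one new callback encapsulating the new idea.
@app.callback(
    Output("depth-output", "figure"),
    Input("confidence", "value"),  # reusing the slider from the sketch above
)
def update_depth(_confidence):
    frame = load_frame()              # same ML helper as before
    depth_map = predict_depth(frame)  # hypothetical 3D-structure model
    return px.imshow(depth_map, color_continuous_scale="Viridis")
```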
Interactive navigation with native Plotly figures
In Dash, Plotly figures are first-class citizens. Just as you can use them for scatter plots and bar charts, you can use Plotly figures to make your images interactive. For example, with Plotly images, you can easily build an interactive image explorer in about 40 lines of Python, where you can zoom in, pan, and resize each image to your liking. This is useful for spotting overlapping or inaccurate bounding boxes predicted by your detection model, details that might be missed if we only displayed static images. Additionally, you can assign hover text to each bounding box, carrying information such as the model's confidence.
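For instance, a hoverable bounding-box overlay can be built from an image trace plus one filled scatter trace per box. The following is a simplified, self-contained sketch of the idea, not the app's exact code:

```python
import numpy as np
import plotly.express as px
import plotly.graph_objects as go

def make_figure(image, boxes):
    """Zoomable image with one filled scatter trace per bounding box;
    hovering a box shows its label and confidence.
    boxes: list of (label, score, (x0, y0, x1, y1)) tuples."""
    fig = px.imshow(image)  # sets up axes, orientation, and aspect ratio
    for label, score, (x0, y0, x1, y1) in boxes:
        fig.add_trace(go.Scatter(
            x=[x0, x1, x1, x0, x0],   # trace the rectangle's corners
            y=[y0, y0, y1, y1, y0],
            mode="lines",
            fill="toself",
            opacity=0.3,
            name=label,
            text=f"confidence: {score:.2f}",
            hoverinfo="name+text",
        ))
    return fig

# Demo with a random image and one hypothetical detection.
image = np.random.randint(0, 255, (200, 300, 3), dtype=np.uint8)
make_figure(image, [("car", 0.91, (40, 60, 140, 130))]).show()
```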
Dash is the last mile for deploying and scaling AI initiatives
With Dash, it is easier than ever to take a complete and self-contained AI model and deploy it in production, where it can be used in many different scenarios: by a small group of managers and executives, a large group of data scientists and developers, and even hundreds of thousands of customers throughout the world. Since Dash is stateless and callback-based, your apps are easy to scale and can offer a tailored experience for each user. With a fully customizable layout, you can make your app look exactly the way you want, all while keeping it cleanly separate from your data processing and ML modeling code.
Are you interested in building Dash apps like this and deploying them in an enterprise environment on scalable, modern, Kubernetes infrastructure? If so, reach out to learn more. You can also check out all the ML examples in our app gallery to see different ways Dash can be used as the front-end for your Machine Learning projects.