Using MLFlow and Docker to Deploy Machine Learning Models

Paul Bendevis
4 min read · Feb 24, 2020


Modern Data Science Stack

This is a walkthrough of how to productionize machine learning models, from the ETL behind a custom API all the way to an endpoint serving the model. We will look at how to deploy a machine learning model behind a URL for production use on a Kubernetes cluster, and at how MLflow's functionality assists in the model deployment lifecycle.

Summary:

  1. Train Model
  2. Build API as SKLearn Pipeline
  3. Save Pipeline + Model as MLFlow model
  4. Build docker image with start script to call MLFlow to serve the model
  5. Run as a deployment + service in Kubernetes

Training Model

This is where you do all of your data science work and build the model you want to put into production. We will use the iris dataset and build a simple classifier.

from sklearn.datasets import load_iris
from xgboost import XGBClassifier

data = load_iris()
params = {
    "learning_rate": 0.05,
    "n_estimators": 1000,
    "max_depth": 3,
    "objective": 'multi:softmax',
    "num_class": 3,
}
mdl = XGBClassifier(**params)
# ..CV..hyper parameter optimization..
mdl.fit(data['data'], data['target'])

API + SKLearn Pipeline

You may want to add ETL or post-processing of the model response as part of your API. Because MLflow has native support for SKLearn, the easiest way to do this is to put the ETL into an SKLearn Pipeline. To do this you need to implement the ETL/post-processing as SKLearn transformers that are compatible with the Pipeline.

Ex: we want our response returned as JSON.

from sklearn.pipeline import Pipeline
from model_format import ModelTransformer, JSONResponse

mdl_api = Pipeline([("mdl", ModelTransformer(mdl)),
                    ("pack", JSONResponse('iris'))])

# output = mdl_api.predict(X)
# {'iris': 0}
# model_format.py
from sklearn.base import BaseEstimator, TransformerMixin

class ModelTransformer(TransformerMixin, BaseEstimator):
    def __init__(self, mdl):
        self.mdl = mdl

    def fit(self, X, y=None):
        # fit the wrapped model, but return self so the transformer
        # follows the sklearn fit/transform convention inside a Pipeline
        self.mdl.fit(X, y)
        return self

    def transform(self, X):
        return self.mdl.predict(X)

class JSONResponse(TransformerMixin, BaseEstimator):
    def __init__(self, key_name):
        self.key_name = key_name

    def fit(self, X, y=None):
        # nothing to fit
        return self

    def transform(self, X):
        return {self.key_name: X[0]}

    def predict(self, X):
        # the final step of a Pipeline needs predict() so that
        # mdl_api.predict(X) works end to end
        return self.transform(X)

Test your pipeline and model locally at this point to make sure you have implemented your transformers correctly.
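For example, a quick local sanity check on one row of the iris data might look like this:

# one-row smoke test of the full pipeline
sample = data['data'][:1]
print(mdl_api.predict(sample))  # e.g. {'iris': 0}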

Note: currently, sklearn does not maintain pandas dataframes throughout the pipeline. If you wish to write ETL code that operates on pandas dataframes, you can find thorough examples here: https://ramhiser.com/post/2018-04-16-building-scikit-learn-pipeline-with-pandas-dataframe/
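If you do want DataFrames to flow through a pipeline, the usual pattern is to write transformers that accept and return a DataFrame. A minimal sketch (the ColumnSelector name and its columns argument are illustrative, not part of the code above):

from sklearn.base import BaseEstimator, TransformerMixin

class ColumnSelector(TransformerMixin, BaseEstimator):
    def __init__(self, columns):
        self.columns = columns

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        # indexing with a list of column names returns a DataFrame,
        # so the pandas type is preserved for downstream steps
        return X[self.columns]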

Save Pipeline as MLFlow Model

MLflow makes packaging SKLearn models very easy. It also supports other frameworks like TensorFlow, R, and MLlib…

import mlflow
import mlflow.sklearn

with mlflow.start_run(run_name='Experiment 1'):
    mlflow.sklearn.log_model(mdl_api, 'prod_mdl')
MLFlow displaying your model.
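Before wrapping the artifact in a Docker image, it can be worth reloading the logged model to confirm it round-trips. A minimal sketch, where <run_id> is a placeholder for the ID MLflow assigned to the run above:

import mlflow.sklearn

reloaded = mlflow.sklearn.load_model("runs:/<run_id>/prod_mdl")
print(reloaded.predict(data['data'][:1]))  # should match the local pipeline output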

Build a Docker Image

You will likely need to write your own Dockerfile if your model has dependencies that can’t be included by MLFlow’s docker build script. The run script should start the MLFlow model service on the artifact from the previous step.

#!/bin/sh
# run.sh
mlflow models serve -m $ARTIFACT_STORE -h $SERVER_HOST -p $SERVER_PORT --no-conda

Below is an example Dockerfile. The main idea is to copy over the artifact and code, set up your Python environment, and use MLflow to serve the model inside the container.

FROM python:3.7.4

ARG RUN_ID
ARG MODEL_NAME

ENV SERVER_PORT 5000
ENV SERVER_HOST 0.0.0.0
ENV FILE_STORE /opt/mlflow/fileStore
ENV ARTIFACT_STORE /opt/mlflow/artifactStore
ENV PYTHONPATH /opt/mlflow/utils

RUN mkdir -p /opt/mlflow/scripts \
    && mkdir -p ${FILE_STORE} \
    && mkdir -p ${ARTIFACT_STORE}

RUN pip install pandas==0.25.1 \
    && pip install scikit-learn==0.21.3 \
    && pip install xgboost==0.90 \
    && pip install mlflow==1.2.0 \
    && apt-get update \
    && apt-get install -y git

# Copy over artifact and code
COPY run.sh /opt/mlflow/scripts/
COPY model_format.py /opt/mlflow/utils/
COPY /temp_artifacts/ ${ARTIFACT_STORE}/

RUN chmod +x /opt/mlflow/scripts/run.sh

ENTRYPOINT ["/usr/bin/env"]
CMD ["bash", "/opt/mlflow/scripts/run.sh"]
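With run.sh, model_format.py and the logged artifacts (copied into ./temp_artifacts) in the build context, building and smoke-testing the image locally looks roughly like this; the image tag is illustrative:

# build the image and run it locally, mapping the MLflow serving port
docker build -t iris-mlflow-serve:v0.0.1 .
docker run --rm -p 5000:5000 iris-mlflow-serve:v0.0.1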

Deploy to Kubernetes

Deploy the Docker image to Kubernetes and set up a service to expose the pod. The main point is to connect the container port to the same port where MLflow is serving the model.

Deployment yaml

...
containers:
  - name: ml-model-in-production
    image: ecr.us-east-1.amazonaws.com/repo:v0.0.1
    ports:
      - containerPort: 5000

Service yaml

...
ports:
  - port: 8000
    protocol: TCP
    targetPort: 5000
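Assuming the two manifests are saved as deployment.yaml and service.yaml (file names are illustrative), rolling them out is a couple of kubectl commands:

# apply the manifests and confirm the pod and service are up
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl get pods,svc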

You can test your end-to-end model at this point. The exposed service will be on port 8000.

import requests

# X is a pandas DataFrame of feature rows, serialized in the pandas
# "split" orientation that MLflow's /invocations endpoint accepts
headers = {"content-type": "application/json", "Accept": "text/plain"}
response = requests.post("http://url.us-east-1.elb.amazonaws.com:8000/invocations",
                         data=X.to_json(orient="split"),
                         headers=headers)
print(response.content)

For more information about how MLFlow serves the model see their documentation: https://www.mlflow.org/docs/latest/models.html

Conclusion

This is a quick way to use a modern data science stack to deploy your machine learning models into production.

Note: the build sequence is long and complicated, leaving room for errors when pushing and syncing across all of the platforms involved (git, the Docker image registry, and the Kubernetes deployment).



Paul Bendevis

Data scientist. I’ve had fun working in a variety of roles doing modelling for aviation, insurance, laser design, solar power & water desalination.