Deploying Machine Learning Models on Kubernetes with Rancher

Navin Chandra
3 min read · Jul 29, 2022


In my previous blogs I built some Machine Learning models for predicting future stock prices. Now I will deploy them on Rancher RKE2 clusters and walk through the whole process of deploying an ML model on a k8s cluster.

Making a Container from the ML code

Create a Dockerfile inside the folder containing main.py and requirements.txt, and define it like this:

# Lightweight Python base image
FROM python:3.9-slim
# Copy the app code into the image and set the working directory
COPY . /app
WORKDIR /app
# Streamlit listens on port 8501 by default
EXPOSE 8501
# Install the Python dependencies
RUN pip3 install -r requirements.txt
# Start the Streamlit app when the container runs
ENTRYPOINT ["streamlit", "run"]
CMD ["main.py"]
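
For reference, requirements.txt simply lists the Python packages the app needs. The exact contents depend on your model code; a minimal sketch for a Streamlit stock-prediction app (the package names below are assumptions, not the actual file from the repo) could be:

# example dependencies only; replace with the packages your model actually uses
streamlit
pandas
yfinance
scikit-learn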

From inside the folder containing the Dockerfile, run docker build -t streamlitstockapp:2.0 .

If you want to run the container on your local system, use docker run -p 8501:8501 streamlitstockapp:2.0 and access the app at localhost:8501.

Pushing the image to Docker Hub

First, tag the image with your Docker Hub account name: docker tag streamlitstockapp:2.0 navin772/streamlitstockapp:2.0

Log in to Docker Hub from the terminal with docker login

Push the image to Docker Hub with docker push navin772/streamlitstockapp:2.0

Creating YAML files for deploying the app on the k8s cluster

namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: finance

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: stock-app
  name: stock-app
  namespace: finance
spec:
  replicas: 2  # creating two pods for our app
  selector:
    matchLabels:
      app: stock-app
  template:
    metadata:
      labels:
        app: stock-app
    spec:
      containers:
        - image: navin772/streamlitstockapp:2.0  # Docker image name
          name: stock-preds  # container name
          ports:
            - containerPort: 8501
              protocol: TCP

service.yaml

apiVersion: v1
kind: Service
metadata:
  name: stock-app
  labels:
    run: stock-app
  namespace: finance
spec:
  type: NodePort
  ports:
    - port: 8501
      targetPort: 8501
      protocol: TCP
      name: http
    - port: 443
      protocol: TCP
      name: https
  selector:
    app: stock-app

kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - deployment.yaml
  - service.yaml

Put all these files into a folder named yaml_files.
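
The resulting directory layout looks like this (all file names are the ones defined above):

yaml_files/
├── deployment.yaml
├── kustomization.yaml
├── namespace.yaml
└── service.yaml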

Deploying the YAML files on RKE2

Export your kubeconfig file for the cluster you want to deploy the app on.

export KUBECONFIG=~/<cluster>.yaml

Deploy using Kustomize: kubectl apply -k yaml_files

After some time (be patient with the deployment) the deployment will be ready. You can visit the app at <NodeIP>:<NodePort>.
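
To check on the rollout and to find the NodePort assigned to the service, you can run the usual kubectl queries (the finance namespace and stock-app names come from the manifests above):

kubectl -n finance get pods           # both stock-app pods should be in the Running state
kubectl -n finance get svc stock-app  # the NodePort appears in the PORT(S) column, e.g. 8501:3xxxx/TCP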

[Screenshots: Rancher deployments page with 2 pods running for the ML app, the Services page, the app running on the RKE2 cluster, and the predictions made for the Tesla stock.]

Conclusion

In this way, Machine Learning models can be deployed on Kubernetes using Rancher. Kustomize makes it easy to apply all the required YAML files with a single command. We learned how to write a Dockerfile to containerize a Machine Learning application and then deploy it to a k8s cluster.

To get the code for this, refer to my GitHub repo below:
