Part 1 — Deep Learning: Scaling your neural networks with containerization and a message broker
Part 1: Using containerization and a message broker to scale your models (computer vision example)

Deep learning, if it is not already, will soon be part of every aspect of our lives. Like it or not, it's a tidal wave of innovation permeating most technology platforms and applications. Breaking into the world of deep learning as a data scientist or technologist normally starts by cobbling together scripts using popular libraries (NumPy, TensorFlow, Keras, PyTorch, etc.) or by adapting scripts available publicly on GitHub. It involves a lot of experimenting, usually in poorly optimized Python or another dynamically typed scripting language, written by someone new to these technologies.
You have a trained neural network — how do you start using it in your platform?

You now have a model. How do you get it into your company's production platform so it can be used by your application, website or software? Your company may use a completely different technology stack, or lack the technical resources to start using your model in earnest. How do you manage your model in production, roll out new versions of the model, or run the model (which is likely slow) at scale?

The solution — put your model in a container and connect to it via a message broker
Containerization (Docker for example) and a message broker system such as RabbitMQ are ideal ‘free’ solutions to help you organize your models and scale your simple model into something that can be used by your production platform. Containers can be scaled across many servers in the cloud which will enable you to have many servers executing your model in an organized parallel way. The containers will communicate with your production platform via the message broker.

What is containerization? (Docker)
https://www.docker.com/resources/what-container
Your code, the package dependencies it uses, its run-time (Python for example) and anything else needed can all be packaged into a ‘container’ image. This container image can then run as an isolated unit on a server within Docker. The environment needed to execute your model is packaged into the Docker image itself. You can run multiple copies of the same container on one server or across many servers. These container images can be versioned — this provides a very efficient way to manage updates to your models. You can simply deploy and start a new (tested) container image that contains a new model version.
What is a message broker system? (RabbitMQ)
https://www.rabbitmq.com/features.html

A message broker system enables software to publish ‘messages’ to a broker so any software with network connectivity to the broker can see and consume these messages. It’s an easy way to send data reliably between separate pieces of software running on different servers within your platform. Messages sent to the broker can be text based or even binary data such as an object or image.

Software consumes messages from queues - these queues can be configured to be 'durable', which ensures the messages published by a producer are always delivered, even in the event of a server restart. Multiple clients can consume messages from a single RabbitMQ queue in a round-robin fashion - this provides a perfect out-of-the-box solution for scaling many clients to process messages in parallel when a high level of throughput is required (see image above).
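The round-robin dispatch can be pictured with a few lines of plain Python. This is only an illustration of how the broker shares one queue between consumers - not RabbitMQ code - and the worker names are made up:

```python
from itertools import cycle

# Three hypothetical consumers attached to one queue
consumers = ['worker-1', 'worker-2', 'worker-3']

# RabbitMQ hands each successive message to the next consumer in turn
assignment = cycle(consumers)
messages = ['frame-%d' % i for i in range(6)]

dispatched = [(msg, next(assignment)) for msg in messages]
for msg, consumer in dispatched:
    print('%s -> %s' % (msg, consumer))
```

Each consumer ends up with a third of the messages, which is exactly what makes adding more consumers an easy way to add throughput.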
Is RabbitMQ fast enough? YES! See how Google pushed it to process 1 million messages / second quite a few years back:
https://content.pivotal.io/blog/rabbitmq-hits-one-million-messages-per-second-on-google-compute-engine
Steps to migrate your model
Step 1 — Black-boxing the inputs and outputs of your model via the message broker
Below I will show how you can use RabbitMQ to communicate with your model as a black box. You will need to modify your model code to accept JSON as the input and return JSON as the output. The model will take its input from a RabbitMQ 'queue' and publish the result to a RabbitMQ 'exchange'. I have provided a simple RabbitMQ helper Python library for download which will connect to RabbitMQ and manage the connection for you. Here is an example of how to deserialize/serialize JSON in your model using this library:

The Python package pika is needed to connect to RabbitMQ (add with pip):
pip install pika
Sample code using my provided RabbitMQ library is below. You can initialize your model at the start (see where commented) and your model will perform its calculation/work in the 'callback_on_message_received' function anytime a message is sent to RabbitMQ to be processed:
A formatted version of the below Python can be found here.
import pika
import json
import random
from RMQ import BasicRMQClient

rmq_server = '192.168.56.1'
rmq_port = 5672
rmq_user = 'model_user'
rmq_password = 'm0d3l***'
rmq_virtual_host = '/'
rmq_source_queue = 'queue.model.input'
rmq_completed_exchange = 'exchange.model.output'

#######################
# Initialize your model here
#######################
# currently there's nothing to initialize, but this is where you'd
# instantiate your model, load weights, etc.

#######################
# This is a special function that gets called when a message is received on queue.model.input
# Add your model processing code here
#######################
def callback_on_message_received(ch, method, properties, body):
    print('Message received %s' % body)

    # deserialize the json string into an object
    params = json.loads(body)

    # access the json params like below once it has been deserialized
    print('Parameters: %s, %s, %s' % (params['param1'], params['param2'], params['param3']))

    # perform model calculation here (choosing a random value as an example)
    output_value = random.randint(1, 5)

    # what JSON will this model return?
    return_json = '{"category": %s}' % output_value

    # Send the return JSON to RabbitMQ exchange exchange.model.output
    rmq_client.publish_exchange(ch, rmq_completed_exchange, return_json)

#######################
# Program starts here:
#######################
# Create RMQ client
rmq_client = BasicRMQClient(rmq_server, rmq_port, rmq_user, rmq_password, rmq_virtual_host)

# Start processing messages from rmq_source_queue - this blocks the thread until a message is received
rmq_client.process(callback_on_message_received, rmq_source_queue, rmq_completed_exchange)
Understanding how RabbitMQ ‘Exchanges’ and ‘Queues’ work is a fundamental part of using RabbitMQ correctly. I’d recommend reading this quickly to be sure you understand it.
Once you have the above complete for your model, your existing technology stack or production platform will have to merely send the required input model data to the RabbitMQ exchange and wait for the response to come back on the RabbitMQ queue. I’ll cover RabbitMQ setup and creating the actual exchanges and queues below.
To utilize your model via RabbitMQ from your own production platform, you will need to modify your existing code base to connect to the RabbitMQ message broker to send the input data and listen for the output data.
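As a sketch of what that code change amounts to on the producing side, the platform only needs to build and parse JSON strings (the helper names below are my own, and the param1-param3 fields mirror the callback example above):

```python
import json

def build_input_message(param1, param2, param3):
    # Serialize the model inputs into the JSON string the model's callback expects
    return json.dumps({'param1': param1, 'param2': param2, 'param3': param3})

def parse_output_message(body):
    # The model publishes e.g. '{"category": 3}' to exchange.model.output
    return json.loads(body)['category']

message = build_input_message(1.5, 'red', 42)
category = parse_output_message('{"category": 3}')
print(message)
print(category)
```

The strings produced here are what you would hand to whichever RabbitMQ client library your stack uses for publishing and consuming.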
There are RabbitMQ client libraries for many different technologies: (.Net, Java/Spring, Python, Ruby, PHP, JavaScript/Node.js, C/Swift, Rust, Scala, Groovy, C++, GoLang, iOS/Android, etc..).
https://www.rabbitmq.com/devtools.html
Step 2 — Setting up RabbitMQ and testing
You will need to set up RabbitMQ on a server that is accessible to your production platform. I've provided the steps as a script for Ubuntu 18.04 in Appendix 1 below.
In a terminal on your RabbitMQ server, let's create a RabbitMQ user 'model_user' for your model to use:
sudo rabbitmqctl add_user model_user m0d3l***
# It's not advised that you make this user an administrator, but it makes this example simpler
sudo rabbitmqctl set_user_tags model_user administrator
sudo rabbitmqctl set_permissions -p / model_user ".*" ".*" ".*"
Now let's create the exchanges and queues that your model will connect to. We will create these with the RabbitMQ web UI (it will be at http://localhost:15672 on the server hosting RabbitMQ and you can sign in with the model_user/m0d3l*** credentials from above). We need to create two exchanges and two queues:
queue.model.input bound to exchange.model.input
queue.model.output bound to exchange.model.output
The flow of messages will eventually be as per below:

1. Creating the exchanges:
Create exchange.model.input & exchange.model.output. Ensure Type = ‘fanout’ is selected as per below.

2. Creating the queues:
Create queue.model.input & queue.model.output.

3. Binding the queues to the exchanges
Do this for each queue:
a. Select the queue by clicking on it:

b. Bind the queue to the exchange by entering the exchange:
queue.model.input bound to exchange.model.input
queue.model.output bound to exchange.model.output

That's it! You're now all set to write input data from your production platform to exchange.model.input; your running models will read messages from queue.model.input, which is bound to this exchange. The model will write completed data to exchange.model.output, which your production platform can read as messages from the bound queue.model.output.
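If you'd rather script this setup than click through the web UI, the same exchanges, queues and bindings can be declared in code. Below is a minimal sketch using the standard pika 1.x channel methods (exchange_declare, queue_declare, queue_bind); the channel is passed in, so any object exposing those methods will work:

```python
def declare_model_topology(channel):
    # Declare the fanout exchanges, durable queues and bindings used above.
    # `channel` is expected to behave like a pika BlockingChannel.
    for side in ('input', 'output'):
        exchange = 'exchange.model.%s' % side
        queue = 'queue.model.%s' % side
        channel.exchange_declare(exchange=exchange, exchange_type='fanout', durable=True)
        channel.queue_declare(queue=queue, durable=True)
        channel.queue_bind(queue=queue, exchange=exchange)
```

With a real connection you would call it as `declare_model_topology(pika.BlockingConnection(params).channel())`. Re-declaring is harmless as long as the arguments match what already exists on the broker.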
Testing via the RabbitMQ web UI
You can quickly test that your model is connecting to RabbitMQ okay by manually sending JSON via the RabbitMQ web UI to your exchange.model.input. Be sure that your model is set to connect to the correct IP address of your RabbitMQ server in your model code:
...
rmq_server = '192.168.0.29'
...
Start your model:
python model.py
Browse to the RabbitMQ web UI (http://localhost:15672 and sign in with model_user/m0d3l***):
Select exchanges and select exchange.model.input:

Now create some test JSON and click ‘Publish message’:

If your model is correctly connected to RabbitMQ, you should see it process this message.
Step 3 — Creating a Docker image which contains your model and all dependencies

Above you have converted your model to connect to RabbitMQ and to utilize JSON as inputs and outputs. All we have to do now is create a Docker image for this model.
To install Docker Community Edition on Ubuntu 18.04, follow the steps in Appendix 2 below.
This is a basic overview of some of the functionality of Docker - it's a powerful platform with many other tools to help you manage and deploy your images and containers. I suggest you take the time to familiarize yourself with its well-written guides and understand what's available.
Once Docker is installed, let’s go over the simple requirements to package your Python model into a Docker image. Then we can create a Docker container from your image.
There is one key file you need to create a Docker image: a file named 'Dockerfile'. A Dockerfile is a text document that contains all the commands needed to assemble a Docker image.
An example Dockerfile for our example Python model is:
# Use an official Python runtime
FROM python:2.7

# Set the working directory to /app
WORKDIR /app

# Copy the current directory contents into an app directory within the container /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt

# Run model.py when the container launches
CMD ["python", "model.py"]
pip and Python packages
The above Dockerfile uses a requirements.txt file which contains the Python packages that are needed to run the model. These packages will be downloaded at the time you create the Docker image. This file currently only contains:
pika
So you should have a directory similar to this containing all the files required for your Docker image. Your model is now ready to be built into a Docker image and run as a container. Files for the image:
Dockerfile
model.py
requirements.txt
RMQ.py
Docker steps to create your image and execute it as a container:
1. Build the image:
# build the docker image (tag it with your version)
sudo docker image build -t model:1.0 .
You should see that Docker downloads the Python and pika dependencies and that your Docker image builds successfully.
2. View your built Docker images:
# list all Docker images
sudo docker images
3. Create a container from the image you just created:
sudo docker create -i model:1.0
4. View the Docker containers:
sudo docker container ls --all
IMPORTANT: Note the Container Id of the container you wish to execute.
CONTAINER ID   IMAGE       COMMAND             CREATED          STATUS    PORTS   NAMES
4f1135cfc822   model:1.0   "python model.py"   16 minutes ago   Created           practical_lumiere
5. Start your Docker container (using the Container Id from above):
sudo docker start 4f1135cfc822
6. View running containers:
# show running docker containers
sudo docker ps
7. View the log of a running container (using the Container Id):
sudo docker logs 4f1135cfc822
That's it: if the steps above worked, you now have a running Docker container!
To stop a container (using the Container Id):
sudo docker stop 4f1135cfc822
Copying a Docker image from one server to another
To manually copy your Docker image, follow these steps:
# Syntax is: sudo docker save -o <tar file> <image name>
sudo docker save -o model_1.0.tar model:1.0
The above will generate a model_1.0.tar file. You can copy this to another server using normal means. Once copied to a different server, load the tar file into Docker as an image with:
sudo docker load -i model_1.0.tar
As noted above, this is a very basic way to manage your Docker images. Please see Docker Desktop and Kubernetes for other solutions.
Example: Computer Vision — Face detection with age & gender and dog detection with breed prediction.

The above methodology using RabbitMQ and containers can easily be applied to the field of computer vision. Video frames can be pushed directly to a RabbitMQ exchange bound to multiple queues for identification and categorization by ‘n’ models. The results of these models can be published back to RabbitMQ exchanges and then aggregated. See the RabbitMQ setup below where video frames are published directly to RabbitMQ to be consumed by multiple models:

The benefit of the above architecture is that the 'fanout' exchanges allow an arbitrary number of queues to be bound to them. This means new services can easily consume and monitor the messages at each exchange. You could, for example, build a persistence service which saves all data to a DB, or a monitoring alarm service that looks for specific scenarios - such as a dog breed combined with a child, or too many people in a room - and then notifies! It's an 'open' design.
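As a sketch of what such an aggregation service might do (the message shapes and model names here are my own invention; each result is assumed to carry the id of the video frame it belongs to):

```python
from collections import defaultdict

# The set of models whose results we wait for before a frame is 'complete'
EXPECTED_MODELS = {'face', 'dog'}

class FrameAggregator:
    def __init__(self):
        self._partial = defaultdict(dict)

    def add_result(self, frame_id, model_name, result):
        # Store one model's output; return the merged results once every
        # expected model has reported for this frame, otherwise None.
        self._partial[frame_id][model_name] = result
        if set(self._partial[frame_id]) == EXPECTED_MODELS:
            return self._partial.pop(frame_id)
        return None

agg = FrameAggregator()
first = agg.add_result(7, 'face', {'age': 31, 'gender': 'F'})  # still waiting on the dog model
both = agg.add_result(7, 'dog', {'breed': 'beagle'})           # frame 7 now complete
```

In the real pipeline, add_result would be called from the consumer callbacks on each model's output queue.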
I've published the code that makes up the face detection Docker image. You'll see there is a more complicated Dockerfile here containing a 'make' command: when the Docker image is created, a component is compiled from source. Also, environment variables can be set in the Dockerfile to be accessed by your executing code in the Docker image.
# Use a Python runtime
FROM python:2.7

# Set the working directory to /app
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Need cmake to compile dlib
RUN apt -y update
RUN apt install -y build-essential cmake pkg-config
RUN apt install -y libx11-dev libatlas-base-dev
RUN apt install -y libgtk-3-dev libboost-python-dev

ENV rmq-server 192.168.0.29
ENV rmq-source-exchange queue.frames.source.face-detection
ENV rmq-completed-exchange exchange.object.face.source

# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt

# Run face_detection.py when the container launches
CMD ["python", "face_detection.py"]
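Inside the running container, face_detection.py can pick those ENV values up from the process environment. A minimal sketch (the local-development fallback defaults are my own):

```python
import os

# Read the settings baked into the image via ENV, falling back to
# local-development defaults when run outside Docker
rmq_server = os.environ.get('rmq-server', 'localhost')
rmq_source_exchange = os.environ.get('rmq-source-exchange', 'queue.frames.source.face-detection')
rmq_completed_exchange = os.environ.get('rmq-completed-exchange', 'exchange.object.face.source')

print('Connecting to RabbitMQ at %s' % rmq_server)
```

This lets you point the same image at different RabbitMQ servers and exchanges without rebuilding it.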
Requirements.txt (Python packages downloaded by pip into the Docker image):
numpy
scipy
matplotlib
scikit-image
opencv-python
pika
face_recognition
argparse
pysmile
pybase64
The entire code base for this face detection Docker image, the .Net webcam code and the .Net Core aggregation code is here on GitHub. I haven't included the dog detection or the age and gender model implementations - contact me if you have any questions.


The face recognition model is a separate project here and installable directly with pip ('pip install face_recognition') to be used by Python.
Note: When using multiple containers on the same host that run models utilizing the GPU, you may run out of GPU memory very quickly. In this case, you’ll have to run the containers on separate hosts.
Appendix 1: Setting up RabbitMQ on Ubuntu 18.04 (Bionic Beaver)
RabbitMQ needs Erlang to be installed - the script below covers this:
#############
# PART 1: Install the latest Erlang
#############
# Import the Erlang repo key
wget -O- https://packages.erlang-solutions.com/ubuntu/erlang_solutions.asc | sudo apt-key add -
# Add the Erlang repository
echo "deb https://packages.erlang-solutions.com/ubuntu bionic contrib" | sudo tee /etc/apt/sources.list.d/erlang.list
# Install Erlang
sudo apt update
sudo apt -y install erlang erlang-nox

#############
# PART 2: Install RabbitMQ
#############
# Add RabbitMQ repo keys
wget -O- https://dl.bintray.com/rabbitmq/Keys/rabbitmq-release-signing-key.asc | sudo apt-key add -
wget -O- https://www.rabbitmq.com/rabbitmq-release-signing-key.asc | sudo apt-key add -
# Add the RabbitMQ repository
echo "deb https://dl.bintray.com/rabbitmq/debian $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/rabbitmq.list
# Install RabbitMQ server
sudo apt update
sudo apt -y install rabbitmq-server

# If the Erlang version isn't recent enough because apt install didn't find it,
# you can download and install Erlang manually (but first remove the current version):
# sudo apt -y remove erlang erlang-nox
# Download manually from here:
# https://www.erlang-solutions.com/resources/download.html

#############
# PART 3: Configure the RabbitMQ instance
#############
sudo service rabbitmq-server start
# Enable the web plugin for RabbitMQ management
sudo rabbitmq-plugins enable rabbitmq_management
sudo service rabbitmq-server restart
# Visit web admin at: http://localhost:15672
The default username and password for RabbitMQ is guest/guest. You can sign into the web UI with this once you have enabled the web management plugin as above.
Appendix 2: Setting up Docker Community Edition on Ubuntu 18.04 (Bionic Beaver)
#############
# Install Docker Community Edition
#############
sudo apt update
# Remove any old versions of Docker
sudo apt remove docker docker-engine docker.io
# Install Docker
sudo apt install -y docker.io
# Set Docker to start at system startup
sudo systemctl start docker
sudo systemctl enable docker
