AWS Deep Learning Containers Are Awesome 🙌

wrannaman
SugarKubes
4 min read · Apr 8, 2019

This post is sponsored by SugarKubes, a container marketplace. Want to start running AI at the edge? Need some sweet machine learning models that work out of the box? Check us out at https://sugarkubes.io.

Let's spin up and deploy a deep learning model for inference using AWS's new Deep Learning Containers. If you haven't heard of them, you can learn more here, but here's the short version:

TLDR; they're AMIs with sensible preinstalled stuff that generally sucks to set up by hand (I'm looking at you, CUDA).

Hop into the AWS console, click "Launch Instance," and select the Deep Learning AMI (Ubuntu) v22.0.

The instance type we're going to select is a g3s.xlarge, the cheapest GPU option you're going to get: 11.75 ECUs, 4 vCPUs (2.7 GHz Intel Xeon E5-2686 v4), 30.5 GiB of memory, EBS-only storage. It's sporting an NVIDIA Tesla M60 GPU, which will be enough for us today.

Make sure you add a little extra storage. These images use 70GB out of the box 🙊.
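If you're scripting this instead of clicking through the console, the extra storage can be requested at launch time with the AWS CLI. A sketch, assuming a 100 GB root volume; the AMI ID and key name below are placeholders:

```shell
# Launch a g3s.xlarge with a 100 GB root volume (IDs are placeholders).
# Ubuntu AMIs typically mount the root device at /dev/sda1.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type g3s.xlarge \
  --key-name my-key \
  --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":100,"VolumeType":"gp2"}}]'
```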

The rest of the setup is the same as any other EC2 instance. If you're unsure how to launch an EC2 instance and connect to it, go do yourself a learn and come back.

Once you're connected, let's hop in the server and see what we've got!
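A quick sanity check once you're in: the Deep Learning AMI ships with the NVIDIA driver and CUDA toolkit preinstalled, so both of these should work out of the box:

```shell
# GPU and driver visible?
nvidia-smi
# CUDA compiler version bundled with the AMI
nvcc --version | grep release
```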

🄳

CUDA is working! Great! Now let's install nvidia-docker2 and pull a deep learning image!

# install docker
sudo apt update
sudo apt-get install \
  apt-transport-https \
  ca-certificates \
  curl \
  gnupg-agent \
  software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
# Install nvidia-docker2
sudo apt install nvidia-docker2
# send SIGHUP so the docker daemon reloads the nvidia-docker config
sudo pkill -SIGHUP dockerd
# pull a test image to make sure it's working
docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
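If that test container fails, it's worth checking that the nvidia runtime actually registered with the Docker daemon; on Ubuntu, the nvidia-docker2 package writes its runtime config to /etc/docker/daemon.json:

```shell
# runtime config written by the nvidia-docker2 package
cat /etc/docker/daemon.json
# "nvidia" should appear in the daemon's runtime list
docker info | grep -i runtimes
```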

I swear I spun up one instance and docker.io worked, and another it didn't 🤷‍♀️.

Anyways, the output should show us our familiar nvidia-smi details.

nvidia-smi working from docker!

Now for the fun part… let's deploy an object detector!

At this point, I'm going to use a SugarKubes object detection model. Head over to SugarKubes to sign up and learn more about accessing this and other awesome containers. This one loads a darknet model trained on the Open Images dataset v4, and it boasts 600 object classes!

docker login registry.sugarkubes.io
docker run \
  --runtime=nvidia \
  -ti \
  -p 8080:8080 \
  -e PORT=8080 \
  registry.sugarkubes.io/sugar-cv/object-detection:gpu-run-cuda-9.2

Don't forget to open up port 8080 in your security group so the API can be accessed from the outside world.
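You can do that in the console, or sketch it with the AWS CLI (the security group ID below is a placeholder for your instance's group):

```shell
# Allow inbound TCP 8080 from anywhere (placeholder group ID).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0abc123def4567890 \
  --protocol tcp \
  --port 8080 \
  --cidr 0.0.0.0/0
```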

Now let's spin up a local resource to hit our new API.

# This one is public, no login required. Run it on your machine or on the
# new EC2 instance; if you run it on the EC2 instance, make sure to open
# the port so you can access it.
docker run -ti -p 3000:3000 -e PORT=3000 sugarkubes/testing_ui:latest

Visit localhost:3000 to see the UI, and make sure to change the URL to the public IP of your new EC2 instance.
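If you'd rather test from a terminal than the UI, a curl against the detection API works too. The route and form field here are illustrative (check the container's docs for the real endpoint); swap in your instance's public IP:

```shell
# hypothetical endpoint and field name -- consult the container docs
curl -s -X POST \
  -F "image=@test.jpg" \
  "http://YOUR_EC2_PUBLIC_IP:8080/detect"
```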

It works! And not bad inference time!

Overall I think these AMIs are great. They're a little bloated (they literally come with everything you could need), but storage is cheap, and I'll happily pay for the extra disk space if it means I don't have to spend half an hour setting up a new GPU machine every time I need to deploy a GPU container.

Join our mailing list for updates, free code, and more!

