DOOM On Demand: Running Graphical Containers in AWS

Mike Gray
Jul 8, 2019

I’ve been obsessed with the idea of running Docker containers that have a GUI for a few months, but didn’t have the time to dig deeper. Recently I had some free time, so I used it to do some research and have some fun. To test and develop this little solution I used a classic game (and one of my favorites): id Software’s Doom.

This article will step you through the process I took to accomplish this and lay the foundation for your own Doom-as-a-Service offering, if you so desire. You’ll need a few prerequisites:

  1. Docker, and a container registry such as Docker Hub
  2. An AWS account (everything here should be within the limits of a free tier account)
  3. The Doom binaries and config files
  4. Some HTML/JavaScript/jQuery/Python skills

Docker Container

First things first: how do we get this thing to run in a container? I tried a few different methods, but the easiest was based on this article: https://www.wintellect.com/docker-fueled-nostalgia-building-a-retro-gaming-rig-on-kubernetes/

It requires that you have the Doom binaries and config files in a folder called “doom” in the same directory as your Dockerfile. I’m not going to provide those — hopefully you already have a legally purchased copy to use. Note that the default user and password are not production-ready, but for our POC purposes, they are fine. I also had to tweak the dosbox.conf file in order to get it to automatically run Doom, since I didn’t want to hassle with executing it from the virtual DOS command line in my browser. Those days are long gone!
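
What those tweaks amount to (they appear as echo commands in the Dockerfile) is appending a few lines to the end of the generated dosbox.conf, which lands them in the [autoexec] section that DOSBox runs at startup. The result looks roughly like this:

[autoexec]
# Appended by the Dockerfile; DOSBox executes these commands at startup
MOUNT C: /dos
C:
CD DOOM
DOOM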

The functional Dockerfile is below:

FROM ubuntu:18.04

ENV USER=root
ENV PASSWORD=password1
ENV DEBIAN_FRONTEND=noninteractive
ENV DEBCONF_NONINTERACTIVE_SEEN=true

COPY doom /dos/doom

RUN apt-get update && \
    echo "tzdata tzdata/Areas select America" > ~/tx.txt && \
    echo "tzdata tzdata/Zones/America select Los_Angeles" >> ~/tx.txt && \
    debconf-set-selections ~/tx.txt && \
    apt-get install -y tightvncserver ratpoison dosbox novnc websockify

RUN mkdir ~/.vnc/ && \
    mkdir ~/.dosbox && \
    echo $PASSWORD | vncpasswd -f > ~/.vnc/passwd && \
    chmod 0600 ~/.vnc/passwd

RUN echo "set border 0" > ~/.ratpoisonrc && \
    echo "exec dosbox -conf ~/.dosbox/dosbox.conf -fullscreen" >> ~/.ratpoisonrc && \
    export DOSCONF=$(dosbox -printconf) && \
    cp $DOSCONF ~/.dosbox/dosbox.conf && \
    echo MOUNT C: /dos >> ~/.dosbox/dosbox.conf && \
    echo C: >> ~/.dosbox/dosbox.conf && \
    echo CD DOOM >> ~/.dosbox/dosbox.conf && \
    echo DOOM >> ~/.dosbox/dosbox.conf && \
    sed -i 's/usescancodes=true/usescancodes=false/' ~/.dosbox/dosbox.conf && \
    openssl req -x509 -nodes -newkey rsa:2048 -keyout ~/novnc.pem -out ~/novnc.pem -days 365 -subj "/C=US/ST=NY/L=NY/O=NY/OU=NY/CN=NY emailAddress=email@example.com"

EXPOSE 6080

CMD vncserver && websockify -D --web=/usr/share/novnc/ --cert=~/novnc.pem 6080 localhost:5901 && tail -f /dev/null

Once I saved that as Dockerfile in the directory containing my “doom” folder, I used the following commands to build and run it locally:

docker build . -t docker-doom
docker run -p 6080:6080 --name docker-doom docker-doom

At this point, you can browse to http://localhost:6080/vnc.html and enter the password set in the Dockerfile (password1) to load it up. The game runs!
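
Later on, an EC2 instance will pull this image from Docker Hub, so at some point you’ll want to push it there. A rough sketch (me is a placeholder for your own Docker Hub account):

docker tag docker-doom me/docker-doom:latest
docker login
docker push me/docker-doom:latest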

Where To Run It

After getting the Docker container running locally, I realized that I wasn’t quite happy. I wanted to take it several steps further and create a push-button deployment of Doom that would run in my browser. I also wanted to try to do it serverless, wherever possible. Since I use AWS extensively for work, I looked there for answers.

My initial plan was to store the container in Elastic Container Registry (ECR) and run it in Fargate, but I didn’t want to incur the charges, reasonable though they may be. I decided to see if I could run the container on a t2.micro Elastic Compute Cloud (EC2) instance instead. Using Amazon Linux, it was indeed possible to run a Docker container, although the performance suffered on such an underpowered machine. I highly recommend using a larger instance if you actually want to play the game for any length of time. For the purposes of a POC, however, t2.micro will work just fine.

AWS EC2/Lambda/API Gateway

Now that I knew EC2 would work, I wanted to set up an API endpoint that I could trigger from a button in a web front end. For that, I used AWS Lambda in conjunction with AWS API Gateway. I created an IAM role called Doom that uses the AWS managed policy AmazonEC2FullAccess. I could have used a more restrictive policy to follow the principle of least privilege, but again, this works for a POC.
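
If you do want something tighter, a policy scoped to the EC2 calls the Lambda below actually makes might look roughly like this untested sketch (several of these actions don’t support resource-level restrictions, hence the wildcard, and you’d still attach the usual Lambda logging permissions):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:RunInstances",
        "ec2:DescribeInstances",
        "ec2:DescribeSecurityGroups",
        "ec2:CreateSecurityGroup",
        "ec2:AuthorizeSecurityGroupIngress"
      ],
      "Resource": "*"
    }
  ]
}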

Next, I uploaded my container image to Docker Hub. Then, I created a Lambda Function using the Python 3.7 runtime and assigned it the Doom IAM role I just created. I kept the default lambda_function.py name and used the following code. YMMV, so feel free to edit for your particular use case:

"""Lambda to launch ec2-instances.
Template adapted from:
https://medium.com/tomincode/launching-ec2-instances-from-lambda-4a96f1264afb
"""
import boto3
import json
import time
from collections import namedtuple
REGION = 'us-east-2' # region to launch instance.
AMI = 'ami-0d8f6eb4f641ef691'
# matching region/setup amazon linux ami, as per:
# https://aws.amazon.com/amazon-linux-ami/
INSTANCE_TYPE = 't2.micro' # instance type to launch.
EC2 = boto3.client('ec2', region_name=REGION)def lambda_function(event, context):
""" Lambda handler that spins up an instance of Doom """
# bash script to run:
# Update packages
# Install Docker
# Add ec2-user to the docker group
# Pull down the Doom image and run it
# Set to shutdown the instance in 60 minutes.
init_script = """#!/bin/bash
sudo yum update -y
sudo yum install docker -y
sudo systemctl start docker
sudo gpasswd -a ec2-user docker
sudo docker pull me/docker-doom:latest # Change to your Docker image
sudo shutdown -h +60
sudo docker run -p 6080:6080 me/docker-doom
"""

SecurityGroupRule = namedtuple("SecurityGroupRule", ["ip_protocol", "from_port", "to_port", "cidr_ip", "src_group_name"])
doom_rule = SecurityGroupRule("tcp", 6080, 6080, "0.0.0.0/0", "doom")
print('Running script:')
print(init_script)
try:
doom_security_group = EC2.describe_security_groups(GroupNames=["doom"])
if doom_security_group['SecurityGroups'][0]['IpPermissions'][0]['FromPort'] != 6080:
print("Adding TCP/6080 to ingress rules.")
EC2.authorize_security_group_ingress(IpProtocol=doom_rule.ip_protocol,
FromPort=doom_rule.from_port,
ToPort=doom_rule.to_port,
CidrIp=doom_rule.cidr_ip,
GroupName=doom_rule.src_group_name)
except boto3.exceptions.botocore.client.ClientError:
print("Security group for Doom does not exist. Creating group.")
doom_security_group = EC2.create_security_group(GroupName="doom",Description="Doom VNC Ingress")
EC2.authorize_security_group_ingress(IpProtocol=doom_rule.ip_protocol,
FromPort=doom_rule.from_port,
ToPort=doom_rule.to_port,
CidrIp=doom_rule.cidr_ip,
GroupName=doom_rule.src_group_name)
instance = EC2.run_instances(
ImageId=AMI,
InstanceType=INSTANCE_TYPE,
MinCount=1, # required by boto, even though it's kinda obvious.
MaxCount=1,
InstanceInitiatedShutdownBehavior='terminate',
UserData=init_script, # file to run on instance init.
SecurityGroups=['doom'],
KeyName='doom'
)
print("New instance created.")
instance_id = instance['Instances'][0]['InstanceId']
print(instance_id)
data = EC2.describe_instances(InstanceIds=[instance_id])
while data['Reservations'][0]['Instances'][0]['PublicDnsName'] == '':
time.sleep(5)
data = EC2.describe_instances(InstanceIds=[instance_id])
public_dns = data['Reservations'][0]['Instances'][0]['PublicDnsName']
print("Public DNS address: {}".format(public_dns))
new_instance = {'statusCode': 200,
'headers': {'Access-Control-Allow-Origin' : "*"},
'instanceId': instance_id,
'PublicDnsName': public_dns}
return new_instance
# this just takes whatever is sent to the api gateway and sends it backdef lambda_handler(event, context):
try:
return lambda_function(event, 200)
except Exception as e:
return 'Error ' + e, 400

I had previously created a key pair called doom, which this Lambda script references as KeyName, so make sure you have one as well or change that value to your own key pair. Just in case my public endpoint got out, I wanted provisions to automatically remove these instances over time: I set the shutdown behavior to terminate and scheduled sudo shutdown -h +60, so the instance shuts down (and thus terminates) after an hour of playtime. I also didn’t want to hassle with security groups, so the Lambda code creates a Security Group called doom with inbound TCP port 6080 open, if it doesn’t already exist.
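
If you don’t already have a key pair named doom, one quick way to create it (assuming the AWS CLI is configured) is:

aws ec2 create-key-pair --key-name doom --region us-east-2 --query 'KeyMaterial' --output text > doom.pem
chmod 400 doom.pem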

Finally, I created an API Gateway that has CORS enabled. In a production environment I would lock that down to a specific domain, but for this POC I enabled it universally. It’s a simple gateway with just a / endpoint that responds to a GET request. The specific configuration of an API Gateway is out of scope for this article, but the AWS documentation is quite good, and Lambda offers wizard-driven API Gateway creation.
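
Before wiring up the front end, it’s worth a smoke test with curl (substitute your own invoke URL). Assuming a plain Lambda integration that returns the function’s result as the response body, you should get back JSON containing instanceId and PublicDnsName after a short wait while the Lambda polls for the public DNS name:

curl https://your-api-gateway-endpoint.us-east-2.amazonaws.com/doom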

The Front End in S3

Now that I had a functioning way to spin up a Doom container on demand, it was time to create the webpage to do it. I started with a very bare-bones index.html:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Play Doom</title>
  </head>
  <body>
    <div class="container" id="container">
      <button type="submit" class="btn btn-primary btn-md" id="getDoom">Spin up Doom</button>
    </div>
    <script src="https://code.jquery.com/jquery-2.2.4.min.js"></script>
    <script src="scripts.js"></script>
  </body>
</html>

Next, a little bit of JavaScript and jQuery in scripts.js to call the API; the Lambda only returns once the EC2 instance has a public DNS name assigned, which we then turn into a link:

const container = document.getElementById('container')

function getDoom() {
  const header = document.createElement('h1')
  header.textContent = "Spinning up instance, please wait..."
  container.appendChild(header)
  $.ajax({
    url: 'https://your-api-gateway-endpoint.us-east-2.amazonaws.com/doom',
    contentType: "application/json",
    dataType: 'json',
    success: function(result){
      console.log(result)
      // Create a div with a card class
      const card = document.createElement('div')
      card.setAttribute('class', 'card')
      // Create a header
      const h1 = document.createElement('h1')
      h1.textContent = "Doom Link"
      // Create the text with the appropriate link
      const a = document.createElement('a')
      const link = "http://" + result['PublicDnsName'] + ':6080/vnc.html'
      a.setAttribute('href', link)
      a.innerHTML = link
      // Append the card to the container element
      container.appendChild(card)
      // Each card contains an h1 and an a
      card.appendChild(h1)
      card.appendChild(a)
    }
  })
}

$("#getDoom").click(function(){getDoom();});

It’s not the prettiest site I’ve ever written, but it does spin up Doom as many times as you request it. Don’t forget to change the $.ajax() url value to your own API Gateway endpoint!
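
To serve those two files from S3, enable static website hosting on a bucket and sync them up. A rough AWS CLI sketch (my-doom-frontend is a placeholder, bucket names must be globally unique, and depending on your account’s public-access settings you may also need a bucket policy):

aws s3 mb s3://my-doom-frontend
aws s3 website s3://my-doom-frontend --index-document index.html
aws s3 sync . s3://my-doom-frontend --acl public-read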

Where To Go From Here?

Besides the obvious need to write a prettier front end (this is 2019, after all), there are a few next steps to explore for the enterprising cloud gamer:

  1. Use a more performant EC2 instance type, or switch to AWS Fargate to spin up Doom containers on demand.
  2. Leverage Docker volume mounts and Elastic Block Storage (EBS) and/or S3 buckets to allow for save games (see the sketch after this list). As it stands, you have to start all over again each hour.
  3. Try it out using another classic game, like Commander Keen (Love me some Keen!)
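
For the save-game idea, one hedged sketch: keep /dos/doom in a named Docker volume so the DOOMSAV*.DSG files Doom writes survive container restarts (Docker copies the image’s /dos/doom contents into an empty named volume on first use). Surviving instance termination would additionally require syncing that data to S3 or keeping it on a reattached EBS volume.

docker volume create doom-saves
docker run -p 6080:6080 -v doom-saves:/dos/doom --name docker-doom me/docker-doom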

Happy gaming!

Written by Mike Gray

Mike is a cloud consultant, a father, and a musician. All opinions are my own, not those of my employer.
