An AWS GameDay experience

Keir Williams
Version 1
May 16, 2023

Members of Version 1’s AWS DevOps team recently took part in an AWS GameDay. As one of the participating team members, I’ll be sharing our experience.

The scenario

AWS introduced a fictional company called Unicorn Rentals.

Unicorn Rentals wants to migrate from its legacy services to something faster and more reliable. Its Chief Technology Officer (CTO) read about microservices and decided that this was the direction the company should go in. To prevent any downtime, they adopted a microservices mesh, in which separate DevOps teams run the same sets of microservices. The catch is that their entire DevOps team quit last week. Unicorn Rentals is relying on its new hires: us!

The setup

AWS provided participants with access to an AWS Management Console, fully provisioned by AWS with all the simulated services. The infrastructure included services such as the following (a rough sketch of how some of them fit together appears after the list):

  • API Gateway
  • CloudFormation
  • DynamoDB
  • Elastic Beanstalk
  • Elastic Compute Cloud (EC2) with Auto Scaling
  • Elastic Container Service (ECS) with Fargate
  • Elastic Load Balancing
  • Lambda
  • Serverless Application Model (SAM)

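To give a feel for how some of these building blocks compose (API Gateway in front of a Lambda function backed by DynamoDB, the kind of stack SAM deploys), here is a minimal, hypothetical handler. The table, environment variable, and field names are illustrative assumptions, not taken from the GameDay environment.

```python
# Hypothetical sketch: a Lambda-backed microservice behind API Gateway,
# reading from DynamoDB. Names are illustrative, not from the GameDay.
import json
import os

import boto3

# In a SAM/CloudFormation template the table name would be injected as an
# environment variable; "unicorns" is a placeholder default.
TABLE_NAME = os.environ.get("UNICORNS_TABLE", "unicorns")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(TABLE_NAME)


def handler(event, context):
    """Return a single record for a GET /unicorns/{id} proxy integration."""
    unicorn_id = event["pathParameters"]["id"]
    response = table.get_item(Key={"id": unicorn_id})
    item = response.get("Item")

    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}

    return {"statusCode": 200, "body": json.dumps(item)}
```
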
Alongside the infrastructure, AWS also provided collaboration tools. Participants were briefed via Cisco Webex, then handed over to Amazon Chime, AWS’s own cloud video conferencing service, with a dedicated room for our team to work in together. AWS representatives could drop into this room at any time to check up on us and help if we needed it.

AWS also provided a live scoreboard for every participating team, so you could see how you were doing. This included a “trend” indicator: a positive or negative figure reflecting your current average rate of point accumulation.

All in all, AWS provided a well-rounded set-up. Little else was required of us beyond a local Linux environment.

The games

The games started with a ready, steady, go and a README. Our team began by reading through the README together and testing the various microservices. Once we understood how the infrastructure was organised, we moved on to calling the other teams’ microservices.
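
In practice, exercising another team’s microservice came down to making HTTP requests against its endpoint. The snippet below is a hypothetical illustration; the URL, parameters, and response format are assumptions rather than the GameDay’s actual API.

```python
# Hypothetical sketch of calling another team's microservice endpoint.
# The endpoint URL and query parameter are illustrative placeholders.
import requests

ENDPOINT = "http://example-team-alb.eu-west-1.elb.amazonaws.com/unicorn"

resp = requests.get(ENDPOINT, params={"id": "42"}, timeout=2)

# Latency and accuracy both fed into scoring, so both are worth watching.
print(resp.status_code, resp.elapsed.total_seconds())
print(resp.json())
```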

AWS calculated scores based on the accuracy of our responses, the speed of the responses (latency), and the number of different teams’ services we called. Negative points were also possible if your infrastructure started failing.

Several unforeseen issues cropped up during the games. These included having to patch the microservices and the service router, launch templates that needed amendments and then reversions, and problems with security groups. AWS injected these faults throughout the day, so we had to keep an eye on our dashboards because they could strike at any time.
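
For example, a broken security group rule of the kind AWS injected could usually be put right with a small, targeted change. The sketch below is a hypothetical remediation using boto3; the group ID and port are placeholders, not the exact fix we applied on the day.

```python
# Hypothetical sketch: restoring an ingress rule that an injected fault removed.
# The security group ID and port are illustrative placeholders.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "service traffic"}],
    }],
)
```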

Our thoughts

Let’s start with what we felt could be improved. AWS could have sent the introductory content a day earlier: we spent the beginning of the GameDay getting the video conferencing working and setting up our environments, so we lost time on set-up. We also found Chime unreliable for screen sharing, with shares sometimes failing for recipients, although we can’t be sure whether that was down to Chime or our own set-ups.

We found the GameDay to be a unique experience with interesting challenges. One such challenge was balancing optimisation of the load balancer and service router against the latency and accuracy of our own microservices. There was little emphasis on the development side of the infrastructure; it was more about troubleshooting production issues. That is what we’d expect from a game day, whose whole purpose is to test a team’s ability to respond to issues as they arise and mitigate them effectively. The AWS GameDay gamifies practically applicable training, which also makes it a great team-building exercise.

We found the AWS GameDay a great experience and we would be keen to take part again.

About the author:
Keir Williams is an Associate AWS DevOps Engineer here at Version 1.
