From Django to Serverless

A modernization story, and what we learned

Nicolò Gasparini
Feb 24 · 6 min read

This is the story of how an application originally developed the “old-fashioned” way was ported to a completely serverless architecture, as it arguably should have been from the beginning.

Before you start reading, I want to make clear that not every Django application can be shifted to serverless: it takes specific use cases, or you can port just a part of it if you think it’s worth the effort. Our original Django application was particularly suitable because it had no front end, no user login, and only ten or so APIs.

The original architecture

Like many software companies out there, we have tools, languages, and frameworks we are used to, and we tend to over-use them for everything.
For us that was Django: an incredible framework for building web applications, with many strengths, its ORM (Object Relational Mapper) to cite one.

But you might realize, especially as you learn about new technologies, that a particular project where you used Django didn’t actually need any of its features.

Our application was very simple: a sort of middleware that would collect tasks to do from many different clients. It would then connect to an external service that would send back the results.
The steps involved are summed up in the picture below:

The older, Django-centric, architecture

This worked fine for a couple of years, until the requests from the clients became too many. Our Django app became the bottleneck: it wasn’t capable of scaling up.


Approaching the revolution

By then it was clear that Django probably wasn’t the best choice. We wanted to try out the serverless paradigm, and this was the perfect opportunity!

If you need scalability, a managed, preferably serverless, approach is the way to go.

The serverless framework

We started out by searching for a framework that would help us begin the job in an organized and structured way. We chose Serverless.com because of its popularity and ease of use. Even though we encountered some bugs and problems, in 2019 it was already a stable and complete enough framework, and it is being updated constantly.

Core functionalities

The main job of the application was to read the tasks saved in the SQL database, interact with the external service, and store the results in a NoSQL database.

First of all, we moved task storage from SQL to AWS DynamoDB, with every row corresponding to a single task. This created a very simple finite state machine, with a STATUS field representing the state of the task, from RECEIVED, to IN_PROGRESS, to DONE.
Every task also had a Time to Live attribute, updated once the task was done, so that we would keep the result for a while without having to worry about deleting it when it was no longer needed.
To work with DynamoDB from Python, we used the PynamoDB library. It didn’t replace Django’s ORM, because of the different logic and syntax, but it helped us interact with the database.
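The task lifecycle described above can be sketched in plain Python. The state names come from the article; the helper functions and the seven-day retention default are illustrative assumptions, not the actual project code:

```python
import time
from typing import Optional

# Allowed transitions of the simple finite state machine:
# RECEIVED -> IN_PROGRESS -> DONE (terminal).
TRANSITIONS = {
    "RECEIVED": {"IN_PROGRESS"},
    "IN_PROGRESS": {"DONE"},
    "DONE": set(),
}

def advance(status: str, new_status: str) -> str:
    """Validate a state change before writing it back to the STATUS field."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status

def ttl_epoch(days: int = 7, now: Optional[float] = None) -> int:
    """DynamoDB's TTL feature expects an expiry time in epoch seconds;
    set this on the item once the task reaches DONE, and DynamoDB
    deletes the row for you when the time passes."""
    base = time.time() if now is None else now
    return int(base + days * 86400)
```

With PynamoDB, STATUS and the TTL would simply be attributes on a `Model` subclass, and `advance` would run before each `save()`.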

Next, we moved every Django command, function, and core functionality into separate Lambdas. The most important thing was to apply the KISS principle (Keep It Simple, Stupid): every Lambda performs one action only, be it reading from the database, handling a request from a client, and so on. Everything must be separated, and every Lambda can call other Lambdas if it needs to perform multiple operations. Also, whenever possible, call Lambdas asynchronously, so that you don’t need to wait for the response.
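With boto3, the asynchronous call mentioned above comes down to invoking with `InvocationType="Event"`, which returns immediately instead of waiting for the other Lambda’s result. A minimal sketch (the function name and payload are hypothetical):

```python
import json

def build_invoke_args(function_name: str, payload: dict) -> dict:
    """Arguments for lambda_client.invoke(); InvocationType="Event"
    makes the call fire-and-forget instead of request/response."""
    return {
        "FunctionName": function_name,
        "InvocationType": "Event",  # asynchronous: don't wait for the response
        "Payload": json.dumps(payload).encode("utf-8"),
    }

def invoke_async(function_name: str, payload: dict) -> None:
    # boto3 is imported lazily so the sketch runs without AWS configured.
    import boto3
    boto3.client("lambda").invoke(**build_invoke_args(function_name, payload))
```

A synchronous call would instead use `InvocationType="RequestResponse"` and read the returned `Payload`.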

This also gave us an opportunity to do some code refactoring, improving the existing code and, most importantly, removing every heavy, overkill external library that had been used and abused, such as Pandas. We opted for a cleaner, basic approach, using the Python standard library whenever possible.

Rest APIs

After the core functionality was moved, only the APIs that the clients and the external service used to call our application remained to be transferred.
In the old application we used Django REST Framework for this, with every URL corresponding to a Django view. We simply needed to move every view into a separate Lambda, and change every bit of code that used old methods, such as database reads, into a call to the Lambda dedicated to that task.
To avoid code duplication, we created a shared module for the code that every Lambda function needs access to, which could then easily be uploaded as a common Lambda layer. I recommend you take a look at the packaging documentation from Serverless.
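Behind API Gateway, the Lambda that replaces a Django view receives the HTTP request as a proxy-integration event and returns a status code and body. A sketch of what one such view-turned-handler might look like (the "create task" endpoint and its fields are hypothetical):

```python
import json

def create_task_handler(event, context):
    """Lambda equivalent of a Django view: parse the request body from
    the API Gateway event and answer with statusCode + JSON body."""
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid JSON"})}
    if "payload" not in body:
        return {"statusCode": 400, "body": json.dumps({"error": "missing payload"})}
    # Rather than touching the database here, a real handler would delegate
    # to the Lambda dedicated to writing tasks (KISS: one action per Lambda).
    task = {"status": "RECEIVED", "payload": body["payload"]}
    return {"statusCode": 201, "body": json.dumps(task)}
```

Each former URL gets its own handler like this, wired to a route in the Serverless configuration.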

Monitoring

Now that every bit of code had been moved online, only accessory functions remained.
For monitoring purposes, the old server ran some cron commands that would tell us how many tasks were running, which state they were in, and more; we moved each of these metrics to AWS CloudWatch. Many metrics were actually available by default, such as the number of tasks stored in DynamoDB, the number of client invocations (derived from the number of times a Lambda was called), or even the number of errors that occurred.
For metrics that weren’t already provided by the AWS services, we set up another Lambda, scheduled to run periodically, that would collect these metrics and push them to CloudWatch.
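The scheduled metrics Lambda boils down to a call to CloudWatch’s `put_metric_data`. A sketch, assuming a per-state task count; the namespace and metric names are hypothetical:

```python
import datetime

def metric_datum(name: str, value: float, unit: str = "Count") -> dict:
    """One entry for the MetricData list that put_metric_data expects."""
    return {
        "MetricName": name,
        "Value": value,
        "Unit": unit,
        "Timestamp": datetime.datetime.utcnow(),
    }

def push_task_metrics(counts_by_status: dict) -> None:
    """Body of the scheduled Lambda: publish one custom metric per task state,
    e.g. {"Received": 12, "InProgress": 3, "Done": 240}."""
    import boto3  # lazy import: the sketch stays runnable without AWS configured
    boto3.client("cloudwatch").put_metric_data(
        Namespace="TaskMiddleware",  # hypothetical namespace
        MetricData=[metric_datum(f"Tasks{status}", n)
                    for status, n in counts_by_status.items()],
    )
```

The schedule itself can be declared as a `rate(...)` or `cron(...)` event on the function in the Serverless configuration.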

A part of the Cloudwatch dashboard to keep track of the system

The new architecture

A newer, serverless architecture

After everything had been moved and deployed to production, we had a situation similar to the one in the picture.

The architecture remained similar, but every task handled by Django commands and APIs had been divided into multiple Lambdas that called each other, either directly, through DynamoDB streams, or via AWS SNS (Simple Notification Service).

SNS was used as a publish/subscribe mechanism, where every action to perform was enqueued in a topic and consumed by the appropriate Lambda.
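The two sides of that publish/subscribe pattern are small: the producer publishes a JSON message to a topic, and the subscribed Lambda unwraps it from the event’s `Records`. A sketch, with a hypothetical action payload:

```python
import json

def parse_sns_event(event: dict) -> list:
    """Consumer side: extract the published payloads from the event a
    subscribed Lambda receives; each record wraps the message as a JSON string
    under Records[i]["Sns"]["Message"]."""
    return [json.loads(r["Sns"]["Message"]) for r in event.get("Records", [])]

def publish_action(topic_arn: str, action: dict) -> None:
    """Producer side: enqueue an action on a topic for the appropriate
    Lambda to consume. The payload shape is a hypothetical example."""
    import boto3  # lazy import: keeps the sketch runnable without AWS configured
    boto3.client("sns").publish(TopicArn=topic_arn, Message=json.dumps(action))
```

Subscribing a Lambda to the topic is then just an `sns` event on the function in the Serverless configuration.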


What we learned

After the whole project rolled out to production we had some minor bugs to fix, but the application went live pretty seamlessly. The cost was just a bit less than what we had been spending on servers, and the application could finally scale as much as needed.

Here are some of the lessons we learned along the way, which I hope will help others approaching the serverless world for the first time:


A special thanks to Federico Oppi, who worked with me on the project and revised this article.


Written by Nicolò Gasparini
Software Engineer and full stack developer 💻 based in Italy — /in/nicologasparini/
