This article is part of a series:
- Part One — review your existing application (Wed. 09/18)
- Part Two — break apart your application (Fri. 09/20)
- Part Three — lift-and-shift (Wed. 09/25)
- Part Four — choose a deployment framework (Fri. 09/27)
- Part Five — build a strong foundation (Fri. 10/11)
In this series, we look at the areas you need to think about when making a serverless migration. We start by reviewing your existing application, then move on to breaking your application apart, lift-and-shift, choosing a cloud provider and deployment framework, and building a strong foundation.
Related reading before diving into this:
- Serverless Impact — What is serverless, less overhead
Let’s start the series off by taking a look into how your existing application is setup.
Review your existing application
We want to start off by reviewing how your application is configured and currently running. This is to determine how ephemeral it already is and to see what needs to be refactored and what doesn't.
When working with serverless, we are operating at a level where all transactions through our system should be ephemeral, meaning we keep no state on the server running our code, e.g. we never save files onto its local disk. Instead, we rely on other services to handle those operations. Let's look at some options.
If your application code is writing files to, or reading them directly from, the server, this is going to need to change. When you leverage cloud functions instead of a traditional virtual machine or container, the server hosting your cloud function and your application code will at some point shut down, taking its local disk with it.
Therefore, we want to create a system which stores files outside of the server running our application code. This ensures nothing is lost when an instance disappears and sets us up to make the move to serverless.
Every cloud provider has some type of cloud storage, which will be sufficient for storing images, documents, etc. that your application generates.
If you were relying on a local cache on your virtual machine or container, then you will need to find an alternative datastore, one that works with your cloud function and keeps track of whatever you need across invocations.
Here are some options if you’re on AWS. We talk about databases in a bit more detail further down in the article as well.
- AWS ElastiCache — Fully managed Redis or Memcached
- AWS DynamoDB — Fully managed NoSQL Database
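One way to prepare for that swap is to hide the cache behind a small interface so the backend can change without touching your business logic. This is only a sketch, the class names are our own, and the in-memory backend is a stand-in you would replace with a Redis (ElastiCache) or DynamoDB-backed implementation:

```python
import time


class CacheBackend:
    """Minimal cache interface: the rest of the code doesn't care whether
    the backend is in-memory, ElastiCache, or DynamoDB."""

    def get(self, key):
        raise NotImplementedError

    def set(self, key, value, ttl_seconds=300):
        raise NotImplementedError


class InMemoryCache(CacheBackend):
    """Stand-in for local development and tests. In production you would
    swap in an implementation backed by a shared datastore, since local
    memory does not survive across cloud function invocations."""

    def __init__(self):
        self._store = {}

    def get(self, key):
        value, expires = self._store.get(key, (None, 0))
        if expires and time.time() > expires:
            self._store.pop(key, None)  # evict expired entries lazily
            return None
        return value

    def set(self, key, value, ttl_seconds=300):
        self._store[key] = (value, time.time() + ttl_seconds)
```

The TTL mirrors what you would configure in Redis (`EXPIRE`) or DynamoDB (a TTL attribute), so swapping backends later doesn't change calling code.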
When it comes to working within this new environment, you’re going to need to adjust the top layer of your API code. You can think about this as the entry point to all the rest of your functionality.
Because your cloud function is hosted and fully managed by a cloud provider, you need to draw within the lines. Your cloud function can be invoked in numerous different ways, but the request will always arrive with the same core arguments. This standardization is great, but it's something that needs to be designed for.
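As a rough sketch of such an entry point, here is what a Python Lambda handler behind an API Gateway proxy integration might look like. `create_item` is a hypothetical stand-in for the business logic you ported from your existing application:

```python
import json


def handler(event, context):
    """Lambda entry point. However the function is invoked, AWS hands it
    the same two arguments: an `event` dict describing the request and a
    `context` object with runtime metadata."""
    # For an API Gateway proxy integration, the HTTP details live on the event.
    method = event.get("httpMethod", "")
    body = json.loads(event["body"]) if event.get("body") else {}

    if method == "POST":
        result = create_item(body)
        return {"statusCode": 201, "body": json.dumps(result)}
    return {"statusCode": 405, "body": json.dumps({"error": "method not allowed"})}


def create_item(payload):
    """Hypothetical business logic; the code you already have lives
    behind the entry point, largely unchanged."""
    return {"created": True, "item": payload}
```

Keeping the handler thin, just translating the standardized event into calls to your existing functions, is what makes the rest of your code portable.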
Scaling, avoiding bottlenecks
When moving to serverless, you want to make sure that all layers of your new architecture can scale to the same degree as your cloud functions.
What can (and does) easily happen when companies migrate to serverless is that their supporting services end up becoming bottlenecks, and they run into scaling issues.
If you are leveraging cloud functions for your compute layer but still depend on services which cannot match that horizontal scaling, you can quickly run into issues.
For example, if your company hand-rolled its own queuing system, you could now make the move to AWS SQS and upgrade your hand-stitched running shoes for some fresh new Nikes.
Reduce execution time
If you’re already familiar with cloud functions then you most likely know that “long-running” processes are not the best fit. At least on the surface.
For example, AWS Lambda has a max timeout of 15 minutes, which can be restrictive and can make companies hesitate to move to serverless because they have APIs which run for 30+ minutes.
However, it's possible to make even a 30+ minute process work with cloud functions. For example, you could break a long-running process into smaller chunks and leverage a queue service like AWS SQS, reducing the execution time to a minute or less per invocation, spread across multiple invocations.
In the next article in this series, we dive into the topic, “breaking apart your application”, and we go into more detail on how you could break apart a long-running process.
Leverage fully managed services
An area which provides immediate benefit when migrating to serverless is the ability to now tap into fully managed services and keep the focus on the product you’re building.
If you have already been running on the cloud, then this may be less relevant. However, if you are coming from on-premises and making the leap straight to serverless, you now have the option to throw away some fragile code. Too often, companies migrating from on-premises to the cloud carry over fragile code or internally created services into the new environment, rather than investing time in alternatives which would ultimately save money and time over the long run.
In our experience, most of the use-cases we've come across align with some kind of fully managed service from one of the major cloud providers.
Fully managed machine learning
Let's take a fun example. If your application handles computer vision, there are lots of options across cloud providers to handle these use-cases.
- AWS Rekognition — simple API, no ML knowledge required
- Google Vision API — simple API, no ML knowledge required
The fully-managed services listed above handle object detection, face detection, text in images, and a few other use-cases without needing to know anything about machine learning.
A more practical example would be the database backing your application. Moving away from a hand-spun MySQL database to something like AWS RDS MySQL will result in a large reduction in overhead and ultimately TCO (Total Cost of Ownership).
Fully managed NoSQL database
On AWS, the fully managed option for MongoDB workloads is AWS DocumentDB, which offers MongoDB compatibility without the operational burden. However, there are some downsides, the biggest one being cost. Although the service is "fully managed", if you're a smaller company or a startup, you could easily get by with a far less powerful database than the DocumentDB minimum size, which comes out to around $200/mo.
If you're looking to move away from MongoDB entirely, the first alternative we would suggest on AWS is DynamoDB; on GCP, we would suggest Firestore. The great thing about Firestore is that it uses the same collection/document terminology as MongoDB, making it an easier transition.
At Serverless Guru, we typically lean towards DynamoDB as our go-to NoSQL fully managed database when building serverless applications.
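As an illustration, a common DynamoDB approach is a single-table design with composite keys. The `USER#`/`ORDER#` prefixes below are our own hypothetical convention, not something DynamoDB mandates:

```python
def user_order_keys(user_id: str, order_id: str) -> dict:
    """Composite keys for a single-table design: every order for a user
    shares one partition key (pk), so a single Query on pk returns all
    of that user's orders, sorted by the sort key (sk)."""
    return {"pk": f"USER#{user_id}", "sk": f"ORDER#{order_id}"}


def order_item(user_id: str, order_id: str, total_cents: int) -> dict:
    """Full item as it would be written with put_item; any extra
    attributes simply ride along with the keys."""
    item = user_order_keys(user_id, order_id)
    item["total_cents"] = total_cents
    return item
```

Designing keys around your access patterns up front is the main mental shift when moving from MongoDB-style queries to DynamoDB.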
What doesn’t need to be refactored
Although there are many moving parts which need to be thought about from a different perspective, you don’t have to throw everything away. The logic and functions which have been driving your application thus far can mostly be ported.
There may be pieces you want to refactor now that you can hook into cloud services directly from your cloud functions, but it's not a requirement.
In the next article, we will talk in more detail about this scenario and go through how you can break apart a 30+ minute process to cover your edge cases while doing a serverless migration.
Further reading:
- Industry Predictions for 2019
- Best Practices for Serverless Development
- Serverless CI/CD
- Serverless Web Applications — AWS v GCP
- Serverless Impact, Developer Velocity
- Guide, First Serverless Project
What does Serverless Guru do?
Serverless Guru helps companies build scalable and cost-effective applications on the cloud. We train companies on how to leverage IaC (Infrastructure as Code), serverless, and cloud-native services. We migrate existing applications to the cloud and optimize applications already on the cloud to be more cost-effective. We are a Serverless development partner and an AWS Consulting partner.
What did we miss?
Let us know by commenting below or tweeting your answer to @serverlessgurux on Twitter.
Founder & CEO — Serverless Guru
LinkedIn — @ryanjonesirl
Twitter — @ryanjonesirl
Thanks for reading 😃