Moving some or all of your infrastructure to a serverless stack has many benefits. A few of them include:
- No server management
- Scalability built in
- Fast response times
- Fine-grained segmentation of code and responsibility
While it solves many problems found in traditional server-based architectures, the serverless world is not devoid of trade-offs. The following is a list of considerations you should weigh carefully before moving large portions of your application to a serverless architecture.
1. Infrastructure setup
It should come as no surprise that serverless infrastructure takes some attention and care to manage. For smaller applications or one-off tasks that aren't mission critical, you can most likely get away with setting up the infrastructure manually. For anything more complicated, or for applications where downtime has business or monetary implications, you will want to automate infrastructure changes so they can be replicated in each environment through traditional build pipelines.
There are a few different applications that can manage this for you, but the two most common are the Serverless Framework and Terraform. Of the two I personally prefer Terraform, but either will do the job if you put up with their respective nuances. These tools allow you to write your infrastructure stack as configuration files and then run them for each environment via the command line. They handle all the API interactions to set up your infrastructure in your provider of choice and allow you to commit your infrastructure to source control to be reviewed, which is invaluable.
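To make the per-environment configuration idea concrete, here is a minimal sketch (the environment names and variable names are hypothetical, not tied to any real project) of rendering Terraform-style variable files from plain Python dicts, so the settings for each environment live in source control and can be peer reviewed:

```python
# Illustrative sketch: render per-environment Terraform variable files
# (.tfvars) from plain Python dicts. All names and values here are
# hypothetical examples.

ENVIRONMENTS = {
    "staging": {"lambda_memory_mb": 256, "log_retention_days": 14},
    "production": {"lambda_memory_mb": 1024, "log_retention_days": 90},
}

def render_tfvars(settings: dict) -> str:
    """Render a dict as HCL-style `key = value` lines."""
    lines = []
    for key, value in sorted(settings.items()):
        if isinstance(value, str):
            lines.append(f'{key} = "{value}"')
        else:
            lines.append(f"{key} = {value}")
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    for env, settings in ENVIRONMENTS.items():
        print(f"# {env}.tfvars")
        print(render_tfvars(settings))
```

In practice you would commit the rendered files (or the equivalent hand-written ones) and let the pipeline run the infrastructure tool against the matching environment.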
2. Developers will need to devop
Traditional techops roles remain highly critical to the success of serverless applications. What tends to change, especially when using the infrastructure management tools listed above, is that developers will have to understand how to write service configurations at a high level, since they will need to deploy these services to a development environment to test them out prior to an official deploy. This doesn't mean the developer will have to know all the configuration settings, but they should be able to write a basic setup on their own.
Once the initial setup has been tested, the developer hands the pull request off to a techops team member so they can adjust, massage and fix the configuration settings so that staging and production environments will be set up correctly. Since the infrastructure configuration is checked into source control, this can easily happen through peer review and/or direct changes by the techops team member.
The blending of developer and techops roles should be seen as a positive. For techops it means a reduced workload: they are no longer responsible for defining what an application will need and can instead focus on what it will take to make that application run successfully in production. For developers it means a more in-depth understanding of what the infrastructure looks like and which settings are needed for successful deploys. Over time, as each person gets better at managing the configurations, there will be less and less work involved, and conversations between the two groups will become routine since both have a good understanding of which services are running. It will take time, however, so pad estimates accordingly and make time for personal exploration and learning.
3. Complicated deploys
Your company most likely has build pipelines that deploy to your various environments via a GUI or even through a chat bot, and these pipelines are a very effective way to manage deployments. With serverless setups, deployments become much more difficult to manage. Each service has a different level of support for deployment-related features such as canary deploys, blue/green, preview mode and stages. What this means for you is that, depending on the service you are deploying, the build itself may vary quite a bit from one service to another, and some may require completely custom scripts.
It will be critical that you develop a solid deployment strategy for each service and discuss rollback strategies in case a deployment goes awry. This should be happening at your organization currently but it will play a much more integral role as you bring on more and more services.
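To make the deployment-and-rollback discussion concrete, here is a minimal sketch of a canary rollout loop. The step weights, error threshold and metric callback are all hypothetical; in a real pipeline the callback would wrap your provider's traffic-shifting and monitoring APIs:

```python
# Hypothetical canary rollout sketch: shift traffic toward the new
# version in steps, checking an error-rate metric at each step and
# rolling back if it crosses a threshold.

from typing import Callable, List

def canary_rollout(
    steps: List[int],
    error_rate_for_weight: Callable[[int], float],
    max_error_rate: float = 0.01,
) -> bool:
    """Return True if the rollout completed, False if it rolled back."""
    for weight in steps:
        # In a real pipeline this would update traffic weights, then
        # wait for enough requests to measure the new version's health.
        rate = error_rate_for_weight(weight)
        if rate > max_error_rate:
            # Roll back: route all traffic to the old version again.
            return False
    return True

if __name__ == "__main__":
    healthy = canary_rollout([10, 25, 50, 100], lambda w: 0.001)
    print("completed" if healthy else "rolled back")
```

Whether a given service supports this pattern natively, needs a custom script like the above, or supports no canary at all is exactly the per-service variation discussed here.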
4. Development environments
Developers are used to being able to develop locally or on a shared development server. Serverless, by nature of being managed by another company, throws a wrench into that setup. Some services fully support local execution and some do not. AWS Lambda, for example, has a very nice companion project in LambCI's docker-lambda that allows you to execute your code locally from the command line, while Google has a robust emulator for testing full cycles locally. Similarly, there are mock applications for many AWS services, including DynamoDB and S3, and there are tools to mock Google BigQuery and Storage, so make sure to explore those. Other services, like AWS Elastic Transcoder, have no local equivalent, so testing applications that depend on them will be a challenge.
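Even without an emulator, a lot of handler logic can be exercised locally by substituting the cloud client with a test double. The sketch below is a minimal illustration (the handler, bucket name and event shape are all hypothetical) using only the standard library's mocking tools:

```python
# Sketch of local testing for a hypothetical Lambda-style handler that
# writes to S3. The client is replaced with a stdlib MagicMock, so the
# handler logic runs without AWS credentials or any emulator installed.

from unittest.mock import MagicMock

def handler(event, context, s3_client):
    """Hypothetical handler: persist the event body to a bucket."""
    s3_client.put_object(
        Bucket="example-bucket",  # assumed bucket name for illustration
        Key=event["id"],
        Body=event["body"].encode(),
    )
    return {"statusCode": 200}

if __name__ == "__main__":
    fake_s3 = MagicMock()
    result = handler({"id": "42", "body": "hello"}, None, fake_s3)
    assert result["statusCode"] == 200
    fake_s3.put_object.assert_called_once()
    print("handler logic verified locally")
```

Passing the client in as a parameter (rather than constructing it inside the handler) is a deliberate design choice here: it is what makes this substitution trivial.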
One way to mitigate the increased complexity is to explore a local setup before you dedicate any serious time to writing a serverless service. You can ship something to production without a local environment, but as soon as errors appear, debugging them in staging or production will be very painful. The more time you spend building solid local environments, the easier developing the applications that sit on top of them will be.
5. Logging and tracing
One huge disadvantage of services running on a serverless stack, which is also a problem with microservices in general, is handling logging and tracing when errors occur. The more services you have interacting with each other, the harder it is to pinpoint the source of errors when they occur. One very important prerequisite is extensive logging in all deployed code, making sure that all requests to every service are logged in a similar fashion. If your logging is set up correctly you can create alarms and notifications for unexpected errors so you can react swiftly.
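One common way to keep logs "similar in fashion" across services is structured logging: every service emits JSON with the same field names, so aggregation and alerting can key off consistent fields. A minimal sketch using only the standard library (the service name and fields are illustrative, not prescriptive):

```python
# Sketch of structured JSON logging: every log line carries the same
# machine-readable fields (level, service, request_id, message), so
# alarms can be built on consistent keys across services.

import json
import logging

class JsonFormatter(logging.Formatter):
    def format(self, record):
        payload = {
            "level": record.levelname,
            "service": getattr(record, "service", "unknown"),
            "request_id": getattr(record, "request_id", None),
            "message": record.getMessage(),
        }
        return json.dumps(payload)

logger = logging.getLogger("example-service")  # hypothetical name
stream = logging.StreamHandler()
stream.setFormatter(JsonFormatter())
logger.addHandler(stream)
logger.setLevel(logging.INFO)

if __name__ == "__main__":
    # The `extra` dict attaches the shared fields to this record.
    logger.info(
        "order created",
        extra={"service": "orders", "request_id": "abc-123"},
    )
```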
Another important idea to explore is inter-service tracing. Most cloud service providers have a service that can trace calls as they pass between your serverless services: in AWS this is X-Ray, while Google has Stackdriver Trace. Having trace data available is another great tool in your arsenal for more accurately determining the source of errors. If service B is experiencing errors but the cause is an issue in service A, then both services will need to be fixed to address the problem. Without tracing, the issue in service A may remain, and you will end up fixing the resulting consequential errors while leaving the source of the problem in place to wreak havoc in the future.
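The core mechanic behind these tracing services can be sketched in a few lines: a trace id is minted at the edge and forwarded on every outbound call, so logs from service A and service B can be joined on the same id. The header name and service functions below are hypothetical, not any provider's actual API:

```python
# Sketch of trace-id propagation between two hypothetical services:
# each service reuses the incoming trace id (or mints one at the edge)
# and forwards it downstream, so all log entries share one id.

import uuid

TRACE_HEADER = "X-Trace-Id"  # assumed header name for illustration

def ensure_trace_id(headers: dict) -> str:
    """Reuse the caller's trace id, or mint one if we are the edge."""
    return headers.get(TRACE_HEADER) or str(uuid.uuid4())

def service_b(headers: dict) -> str:
    trace_id = ensure_trace_id(headers)
    # ... do work, attaching trace_id to every log entry ...
    return trace_id

def service_a(headers: dict) -> str:
    trace_id = ensure_trace_id(headers)
    # Forward the same id so downstream logs can be correlated.
    return service_b({TRACE_HEADER: trace_id})

if __name__ == "__main__":
    edge_id = service_a({})
    print(f"both services logged under trace id {edge_id}")
```

Managed tracing services add timing, sampling and visualization on top of this, but the propagation step is what makes cross-service correlation possible at all.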
6. Loss of control
The last major consideration in moving to a serverless stack is the loss of control over your systems. When relying on a third-party service for more of your application logic, you are at the provider's whim for product features, roadmaps and uptime. The services you depend on can also be deprecated outright, leaving you to fill the hole on a timeline that may conflict with your own product roadmap. Giving up control also means that when the provider's service goes down you may be powerless to do anything about it. Unless you pay for a high level of support, you will not have an actual person to turn to when a service goes down, which means all you can do is wait.
At the end of the day there is no one-size-fits-all solution in tech, so you have to weigh the benefits of serverless applications against the risks outlined in this article. If you have any reservations you can start small: pluck out small pieces of your existing application and move them over to serverless services one at a time. Just make sure you are actually fulfilling a need and not just keeping up with trends. The one constant in tech is that trends continuously change, so chasing them without a purpose is a recipe for disaster.