The easiest way to sum up serverless is that it’s an approach where you don’t have to worry about the servers running behind your application or architecture; as far as you’re concerned there are no servers, hence “serverless”. This gives you the benefit of being able to focus on functionality for the end user rather than on the complexities of infrastructure decisions and build considerations.
Firstly, serverless isn’t an option for every situation; however, for brand-new projects without any existing dependencies, serverless is definitely worth reviewing. It also brings a whole new element to the table, where building a fully functional architecture is possible without having to worry about scalability, availability, cost or performance.
Personally, I’ve found the best use cases for serverless are projects where the load is either unpredictable or low, because serverless essentially scales cost to load. I have found this to be one of its key advantages: this is where the cost model of serverless gives the most bang for the buck. You can build an entire serverless infrastructure, backed by a fully scalable, single-digit-millisecond-latency database, with storage, massive data processing, caching, a securely protected object API layer and compute power, for cents per month. In fact, we’ll do just that now and walk through what’s possible with serverless; as I currently work within AWS architecture, I’ll use that as our example.
The architecture below costs $0.07 per month: a fully functional serverless architecture I have built and designed for both massive scale and minimal cost. As a comparison, a million calls per month would equate to around $50. This is only an example, and as you build a solution with various requirements and steps there can be additional charges depending on your needs, but it does show that it is now very possible to build solutions with no overhead at low volume, which is fantastic for prototyping and testing the market.
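To make the cost-to-load relationship concrete, here is a rough back-of-the-envelope model in Python. The unit prices are illustrative assumptions for a request-driven stack (API Gateway, Lambda, DynamoDB), not quoted AWS rates, so check the current pricing pages before relying on the numbers.

```python
# Rough per-request cost model for a pay-per-use stack.
# These unit prices are illustrative assumptions, not quoted AWS rates.
ASSUMED_PRICES = {
    "api_gateway_per_million": 3.50,      # assumed REST API request price
    "lambda_per_million": 0.20,           # assumed Lambda invocation price
    "lambda_gb_second": 0.0000167,        # assumed Lambda duration price
    "dynamodb_per_million_writes": 1.25,  # assumed on-demand write price
}

def monthly_cost(requests: int, avg_duration_s: float = 0.1,
                 memory_gb: float = 0.128) -> float:
    """Estimate monthly cost in USD for a given request volume."""
    millions = requests / 1_000_000
    per_request_fees = millions * (
        ASSUMED_PRICES["api_gateway_per_million"]
        + ASSUMED_PRICES["lambda_per_million"]
        + ASSUMED_PRICES["dynamodb_per_million_writes"]
    )
    compute = requests * avg_duration_s * memory_gb * ASSUMED_PRICES["lambda_gb_second"]
    return round(per_request_fees + compute, 2)

print(monthly_cost(10_000))     # low-volume prototype: pennies
print(monthly_cost(1_000_000))  # cost scales with traffic
```

The key property is that cost approaches zero as traffic does, which is exactly the behaviour described above.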
There are plenty more AWS serverless technologies available, but the list below outlines the services that were used to build the fully functional, scalable architecture shown above:
- DynamoDB (NoSQL) for storing key-value data with single-digit millisecond latency, fed in from an S3-hosted front end
- 3 S3 Storage Buckets (Web Hosting, Processing Files and Private Download Content)
- 5 Lambda functions for handling communication between API Gateway and the backend: compute against DynamoDB, Athena processing, and pre-signed URL secure access to content
- API Gateway for handling RESTful interactions
- Athena as a SQL query engine on top of data lake data, via a Hive layer, fully capable of processing big-data-scale content
- Glue for building a data catalog on top of the data processed by Athena, using crawler-based schema discovery
- CodeCommit for code repository using Git
- CodePipeline for automatically deploying web-hosted content back to S3 when code is committed, as part of a CI/CD process
- CloudFront as an HTTPS hosting layer on top of S3, including a caching layer
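As a sketch of the pre-signed URL pattern mentioned in the Lambda bullet above: a function can hand out short-lived links to objects in the private download bucket. The bucket name, query parameter and expiry here are hypothetical; in production the injected client would be a real boto3 S3 client.

```python
import json

def make_presign_handler(s3_client, bucket="private-download-content"):
    """Build a Lambda-style handler around an injected S3 client.

    In production s3_client would be boto3.client("s3"); injecting it
    keeps the handler testable without AWS credentials. The bucket name
    is a hypothetical placeholder.
    """
    def handler(event, context=None):
        # API Gateway proxy integration passes query parameters here.
        key = event["queryStringParameters"]["key"]
        url = s3_client.generate_presigned_url(
            "get_object",
            Params={"Bucket": bucket, "Key": key},
            ExpiresIn=300,  # link stays valid for five minutes
        )
        return {"statusCode": 200, "body": json.dumps({"url": url})}
    return handler
```

API Gateway invokes the handler and the caller then downloads directly from S3 with the signed link, so the object itself never passes through Lambda.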
Outside of the demonstrated architecture there is also the ability to log each interaction with this service. This allows for monitoring user activity and taking any required action, for example imposing limits at the API layer per user, or audit logging for debugging or data governance, along with many additional services or flows I could implement on top of this.
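Per-user limits at the API layer are normally a managed feature (API Gateway usage plans with API keys), but as a hedged illustration of the idea, here is a minimal sliding-window limiter; the class and limits are hypothetical, not an AWS API.

```python
import time
from collections import defaultdict
from typing import Optional

class PerUserRateLimiter:
    """Minimal sliding-window limiter: allow at most `limit` requests per
    `window_s` seconds for each user. Illustrative only; API Gateway usage
    plans provide this as a managed feature."""

    def __init__(self, limit: int, window_s: float):
        self.limit = limit
        self.window_s = window_s
        self._hits = defaultdict(list)  # user -> recent request timestamps

    def allow(self, user: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop timestamps that have fallen outside the window.
        hits = [t for t in self._hits[user] if now - t < self.window_s]
        allowed = len(hits) < self.limit
        if allowed:
            hits.append(now)
        self._hits[user] = hits
        return allowed
```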
An optimized setup has a very efficient cost profile, which means the cost will scale with your user load: if you have no users it will cost you essentially nothing, while if you have millions of users you pay per use. This makes serverless a perfect fit for a SaaS product model.
Focus on Features and Functionality
Because you don’t have to worry about scaling services or handling hardware failures, you can put all of your time and effort into the actual features and functionality that will most benefit the users of your product.
By design, serverless allows a solution to scale automatically: it doesn’t matter whether you have 0, 1 or a million users of your service, it will handle the scale and load without any maintenance, or even thought, from the architect or developer.
Serverless services are built to be highly available, meaning they are already designed to withstand data centre failures.
Speed and latency are considered by design at scale, meaning you should get the same performance per use whether you have a few users or millions. There is little or no degradation of service for your end users.
Event Driven architecture
This allows for loose coupling of services and events, meaning the architecture can tolerate an outage of an individual service; generally speaking, though, serverless is by design highly available.
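As a small sketch of that decoupling, here is what a Lambda consumer of S3 object-created notifications can look like. The event shape follows the standard S3 notification format, while the processing step is a hypothetical placeholder.

```python
def handle_s3_event(event, context=None):
    """Lambda-style consumer for S3 notification events.

    The producer (S3) knows nothing about this consumer; other consumers
    can subscribe to the same events independently, which is the loose
    coupling described above.
    """
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Placeholder for real work, e.g. kicking off an Athena query.
        processed.append("s3://{}/{}".format(bucket, key))
    return processed
```

Because each record is handled independently, a failed object can be retried without replaying the rest of the batch.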
Reduced control at a server level
Not having to maintain the underlying hardware has huge advantages, but it also means you don’t get access to it; low-level access to the OS or services is not possible.
Building an architecture and workflow on a cloud provider’s services often means you are locked in to that vendor’s solution; moving a specific serverless architecture to another cloud provider can come with several development redesign considerations, should that ever be a requirement.
Cold starts can occur with certain services
Because you are not provisioning the hardware yourself, you are at the mercy of the service being provided to you. As it is designed to scale automatically (which it will do), there can be a ramp-up time while capacity is made available, which can introduce some slight latency initially.
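One common mitigation, sketched below, is to do expensive setup at module scope so it runs once per container and is reused by warm invocations; the config values here are hypothetical.

```python
import time

# Module-scope work runs once per container (the "cold start") and is then
# reused by every warm invocation, so expensive setup such as SDK clients
# or config loading belongs here, not inside the handler.
_CONTAINER_STARTED = time.monotonic()
_CONFIG = {"table": "example-table"}  # hypothetical one-off config load

def handler(event, context=None):
    # Per-request work only; warm invocations skip the setup above entirely.
    return {
        "table": _CONFIG["table"],
        "container_age_s": time.monotonic() - _CONTAINER_STARTED,
    }
```

The first invocation still pays the setup cost, but every subsequent request on the same container avoids it.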
With serverless builds there is still a need to understand the connections between services, their constraints and how to implement it all together; this can mean a learning curve to set everything up.
Rob Larter specializes in data analytics and big data solutions. He has worked more heavily with data over the last 6 years, drawing on 15+ years of experience in web application development covering all areas from coding, planning, team management and innovation to data warehousing, ETL processing, and data analytics and visualization.