With the rise of serverless technologies, services such as AWS Lambda have become increasingly popular. This article introduces the features and benefits of AWS Lambda and explains why it may not yet be ready to replace conventional EC2 instances.
Advantages of AWS Lambda:
AWS Lambda offers many advantages over conventional servers. We at Chai Point decided to go with AWS Lambda because of the following three major advantages:
1. Cost Saving
One of the major advantages of Lambda is that you are charged only for the time your function actually runs. A function invoked once every 10 minutes that takes 2 seconds per run is billed for just 288 seconds of compute a day (144 invocations × 2 seconds). At the lowest memory setting that works out to a couple of cents a month, and it falls entirely within Lambda's perpetual free tier of 1 million requests and 400,000 GB-seconds per month.
This fine-grained billing means you never pay for idle compute, in contrast to the hourly billing of EC2. While most PaaS offerings are designed to run 24/7, Lambda is completely event-driven: it runs only when invoked. This is perfect for application services with quiet periods followed by peaks in traffic.
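The arithmetic above can be sketched in a few lines. The prices below are the published on-demand rates at the time of writing, and the 128 MB memory allocation is an assumption for illustration:

```python
# Back-of-the-envelope Lambda cost for a function invoked every 10 minutes.
# Assumptions: 128 MB memory, on-demand rates at the time of writing
# ($0.0000166667 per GB-second, $0.20 per million requests).

PRICE_PER_GB_SECOND = 0.0000166667
PRICE_PER_MILLION_REQUESTS = 0.20

invocations_per_day = 144          # once every 10 minutes
duration_seconds = 2
memory_gb = 128 / 1024             # 128 MB expressed in GB

billed_seconds_per_day = invocations_per_day * duration_seconds   # 288
gb_seconds_per_month = billed_seconds_per_day * 30 * memory_gb    # 1080
requests_per_month = invocations_per_day * 30

compute_cost = gb_seconds_per_month * PRICE_PER_GB_SECOND
request_cost = (requests_per_month / 1_000_000) * PRICE_PER_MILLION_REQUESTS
monthly_cost = compute_cost + request_cost

print(f"{billed_seconds_per_day} billed seconds/day, "
      f"${monthly_cost:.4f}/month before the free tier")
```

Note that the free tier alone (400,000 GB-seconds a month) dwarfs this workload, so the effective bill is zero.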
2. Ease of use: Save on Server management
With AWS Lambda you don't need to worry about setting up an EC2 server or managing its security, availability, scalability, and maintenance. You only need to focus on your application code and business logic, leaving server provisioning, maintenance, and scaling to AWS at a reasonable price.
3. Event-Driven Function Invocation
AWS Lambda integrates seamlessly with almost all other AWS services, such as SNS, S3, CloudFront, and DynamoDB. This opens up event-driven function invocation, replacing the traditional approach of polling repeatedly to check for changes.
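As a sketch of what event-driven invocation looks like, here is a minimal Python handler that reacts to an S3 object-created notification instead of polling the bucket. The event shape is the standard S3 notification format; the bucket and key names are made up for illustration:

```python
# Minimal Lambda handler for an S3 ObjectCreated notification.
# Lambda calls this automatically whenever a matching object lands in the
# bucket -- no polling loop required.

def handler(event, context):
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Real work (resizing an image, updating a DB row, ...) goes here.
        processed.append(f"s3://{bucket}/{key}")
    return {"processed": processed}

# Local smoke test with a trimmed-down S3 event payload:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "shark-uploads"},
                "object": {"key": "orders/2019-06-01.csv"}}}
    ]
}
print(handler(sample_event, None))
```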
Now, before diving into the limitations of AWS Lambda, let's take a look at Shark, Chai Point's tech platform, which we set out to run on AWS Lambda.
The Tech Side of Chai Point: Shark
Chai Point, India's largest organized chai retailer, runs over 150 stores and more than 1,000 boxC machines (IoT-enabled chai and coffee vending machines designed for corporate offices), serving approximately 250k cups of chai per day across all channels.
Chai Point manages its store and vending-machine business with Shark, its in-house platform. Shark is an AWS-hosted cloud solution for an omnichannel business like Chai Point. It is built on a microservice architecture and supports Chai Point in areas such as:
- Store Management
- Supply Chain Management
- Order Management
- Third-Party Integrations
- BoxC Infrastructure
- Web, Android, and iOS Channels
- Staff Management
- Command Centre
Why we explored AWS Lambda for Shark
Most of Chai Point's stores and boxC machines run between 7 AM and 9 PM, leaving aside a few 24-hour stores and corporate offices. This makes Chai Point a perfect use case for AWS Lambda, since during non-office hours there are far fewer requests hitting the servers. Moving the Shark platform to AWS Lambda promised to save a ton of money and the effort of managing an EC2 server for every microservice that supports the platform. And so we started building our microservices on AWS Lambda.
Limitations of AWS Lambda
Cold Start
Of all the limitations of AWS Lambda I am going to cover, the cold start was the most critical one, and the deciding factor in moving the Shark infrastructure back to EC2.
What is a cold start?
The extra latency you experience when a function is invoked and AWS has to spin up a new container to run it.
A cold start only happens when there is no idle container already waiting to run your code. This is invisible to the user, and AWS has full control over when containers are killed.
What are the effects of a cold start?
- Frustrated users because of slow responses
- Paying more money for speed (sometimes)
- Timeouts in the calling function if not thought through, causing a chain reaction
When should you care?
- If you are using a statically typed language like Java or C#
- If you have a customer-facing application
- If your request volume is low or sparse
- Whenever you deploy a new version (all existing containers are destroyed)
What are the factors which increase the cold-start time?
- The language choice
- Anything that requires a classpath scan (Java)
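One factor you can control is how much work happens on every invocation versus once per container. A common pattern, sketched below with an illustrative `build_heavy_client` stand-in, is to build expensive resources at module load time so that only the first invocation in a container (the cold start) pays for them:

```python
import time

INIT_COUNT = 0

def build_heavy_client():
    """Stand-in for expensive startup work: SDK clients, DB connection
    pools, or (in Java) the classpath scanning done by DI frameworks."""
    global INIT_COUNT
    INIT_COUNT += 1
    time.sleep(0.1)  # simulate slow initialization
    return {"ready": True}

# Module-level code runs once per container, during the cold start...
heavy_client = build_heavy_client()

def handler(event, context):
    # ...so warm invocations skip initialization and reuse heavy_client.
    return {"client_ready": heavy_client["ready"], "inits": INIT_COUNT}

print(handler({}, None))
print(handler({}, None))  # second call: still only one initialization
```

This does not eliminate the cold start, but it keeps its cost from being paid again on every warm request.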
What are the solutions to this problem?
First, accept that you cannot guarantee you will never experience a cold start. The ultimate solution must come from the cloud provider; we can only try to mitigate it.
- Do nothing, if it is not a huge problem (recommended)
- Wait for AWS to improve it
- Increase the memory allocation (and pay more)
- Do some warm-up (we tried this, but it did not work reliably for us)
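Our warm-up attempt boiled down to a scheduled rule pinging each function every few minutes so a container stays alive. A sketch of the receiving side is below; the `"warmup"` marker key is our own convention for illustration, not an AWS field. The reason this is unreliable is that AWS may still recycle containers at will, and one warm container does not help when several concurrent requests arrive at once:

```python
def handler(event, context):
    # Scheduled warm-up pings carry a marker key ("warmup" is a made-up
    # convention, not an AWS field) and return immediately, before any
    # real work or downstream calls happen.
    if event.get("warmup"):
        return {"warmed": True}

    # Normal request path.
    return {"statusCode": 200, "body": "real work done"}

print(handler({"warmup": True}, None))     # keep-alive ping
print(handler({"path": "/orders"}, None))  # genuine request
```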
Package Size Limits
AWS Lambda caps the deployment package (compressed .zip/.jar) at 50 MB and the code and dependencies you can zip into it at 250 MB uncompressed.
For languages like Java, keeping the package size under 50 MB is difficult. This forces you to split functionality across multiple Lambda functions just to stay under the limit, leaving you with more functions to maintain.
Logging and Debugging
Lambda functions write their logs to CloudWatch, which is currently the only tool available to troubleshoot and monitor your functions. Moreover, logs take 1–2 minutes to appear in CloudWatch, which makes immediate debugging in a test environment difficult.
There are other limitations which did not affect our use case, such as:
- Maximum memory allocation of 3 GB (3,008 MB)
- Maximum function timeout of 15 minutes
- Invocation payload limit of 6 MB for synchronous calls and 256 KB for asynchronous calls
- Total size of all deployment packages in a region limited to 75 GB
- /tmp storage limited to 512 MB
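The payload limits are easy to trip when services call each other directly. A small guard like the sketch below (the 6 MB figure is the synchronous limit quoted above; the function name is our own) can fail fast on the caller's side instead of waiting for Lambda to reject the request:

```python
import json

SYNC_PAYLOAD_LIMIT = 6 * 1024 * 1024  # 6 MB for synchronous invocations

def check_payload(payload: dict) -> bytes:
    """Serialize a payload and raise if it exceeds the synchronous limit."""
    body = json.dumps(payload).encode("utf-8")
    if len(body) > SYNC_PAYLOAD_LIMIT:
        raise ValueError(
            f"payload is {len(body)} bytes, over the {SYNC_PAYLOAD_LIMIT}-"
            "byte synchronous limit; pass a reference (e.g. an S3 key) instead"
        )
    return body

small = check_payload({"order_id": 42})
print(len(small), "bytes, OK to invoke synchronously")

try:
    check_payload({"blob": "x" * (7 * 1024 * 1024)})  # ~7 MB of data
except ValueError as err:
    print("rejected:", err)
```

For large inputs, the usual workaround is to upload the data to S3 and pass only its key in the invocation payload.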
These shortcomings of AWS Lambda made us take the tough call of moving our infrastructure back to EC2.
AWS Lambda is certainly a great technology in the serverless space and is well suited to deploying and managing functions that perform small, independent tasks. But when it comes to enterprise solutions with inter-service dependencies, I think it still has some way to go, especially for languages like Java.
Stay tuned for the next blog on the architecture we used to deploy all our microservices on EC2 without burning a hole in our pockets with hefty server costs.
© 2019 Chai Point