“Effortless” Serverless — A Guide to Using AWS Lambda in Production

Anjul Garg
Nov 8 · 7 min read

Amazon Web Services’ Lambda has become the go-to solution for serverless computing, and rightfully so.

Compared to its peers like Google Cloud Functions and Microsoft Azure Functions, AWS Lambda can run almost every popular language out there, namely Node.js, Python, Java, Ruby, C#, Go and PowerShell. Not to forget, it has seamless integrations with a plethora of AWS services.

Read the official documentation for more information.

If you merely read the brochure, going serverless seems like the obvious next step for your projects, since it offers easy deployments, hassle-free infrastructure and cost benefits. However, things can easily go south if you try to use it at scale in your production applications.

This article outlines some of the right use-cases, best practices and pitfalls to avoid, so you can get the most out of AWS Lambda.

Note: This article is targeted at developers who are starting out in their careers or have just begun working with cloud solutions. I am assuming that you have basic knowledge of AWS Lambda and have used it in some capacity.

Is AWS Lambda right for you?

Whenever you are considering Lambda for your next big project, it can be a great solution if your application is one or more of the following:

  • A web application.
  • A RESTful API.
  • A mobile or IoT application back-end.
  • A periodic or event-driven job that runs for a short duration. For example, image compression, data conversion and cleansing, change data capture, log analysis and data backups.

Lambda is not a good choice if your application is any of the following:

  • A WebSocket server.
  • A batch-processing job that demands very high CPU or long execution times.

Read the official documentation for case studies.

Understand the Pricing

According to the official Lambda documentation, AWS charges you based on the number of requests and the duration of your functions. Sounds simple, right? It isn’t…

Lambda pricing has two components: a per-request charge of $0.20 per one million requests, and a compute charge of $0.00001667 per GB-second.

Note: All prices are for the N. Virginia region and don’t take Free Tier usage into account.

If you allocate 128MB of memory to your function and execute it 100,000 times in a month, with each execution running for 1 second, your usage would be calculated as follows:

Compute Charges
Total Compute Seconds = 100,000 * 1 = 100,000 seconds
Total Compute GB-seconds = 100,000 * 128/1024 = 12,500 GB-seconds
Cost = 12,500 * $0.00001667 ≈ $0.21

Request Charges (number of times your Lambda is invoked)
Cost = 100,000 * $0.20 / 1,000,000 = $0.02 ($0.20 is charged for every 1 million requests)

Total Cost = $0.21 + $0.02 = $0.23
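The arithmetic above is easy to wrap in a small helper for estimating a bill before you deploy. A minimal sketch, with the N. Virginia rates hard-coded and the Free Tier ignored:

```python
# Estimate a monthly AWS Lambda bill (N. Virginia rates, Free Tier ignored).
GB_SECOND_RATE = 0.00001667       # USD per GB-second of compute
REQUEST_RATE = 0.20 / 1_000_000   # USD per request

def estimate_monthly_cost(memory_mb, invocations, avg_duration_s):
    # Compute charge: total execution time weighted by allocated memory.
    gb_seconds = invocations * avg_duration_s * (memory_mb / 1024)
    compute_cost = gb_seconds * GB_SECOND_RATE
    # Request charge: flat fee per invocation.
    request_cost = invocations * REQUEST_RATE
    return round(compute_cost + request_cost, 2)

# The worked example above: 128MB, 100,000 invocations of 1 second each.
print(estimate_monthly_cost(128, 100_000, 1.0))  # -> 0.23
```

Playing with the arguments makes it obvious how quickly duration and memory dominate the bill compared to the request charge.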

Writing Serverless code that Works Great

The underlying implementation of Lambda launches a container to run your code and destroys it once it is no longer needed. An invocation that has to wait for a fresh container to spin up is called a “cold start”.

Since spinning up containers is a costly operation, if your Lambda gets invoked often (usually at least once every 5 minutes), subsequent invocations can be served by the same container and skip that setup. This is called a “warm start”.

Keeping your Lambda “warm” provides the following optimization opportunities:

  • If you initialize SDK clients and database connections outside the function handler, subsequent invocations served by the same container will not re-initialize them, providing huge performance gains.
  • Any files downloaded to the /tmp/ directory (limited to 512MB) during an invocation will persist across subsequent invocations served by the same container.
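The connection-reuse point can be sketched as follows. Here `create_db_connection` is a hypothetical stand-in for whatever expensive initialization your function performs (a Boto3 client, a database connection pool, etc.):

```python
INIT_COUNT = 0

def create_db_connection():
    # Hypothetical stand-in for an expensive initialization,
    # e.g. an SDK client or a database connection pool.
    global INIT_COUNT
    INIT_COUNT += 1
    return {"connection": "ready"}

# Module scope runs once per container, NOT once per invocation.
db = create_db_connection()

def handler(event, context):
    # Warm invocations reuse the module-level `db` object.
    return {"statusCode": 200, "initializations": INIT_COUNT}

# Two "warm" invocations share the same connection:
print(handler({}, None))  # -> {'statusCode': 200, 'initializations': 1}
print(handler({}, None))  # -> {'statusCode': 200, 'initializations': 1}
```

If the code created the connection inside `handler` instead, every single invocation would pay the initialization cost.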

Best practices for structuring your code:

  • Write your application logic in a method outside your handler method and call it inside the handler. This makes it easier to unit test your application logic.
  • Don’t store credentials and configurations inside your code. Instead, use environment variables in Lambda to separate your code from configuration.
  • Avoid using heavy packages and libraries with a large memory footprint or disk usage.
  • Use layers to ship your code dependencies. Layers let you share common dependencies between multiple Lambdas, keep your deployment package small (significantly reducing deployment time), and make your deployed Lambda code editable from the AWS Lambda Console (very handy when you want to fix a production bug quickly).
  • Write unit tests and test your code locally before deploying it to Lambda. Input stubs for handler code are easy to generate, since AWS provides sample input data for all of its Lambda integrations. Moreover, tools like the Serverless Framework allow Lambda-like execution of your code locally.
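Putting the first two bullets together, a handler might look like the sketch below. `GREETING` is an invented environment variable used purely for illustration:

```python
import os

def build_greeting(name, greeting):
    # Pure application logic -- trivially unit-testable without Lambda.
    return f"{greeting}, {name}!"

def handler(event, context):
    # Configuration comes from the environment, not from the code.
    greeting = os.environ.get("GREETING", "Hello")
    return {"statusCode": 200, "body": build_greeting(event["name"], greeting)}
```

A unit test can exercise `build_greeting` directly, and a deployment can change the greeting by editing the `GREETING` environment variable in the Lambda console without touching the code.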

Hacks for ensuring that your code runs on Lambda as expected:

  • Develop locally using the language and version specified in your lambda configuration. For example, if your lambda is configured to run on Python-3.7, write and test your code in Python-3.7.
  • Always use a virtual environment (venv or virtualenv) if you are using Python, and utilize Docker for all other languages. Isolating your environment goes a long way towards making your code’s behaviour predictable and stable.
  • If you are using a dynamic language like Node or Python, a quick and dirty way of catching most import and syntax errors locally is to execute the file containing your handler code.
    For example, if you are using Python and index.py contains your handler function, simply run python index.py before you deploy your code.
  • If you are creating and uploading layers for your functions, ensure that the contained libraries were built for the same language version and platform your Lambda runs on. Ignoring this will cause errors with compiled libraries, for example, psycopg2 and confluent-kafka in Python.
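For the python index.py trick to catch anything beyond import errors, it helps to invoke the handler with a sample event under the `__main__` guard. A minimal index.py sketch:

```python
import json

def handler(event, context):
    # Trivial handler used to demonstrate the local smoke test.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"message": f"hello {name}"})}

if __name__ == "__main__":
    # Local smoke test: `python index.py` surfaces import and syntax
    # errors, plus obvious runtime bugs, before you deploy.
    print(handler({"name": "test"}, None))
```

On Lambda the `__main__` block never runs, since the runtime imports the module and calls `handler` directly, so the guard costs nothing in production.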

Essential security practices:

  • Create One IAM Role per Lambda function. This Role should contain all the IAM Policies required to run your code and interact with other AWS services. Avoid sharing Roles between Lambdas.
  • Don’t store AWS Credentials inside your code or environment variables. If a proper IAM Role is attached to your Lambda, you will be able to use AWS SDK like Boto3 without explicitly providing credentials.
  • If you must have your credentials in the code, you can utilize KMS for storing encrypted values and decrypting them on-the-fly.

Logging is your Friend, And your Biggest Enemy

All of us have had our fair share of debugging Lambda functions. Let’s admit it, it’s not the easiest job in the world. Most developers end up logging the input data, the output data and everything in between.

Lambda writes all your logs to CloudWatch Logs. This works great for debugging your code in a development environment.

Doing the same in production at high scale can burn a hole in your pocket. Your CloudWatch costs will skyrocket and may end up exceeding your actual Lambda bill.

If you really must log everything, consider cheaper alternatives like Elasticsearch, or store the logs on S3.
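A cheap middle ground is to make verbosity configurable, so production runs at WARNING while development runs at DEBUG. A sketch using the standard logging module; `LOG_LEVEL` is an invented environment variable name:

```python
import logging
import os

# Lambda's Python runtime pre-installs a handler on the root logger,
# so setting the level is all that's needed.
logger = logging.getLogger()
logger.setLevel(os.environ.get("LOG_LEVEL", "WARNING"))

def handler(event, context):
    # Only emitted when the function's LOG_LEVEL variable is set to DEBUG,
    # so full-payload logging never reaches CloudWatch in production.
    logger.debug("full event: %s", event)
    logger.warning("something actually worth paying CloudWatch for")
    return {"statusCode": 200}
```

Flipping one environment variable in the console then turns the firehose on for a debugging session and off again afterwards, with no redeploy.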

Don’t Stress, Just Set Up CloudWatch Alerts

As your organization grows, the number of Lambda functions in use can get out of hand and become unmanageable. Even for a team of seasoned developers using the latest tools, monitoring all your Lambdas in real time is daunting.

A simple way to stay stress-free is to set up alerts on three key parameters for each of your Lambdas: throttles, errors and timeouts.

Setting up CloudWatch alerts is easy and cheap. You will no longer need to stare at the Lambda console to keep an eye on your functions.
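Creating such an alarm is a single Boto3 call. A sketch for the Errors metric, with the client passed in and the function name and SNS topic ARN as placeholders; swapping MetricName to Throttles gives the second alarm:

```python
def create_error_alarm(cloudwatch, function_name, sns_topic_arn):
    # Fire whenever the function reports any error in a 5-minute window,
    # and notify the given SNS topic.
    cloudwatch.put_metric_alarm(
        AlarmName=f"{function_name}-errors",
        Namespace="AWS/Lambda",
        MetricName="Errors",
        Dimensions=[{"Name": "FunctionName", "Value": function_name}],
        Statistic="Sum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=1,
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        AlarmActions=[sns_topic_arn],
    )

# With a real client:
#   import boto3
#   create_error_alarm(boto3.client("cloudwatch"), "my-function",
#                      "arn:aws:sns:us-east-1:123456789012:ops-alerts")
```

Wrapping this in a loop over your functions (or baking it into your deployment tooling) keeps alarm coverage from drifting as the fleet grows.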

Use Versioning for A-B Testing and Quick Roll-back

You can use versions to manage the deployment of your AWS Lambda functions. For example, you can publish a new version of a function for beta testing without affecting users of the stable production version.

The system creates a new version of your Lambda function each time that you publish the function. The new version is a copy of the unpublished version of the function. The function version includes the following information:

  • The function code and all associated dependencies.
  • The Lambda runtime that executes the function.
  • All of the function settings, including the environment variables.
  • A unique Amazon Resource Name (ARN) to identify this version of the function.

Read the Official Documentation for more details.
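The publish-and-point-an-alias flow is two Boto3 calls. A sketch, with the client injected so it can be faked in tests, and the alias name "live" as an arbitrary placeholder:

```python
def release(lambda_client, function_name, alias="live"):
    # Freeze the current unpublished ($LATEST) code as a new immutable version.
    version = lambda_client.publish_version(FunctionName=function_name)["Version"]
    # Point the alias at the new version. Rolling back is just another
    # update_alias call with the previous version number.
    lambda_client.update_alias(
        FunctionName=function_name, Name=alias, FunctionVersion=version
    )
    return version

# With a real client:
#   import boto3
#   release(boto3.client("lambda"), "my-function")
```

Because callers invoke the alias ARN rather than a version ARN, a roll-back is instantaneous and requires no redeployment.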

Squeeze every bit of Performance

AWS Documentation states that Lambda allocates CPU power linearly in proportion to the amount of memory configured. At 1,792 MB, a function has the equivalent of 1 full vCPU (one vCPU-second of credits per second).

This feature of Lambda is not very well documented and is still shrouded in mystery. Multiple independent studies and comments from Amazon spokespersons revealed that Lambdas with more than ~1.8GB of memory get access to more than one core, and multi-threaded or multi-process code is needed to take advantage of the additional CPU.

If your Lambda code is compute intensive, you will benefit from running it multi-threaded or multi-process and playing around with memory configurations to find the sweet spot. Your ideal configuration should minimize your costs.
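One caveat when going multi-process: multiprocessing.Pool and multiprocessing.Queue do not work on Lambda, because the execution environment has no /dev/shm shared memory; multiprocessing.Process with a Pipe does. A sketch of fanning CPU-bound work across cores under that constraint (`cpu_bound_work` is a placeholder for your real computation):

```python
from multiprocessing import Pipe, Process

def cpu_bound_work(n, conn):
    # Placeholder for real compute-heavy work: sum of squares below n.
    conn.send(sum(i * i for i in range(n)))
    conn.close()

def run_parallel(inputs):
    # Pipes work on Lambda; multiprocessing.Pool/Queue do not,
    # because the environment lacks /dev/shm.
    pipes, processes = [], []
    for n in inputs:
        parent_conn, child_conn = Pipe()
        proc = Process(target=cpu_bound_work, args=(n, child_conn))
        proc.start()
        pipes.append(parent_conn)
        processes.append(proc)
    results = [conn.recv() for conn in pipes]
    for proc in processes:
        proc.join()
    return results
```

With less than ~1.8GB of memory the processes just time-slice a single vCPU, so it is worth benchmarking your function at several memory settings before committing to this pattern.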

Here is a great article I found which explains how AWS allocates virtual processors to your Lambda functions based on memory configurations:
https://engineering.opsgenie.com/how-does-proportional-cpu-allocation-work-with-aws-lambda-41cd44da3cac

Bottom-line

Although AWS Lambda is an easy-to-use service and is being adopted at a staggering rate, using it efficiently for production environments at high scale requires some discipline and expertise.

Most of the information in this article is available freely on the internet and some of it reflects my opinions based on the challenges I faced while using Lambdas.

This article is not meant to be an exhaustive list of best practices for using AWS Lambda.

Remember, mistakes can be reduced but can never be eliminated.
