Scale Your Python Backend with Serverless Architecture

Jay Patel
Simform Engineering
7 min read · Sep 11, 2023

Explore how to easily scale your backend with serverless architecture to handle concurrent API calls.

Have you ever gotten the dreaded “server timeout” error when your backend couldn’t handle a sudden spike in traffic? Or found yourself stuck managing and scaling servers when you’d rather focus on coding new features?

If this sounds familiar, serverless architecture may be the solution you’re looking for. With serverless, you can effortlessly scale your backend on demand without worrying about infrastructure.

In this post, we’ll explore serverless architecture, how it offers more scalability than traditional servers, tools to achieve it, potential drawbacks like cold starts, and solutions.

By the end, you’ll understand the core benefits of serverless for backends and have the knowledge to adopt it in your own projects to handle increasing API traffic and scale faster than ever before.

Let’s get started!

What is serverless architecture?

Serverless architecture is a model for building and running applications in which the responsibility for managing and provisioning servers shifts to a cloud provider, freeing developers to focus on writing code. The provider dynamically provisions and scales the infrastructure required to execute code on demand. This architecture is enabled through Function-as-a-Service (FaaS) offerings from cloud providers.

Function as a Service

FaaS is a way of deploying functions to the cloud and invoking them in response to events, such as an HTTP request arriving at an API gateway or a message being pushed onto a queue or data stream. There are still actual servers behind the scenes, but as mentioned earlier, the headache of managing and provisioning them is handled by the cloud service provider. A few examples of FaaS offerings are AWS Lambda, Azure Functions, and Google Cloud Functions. One key advantage of FaaS is its pay-per-execution pricing model.
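To make this concrete, here is a minimal sketch of what such a function can look like in Python on AWS Lambda (the file name and event shape are illustrative; the exact payload depends on the event source):

# handler.py — a minimal sketch of a FaaS handler
import json

def lambda_handler(event, context):
    # 'event' carries the trigger's payload (e.g., an API Gateway request);
    # 'context' carries runtime metadata such as the remaining execution time.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"})
    }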

How does the traditional server-full architecture scale?

In the traditional server-full architecture, the application is deployed on a rented server or virtual machine, which can handle only a limited number of requests at the same time. To scale further, we provision more servers or virtual machines.

Limitations of the traditional approach and how serverless architecture overcomes them

The limitation of the above approach is that provisioning new servers on the fly takes considerable time when the backend experiences a sudden spike in traffic. Serverless architecture copes better because the cloud provider scales automatically: every time a new HTTP request is received, another function instance can be invoked concurrently, giving our infrastructure far more elasticity.

However, this flexibility can add complexity if the backend depends on many third-party packages or libraries, because FaaS platforms cap the size of the code you can upload to a function. To work within these limits, we can use open-source tools like the Serverless Framework or AWS SAM (Serverless Application Model), which bundle your code together with its dependencies and push the archive to the FaaS platform. We can even build a Docker image containing all the dependencies, push it to a container registry, and configure the function to run from that image.
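For the container route, a minimal sketch of such a Dockerfile, assuming AWS Lambda's public Python base image (the tag and file names are illustrative):

# Dockerfile — a sketch of a container-image Lambda deployment
FROM public.ecr.aws/lambda/python:3.8

# Install third-party dependencies into the image
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the function code into the directory the base image expects
COPY app.py ${LAMBDA_TASK_ROOT}

# Tell the runtime which handler to invoke
CMD ["app.lambda_handler"]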

Challenges with serverless architecture

Cold start is one of the most common problems faced while developing serverless applications. It occurs because the FaaS platform needs to initialize the environment for the function, which includes setting up the container, loading the runtime, and initializing any dependencies. This initialization can take anywhere from a few milliseconds to several seconds, depending on the complexity of the function and the resources it requires. The problem is more visible in languages like Java, where the first request after a long window without invocations can take up to a minute to execute.

How to overcome the challenges of serverless

To mitigate the cold start problem, serverless platforms use techniques such as pre-warming containers, caching frequently used resources, and optimizing the initialization process. Developers can also design their functions to minimize cold-start cost by keeping the function code small and avoiding heavy dependencies. AWS Lambda additionally offers a feature called provisioned concurrency, which lets you specify the number of execution environments you want pre-warmed, initialized, and ready to serve requests; note that provisioned concurrency costs extra. Also, AWS imposes a default limit of 1,000 concurrent Lambda executions per account per region, which can be raised by requesting a quota increase.
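Beyond platform features like provisioned concurrency, a common code-level pattern softens repeated initialization cost: create expensive resources once, at module load time, so warm invocations reuse them instead of rebuilding them on every request. A minimal sketch, with placeholder connection details:

# warm_start.py — initialize once per execution environment (illustrative)
import json
import psycopg2

# Runs only during a cold start; warm invocations skip straight to the handler.
# (Production code would also detect and replace dropped connections.)
conn = psycopg2.connect(
    host="your-database-host",
    database="your-database-name",
    user="your-database-username",
    password="your-database-password"
)

def lambda_handler(event, context):
    # Reuse the module-level connection across warm invocations
    with conn.cursor() as cursor:
        cursor.execute("SELECT 1")
        row = cursor.fetchone()
    return {"statusCode": 200, "body": json.dumps(row)}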

Example:

Make a file called app.py that has a lambda_handler, which establishes a database connection using the psycopg2 library, fetches all the orders, and responds with order data:

# app.py
import json
import psycopg2
from psycopg2.extras import DictCursor

def lambda_handler(event, context):
    # Connect to the PostgreSQL database (placeholder credentials)
    conn = psycopg2.connect(
        host="your-database-host",
        port="your-database-port",
        database="your-database-name",
        user="your-database-username",
        password="your-database-password"
    )

    # Create a cursor object with DictCursor so rows support key access
    cursor = conn.cursor(cursor_factory=DictCursor)

    # Execute the SQL query to fetch orders
    cursor.execute("SELECT * FROM orders")

    # Fetch all rows from the result set
    rows = cursor.fetchall()

    # Close the cursor and database connection
    cursor.close()
    conn.close()

    # Convert each DictRow to a plain dict so rows serialize as JSON objects
    return {
        "statusCode": 200,
        "body": json.dumps([dict(row) for row in rows], default=str)
    }
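Before deploying, you can sanity-check the handler with a quick, hypothetical driver script (it assumes the database is reachable from your machine):

# local_test.py — hypothetical local smoke test for the handler above
from app import lambda_handler

if __name__ == "__main__":
    # The handler ignores its arguments, so an empty event and no context suffice
    response = lambda_handler({}, None)
    print(response["statusCode"], response["body"])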

Then, to deploy this as an AWS Lambda function and map it to an Amazon API Gateway path, we will use AWS SAM.

AWS SAM (Serverless Application Model) is an open-source framework provided by Amazon Web Services (AWS) that simplifies the development and deployment of serverless applications on the AWS Cloud. It is an extension of AWS CloudFormation, which allows you to define serverless resources such as AWS Lambda functions, Amazon API Gateway endpoints, Amazon DynamoDB tables, and more using a simple and concise YAML or JSON syntax.

SAM provides a higher-level abstraction for serverless application development, reducing the complexity of configuring and managing resources. With SAM, developers can define their application’s architecture, event sources, and dependencies in a single template, making it easier to version control, deploy, and manage the infrastructure and code as a unit.

To define infrastructure in AWS SAM, we need to create a template.yaml or template.json file that mainly comprises:

  1. Resources: Defines the serverless resources required for the application, such as Lambda functions, API Gateway endpoints, and Amazon RDS instances.
  2. Events: Specifies event sources that trigger the Lambda functions. It can be services like API Gateway, S3, SQS, or other AWS services.
  3. Outputs: Exposes values from the resources, which can be used as references in other AWS resources or applications.
# template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'

Resources:
  HelloWorldFunction:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: app.lambda_handler
      Runtime: python3.8
      CodeUri: .
      Events:
        HelloWorldApi:
          Type: Api
          Properties:
            Path: /orders
            Method: get
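Incidentally, the provisioned concurrency feature discussed earlier can also be declared in this template. A sketch of the extra properties on the function resource (the alias name and count are illustrative):

# Extra properties on HelloWorldFunction to keep instances pre-warmed
HelloWorldFunction:
  Type: 'AWS::Serverless::Function'
  Properties:
    # ...same properties as above, plus:
    AutoPublishAlias: live                  # the config attaches to a published alias
    ProvisionedConcurrencyConfig:
      ProvisionedConcurrentExecutions: 5    # environments kept initialized and ready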

After creating the SAM templates, we need to execute some SAM commands to build and deploy the application:

  1. sam build: The sam build command builds your serverless application code and its dependencies. It automatically detects the requirements.txt in the project directory (which would list psycopg2 for the example above) and installs the pip packages it specifies, ensuring your application is ready for deployment.

To build your serverless application, navigate to the project directory and run the following command:

sam build -t <template-name>

This will analyze your code, resolve dependencies, and create a deployment-ready package. The output artifacts will be stored in a .aws-sam/build directory.

  2. sam deploy: The sam deploy command packages and deploys your serverless application to AWS. It takes the artifacts generated by sam build and uses the SAM template to create or update the AWS resources your application needs.

To deploy your serverless application, run the following:

sam deploy --stack-name <stack-name> --region <region>

Replace <stack-name> with a unique name for your CloudFormation stack, and <region> with the AWS region where you want to deploy your application.
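For a first deployment, the interactive variant below is often more convenient: it prompts for the stack name, region, and IAM permissions, and can save your answers to a samconfig.toml file for subsequent runs:

sam deploy --guided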


Advantages of serverless architecture

  • Scalability and reliability without server management: Scaling your backend is the cloud provider’s responsibility. The provider spawns new execution environments for your lightweight functions for almost every new request.
  • Priced per execution: You pay only for the time your code actually runs, so the bill stays low when traffic is low and grows only as traffic grows. There is no need to rent servers around the clock and pay for them even when no one is using them.
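As a rough illustration, take AWS Lambda’s published pay-per-use rates (about $0.20 per million requests plus roughly $0.0000167 per GB-second at the time of writing; check current pricing for your region): one million invocations of a 512 MB function running 200 ms each consume 1,000,000 × 0.2 s × 0.5 GB = 100,000 GB-seconds, or about $1.67 of compute plus $0.20 of request charges. That is under $2 for a million requests, and exactly $0 when nothing is invoked.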

Disadvantages of serverless architecture

  • Vendor Lock-in: Serverless computing relies heavily on public cloud providers such as AWS, Azure, or GCP. While this offers convenience and scalability, it also means your application becomes tightly coupled to a specific cloud ecosystem. If you ever need to run your applications on on-premises servers, serverless may not be the most suitable or flexible solution. This vendor lock-in can limit your options and increase the complexity of migrating to alternative platforms, so it’s crucial to consider your long-term strategy and potential exit plans when adopting serverless architecture.

Conclusion

Serverless architecture offers a compelling way to easily scale application backends without infrastructure headaches. By leveraging FaaS services like AWS Lambda, you can deploy event-driven functions that automatically scale based on traffic.

While there are some drawbacks like cold starts and vendor lock-in to consider, the benefits of automatic scaling, paying only for execution time, and focusing on code rather than ops are quite appealing.

By understanding the core concepts of serverless and FaaS, you now have a framework to adopt a serverless approach in your projects. Focus on writing lean functions, leverage orchestration tools like SAM, and let the cloud provider handle provisioning infrastructure dynamically. This can unlock new levels of productivity and scale for backend developers.

For more updates on the latest tools and technologies, follow the Simform Engineering blog.

Follow Us: Twitter | LinkedIn
