A CI/CD pipeline with GitHub Actions for containerized serverless Lambda deployment.

Dr. Tri Basuki Kurniawan
TheLorry Data, Tech & Product
7 min read · Aug 13, 2021

A short tutorial on automatically deploying a FastAPI project to an AWS Lambda function and setting up AWS API Gateway via GitHub Actions.

CI/CD pipeline in Github Action

A few advantages of using Continuous Integration and Continuous Deployment, often known as a CI/CD pipeline, include automatically running unit tests and automatically deploying our code to a server, letting us know early in deployment when something is wrong with our code.

Other advantages include letting developers focus on writing code and monitoring the system's behavior in production, while QA and product stakeholders can easily access the newest (or any) version of the system. Overall, we no longer have to worry about the product upgrade process. As a result, I highly advise all developers to incorporate CI/CD into their DevOps workflow.

GitHub, one of the most popular repository hosting services among programmers, offers a CI/CD feature called GitHub Actions.

GitHub Actions

Using this system, we can set up our CI/CD process easily. I will explain it in detail later.

AWS Lambda function

AWS Lambda is one of the most popular serverless computing services these days. Lambda allows you to trigger execution in response to AWS events, enabling serverless backend solutions. The Lambda function itself includes source code and runtime configuration.

Why do we need the AWS lambda function?

Here are some answers to that question: most major languages are supported out of the box, there is no infrastructure to manage, and it integrates with API Gateway and other AWS services.

Sample Tutorial

Now that we have gone over a few things about the CI/CD pipeline in GitHub Actions and the Lambda function in general, let's get started with our lesson.

First, please download our sample code from the Github repo here, or you can use your own Github repository and add a few files into it, as I will explain later.

sample code in Github repository

Now, please open the folder .github/workflows. You will find a file called pipeline.yml. Open it, and you will see code something like this.

This file is a configuration for CI/CD processing in Github Action.

name: CI/CD Pipeline

on:
  push:
    branches: [ main ]

jobs:
  continuous-integration:
    runs-on: ubuntu-latest
    steps:
      .
      .
      .
  continuous-deployment:
    runs-on: ubuntu-latest
    needs: [continuous-integration]
    if: github.ref == 'refs/heads/main'
    steps:
      .
      .
      .

Let’s examine it line by line. This workflow is called CI/CD Pipeline, and it runs when we push our code to the main branch of our repository (you can change main to your branch’s name).

There are two jobs in this workflow: the first is continuous-integration and the second is continuous-deployment, and each job has a few steps. The continuous-deployment job has a condition: it runs only if the continuous-integration job has already finished and run successfully. Both jobs run on the latest Ubuntu operating system.

continuous-integration

The code in the continuous-integration section prepares the environment for running our test code. This section contains several stages. First, we install the Python version that we will use to test our code. Next, we install our dependency modules into a virtual Python environment, and the final step is to execute our test code.

    steps:
      - uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: 3.7
          architecture: x64
      - name: Install Python Virtual ENV
        run: pip3 install virtualenv
      - name: Setup Virtual env
        uses: actions/cache@v2
        id: cache-venv
        with:
          path: venv
          key: ${{ runner.os }}-venv-${{ hashFiles('**/requirements*.txt') }}
          restore-keys: |
            ${{ runner.os }}-venv-
      - name: Activate and Install Dependencies into Virtual env
        run: python -m venv venv && source venv/bin/activate && pip3 install -r requirements.txt
        if: steps.cache-venv.outputs.cache-hit != 'true'
      - name: Check File List
        run: ls
      # Install all the app dependencies
      - name: Install dependencies
        run: pip3 install -r requirements.txt
      # Build the app and run tests
      - name: Build and Run Test
        run: pytest

The first step in the continuous-integration section is the actions/checkout@v2 statement, which retrieves the most recent version of our code from GitHub. It also takes care of authentication keys and security concerns with GitHub. Next, we set up Python on that machine. To use a virtual environment, we first install the virtualenv module with the pip3 install virtualenv command, and then we call this line to create a new virtual environment.

python -m venv venv && source venv/bin/activate && pip3 install -r requirements.txt

Then, after everything is in place, we execute the pytest command to run all of the test scripts in our code.
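For example, a minimal pytest test file might look like this (a hypothetical sketch; the sample repo's actual tests will differ):

```python
# test_sample.py -- a minimal, hypothetical pytest example.
# pytest discovers files named test_*.py and runs every test_* function.

def make_greeting(name: str) -> str:
    """Toy stand-in for real application logic."""
    return f"Hello, {name}!"

def test_make_greeting():
    # A failing assert here fails the CI job.
    assert make_greeting("world") == "Hello, world!"
```

Because the continuous-deployment job declares `needs: [continuous-integration]`, a failing test here stops the push from ever reaching Lambda.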

continuous-deployment

We proceed to the continuous-deployment phase once the continuous-integration phase has completed successfully.

    steps:
      - name: Install AWS CLI
        uses: unfor19/install-aws-cli-action@v1
        with:
          version: 1
        env:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_DEFAULT_REGION }}
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_DEFAULT_REGION }}
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Check out code
        uses: actions/checkout@v2
      - name: Create docker images
        run: docker build -t 240833302201.dkr.ecr.ap-southeast-1.amazonaws.com/test_lambda:lambda .
      - name: Upload docker into ECR
        run: docker push 240833302201.dkr.ecr.ap-southeast-1.amazonaws.com/test_lambda:lambda
      - name: Install serverless framework
        run: curl -o- -L https://slss.io/install | bash
      - name: Deploy to Lambda Function and API Gateway
        run: npx serverless deploy --stage dev

In this section, we must first install the AWS CLI, a command-line interface provided by Amazon Web Services for accessing its services from the command line. To gain access to AWS functions, we must supply our AWS account credentials. Next, we set up the AWS credentials by creating a configuration containing an aws-access-key-id and an aws-secret-access-key.
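Conceptually, the configuration produced by this step is equivalent to a classic ~/.aws/credentials file (sketch with placeholder values, not real keys):

```ini
[default]
aws_access_key_id     = <AWS_ACCESS_KEY_ID>
aws_secret_access_key = <AWS_SECRET_ACCESS_KEY>
region                = ap-southeast-1
```

In the workflow, these values come from GitHub repository secrets rather than a file committed to the repo.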

After configuring credentials on that machine, we check out our code into the environment using the actions/checkout@v2 step. The next step is to log in to AWS ECR using the credentials from before. Then we build our Docker image, which we then push to ECR.
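For the docker build step to work, the repository must contain a Dockerfile. A minimal sketch for a Lambda container image might look like this (the app/ layout and the app.main.handler module path are assumptions for illustration, not the sample repo's actual layout):

```dockerfile
# Hypothetical Dockerfile sketch for a FastAPI app on Lambda.
FROM public.ecr.aws/lambda/python:3.7

# Install dependencies into the Lambda task root.
COPY requirements.txt .
RUN pip3 install -r requirements.txt

# Copy the application code.
COPY app/ ./app/

# Entry point: an assumed module path to a Lambda-compatible handler.
CMD ["app.main.handler"]
```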

The last steps involve installing the Serverless Framework and running a command to deploy our serverless configuration to our AWS Lambda function.

Serverless framework

We finish the continuous-deployment process by running npx serverless deploy --stage dev. This runs the configuration in a file named serverless.yml and deploys it to the dev stage of the AWS Lambda function.

Here is the content of the serverless.yml file.

service: test-lambda

frameworkVersion: "2"

provider:
  name: aws
  stage: ${opt:stage}
  region: ap-southeast-1
  lambdaHashingVersion: 20201221
  memorySize: 1024
  timeout: 30
  apiName: ${self:service}-${opt:stage}
  apiGateway:
    description: REST API ${self:service}
    metrics: true

functions: ${file(functions.yml):functions}

In this script, we define the name of the service, which is test-lambda. Then we use frameworkVersion: "2", and we define the provider and functions parts. In the provider part, we set the name, stage, region, lambdaHashingVersion, memorySize, timeout, apiName and apiGateway configuration.

The functions part is defined in another file, which we call functions.yml. In this file, we define the apiGateway configuration in detail.

functions:
  test_lambda:
    image: 240833302201.dkr.ecr.ap-southeast-1.amazonaws.com/test_lambda:lambda
    events:
      - http:
          path: /
          method: get
          cors: true
      - http:
          path: hello/
          method: any
          cors: true
      - http:
          path: docs/
          method: get
          cors: true
      - http:
          path: redoc/
          method: get
          cors: true

We call this function test_lambda, and it defines image and events values. For image, we set the location of our Docker image in AWS ECR, and for events, we define in detail all of the paths through which API Gateway accesses our endpoints.

When we push our modified code to GitHub's main branch, GitHub Actions will execute our steps one by one and display the results in the Actions section, as seen in the image below.

Conclusions

This tutorial provides a short explanation of deploying a FastAPI project to an AWS Lambda function via CI/CD in GitHub Actions, using a Docker image as the deployment artifact. We explained how to create a CI/CD script file and walked through it in some detail. We then explained how to use the Serverless Framework and demonstrated it with a detailed script file.

Thank you very much.
