CDK Pipeline for C++ Lambda Deployment + API Gateway

Patrick O'Connor
Published in Geek Culture · 7 min read · Apr 23, 2021

This article was built referencing the official AWS CDK documentation, linked below, as well as AWS's published guidance on running C++ on Lambda.

Source: https://docs.aws.amazon.com/cdk/latest/guide/home.html

In this lab we will focus on two core ideas: one, the process of deploying C++ to Lambda as a custom runtime; and two, how we can build a CI/CD pipeline on top of that idea, entirely through the AWS CDK.

  • Part 1: C++ Lambda
  • Part 2: AWS CDK pipeline

Now if you’re anything like me back in 2019, early in my tech career, learning the cloud for the first time while still at university, then when the CDK was released in July of 2019 you didn’t pay attention. Don’t get me wrong, the hype was there, from articles to my very own manager strongly suggesting I learn it. However, in the midst of trying to learn the fundamentals of the cloud, I let the opportunity slip by, until a few weeks ago.

If there’s only one thing you take from this lab, mark these words: go learn the AWS CDK. The CDK is such a powerful tool; it builds upon CloudFormation, making it remarkably easy to deploy full pipeline infrastructure with only a few commands. It is quite literally infrastructure as code, and a tool I can see myself using for a long time to come.

C++ on Lambda

Since Lambda was the end goal I was pushing to, I chose the native AWS IDE service, Cloud9, as my primary development environment for deploying to Lambda manually. The reasons for this included:

  • AWS credentials are generated automatically, so your IDE can use the full suite of AWS resources natively.
  • Should you break the IDE, you can simply spin up a brand new one.
  • Consistency: Cloud9's default image comes with the same system configuration I would have used when running the lab, so following along should be reproducible.

Setup your Cloud9 IDE

Log in to your AWS account and navigate to the Cloud9 service.

From the service, click Create environment and give your IDE a name.

The only thing you’ll need to change on the configure settings page is the instance type: make sure you set it to t3.medium. This gives the IDE enough compute headroom, as a micro instance might freeze up during the C++ builds.

Then click Next Step > Create environment, and you should be redirected to your newly created IDE.
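As an aside, if you prefer the terminal over the console, the same environment can be created through the AWS CLI. The sketch below uses an example name, and depending on your CLI version you may also need to pass --image-id (for example amazonlinux-2-x86_64).

# Optional: create the Cloud9 environment from the CLI instead of the console.
aws cloud9 create-environment-ec2 \
--name cpp-lambda-ide \
--instance-type t3.medium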

Add disk space

The last thing we want to do is grow our Cloud9 instance's disk so it doesn't run out of space. We can do this by creating a resize.sh script that does it for us.

cd ~/environment
touch resize.sh

This will create a resize.sh file in your Cloud9 environment, into which you can copy the code below (it comes from the AWS Cloud9 documentation).

#!/bin/bash

# Specify the desired volume size in GiB as a command line argument. If not specified, default to 20 GiB.
SIZE=${1:-20}

# Get the ID of the environment host Amazon EC2 instance.
INSTANCEID=$(curl http://169.254.169.254/latest/meta-data/instance-id)

# Get the ID of the Amazon EBS volume associated with the instance.
VOLUMEID=$(aws ec2 describe-instances \
--instance-id $INSTANCEID \
--query "Reservations[0].Instances[0].BlockDeviceMappings[0].Ebs.VolumeId" \
--output text)

# Resize the EBS volume.
aws ec2 modify-volume --volume-id $VOLUMEID --size $SIZE

# Wait for the resize to finish.
while [ \
"$(aws ec2 describe-volumes-modifications \
--volume-id $VOLUMEID \
--filters Name=modification-state,Values="optimizing","completed" \
--query "length(VolumesModifications)" \
--output text)" != "1" ]; do
sleep 1
done

# Check whether we're on an NVMe filesystem.
if [[ -e "/dev/xvda" && $(readlink -f /dev/xvda) = "/dev/xvda" ]]
then
  # Rewrite the partition table so that the partition takes up all the space that it can.
  sudo growpart /dev/xvda 1
  # Expand the size of the file system.
  # Check if we are on AL2 (which uses XFS); older images use ext4.
  STR=$(cat /etc/os-release)
  SUB="VERSION_ID=\"2\""
  if [[ "$STR" == *"$SUB"* ]]
  then
    sudo xfs_growfs -d /
  else
    sudo resize2fs /dev/xvda1
  fi
else
  # Rewrite the partition table so that the partition takes up all the space that it can.
  sudo growpart /dev/nvme0n1 1
  # Expand the size of the file system.
  # Check if we are on AL2 (which uses XFS); older images use ext4.
  STR=$(cat /etc/os-release)
  SUB="VERSION_ID=\"2\""
  if [[ "$STR" == *"$SUB"* ]]
  then
    sudo xfs_growfs -d /
  else
    sudo resize2fs /dev/nvme0n1p1
  fi
fi

This script automates the resize process through the CLI. To run it, execute the following.

bash resize.sh 20
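If the script ran successfully, a quick sanity check is to confirm that the root file system now reports the new size:

# Should show roughly 20G available on /
df -h /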

Installing Dependencies

In order to deploy to Lambda we must package our code into a zip file that Lambda can digest. For this we require two libraries:

  • aws-sdk-cpp
  • aws-lambda-cpp

First, let's install cmake3 and the necessary packages with the following command in your terminal.

sudo yum install libcurl-devel cmake3 -y

Then we are going to install the AWS C++ SDK.

cd ~/environment
git clone --recurse-submodules https://github.com/aws/aws-sdk-cpp
cd aws-sdk-cpp
mkdir build
cd build
cmake3 .. -DBUILD_ONLY=s3 -DBUILD_SHARED_LIBS=OFF -DENABLE_UNITY_BUILD=ON -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=~/environment/out
make && make install

This will build only the S3 client library, which is the one we plan to use when we deploy to Lambda.
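A side note on BUILD_ONLY: if your function later needs more AWS services, the flag accepts a semicolon-separated list of client libraries. The sketch below (with dynamodb chosen purely for illustration) shows the shape; quote the value so your shell doesn't split it on the semicolon.

# Example only: build the S3 and DynamoDB clients in one pass.
cmake3 .. -DBUILD_ONLY="s3;dynamodb" -DBUILD_SHARED_LIBS=OFF -DENABLE_UNITY_BUILD=ON -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=~/environment/out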

Next, we need to download the C++ Lambda runtime and compile it.

cd ~/environment
git clone https://github.com/awslabs/aws-lambda-cpp.git
cd aws-lambda-cpp
mkdir build
cd build
cmake3 .. -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=OFF -DCMAKE_INSTALL_PREFIX=~/environment/out
make && make install

This will compile the necessary static libraries and store them in the ~/environment/out folder.

Awesome, you should now have all the necessary dependencies to build your Lambda.
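As an optional sanity check, the install prefix should now contain the SDK headers and static libraries. The exact layout can vary; on some platforms the libraries land in lib64 rather than lib.

# Headers and static libs installed by the two builds above
ls ~/environment/out/include
ls ~/environment/out/lib*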

Deploy Lambda (Manually)

In this section I’ll show you how to manually deploy a test function to lambda.

First clone the example project.

cd ~/environment
git clone https://github.com/oconpa/s3-cpp-read.git
cd s3-cpp-read

Once you're in the folder, we can build the package with the following commands.

mkdir build && cd build
cmake3 .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_PREFIX_PATH=~/environment/out
make
make aws-lambda-package-s3Read

If all went well, you should now see a zipped file (s3Read.zip) generated in s3-cpp-read/build, which is what you'd upload to your Lambda. A rough CLI sketch of a manual upload follows; otherwise, the CDK section below will do all of that and more.
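For reference, a manual deployment from the terminal would look roughly like this. The role ARN is a placeholder you'd replace with a role that has the basic Lambda execution policy plus read access to your bucket; --runtime provided selects the custom runtime, and the handler is the name of the packaged executable.

cd ~/environment/s3-cpp-read/build
# Sketch only: substitute your own account ID and role name.
aws lambda create-function \
--function-name s3Read \
--runtime provided \
--handler s3Read \
--zip-file fileb://s3Read.zip \
--role arn:aws:iam::<account-id>:role/<your-lambda-role>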

CDK Pipeline (CI/CD)

So you have your Lambda working and are now ready to put it into a pipeline. For this we're going to use the AWS CDK: with a few terminal commands, your pipeline will be up and running.

From your Cloud9 (the CDK comes preinstalled), pull the CDK pipeline code.

cd ~/environment
git clone https://github.com/oconpa/cdk-pipeline-cpp-lambda.git
cd cdk-pipeline-cpp-lambda

From here you should be able to run the following commands and have your complete infrastructure spun up in the cloud.

If you receive any errors about missing modules when running the cdk bootstrap command below, run npm install again, as a module may not have installed the first time.

npm install
cdk bootstrap
cdk deploy

The commands above will download the relevant dependencies, bootstrap your account for the CDK, synthesize your code into a CloudFormation template, and deploy it to the cloud.
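If you'd like to inspect what will be deployed before it goes out, the CDK can also print the synthesized template and diff it against what's already in your account:

# Write the synthesized CloudFormation template to a file for inspection
cdk synth > template.yaml
# Show what would change compared to the currently deployed stack
cdk diff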

If you have a look in the lib/pipeline-stack.ts file, you’ll see how everything interconnects.

import * as cdk from '@aws-cdk/core';
import * as codecommit from '@aws-cdk/aws-codecommit';
import * as codebuild from '@aws-cdk/aws-codebuild';
import * as codepipeline from '@aws-cdk/aws-codepipeline';
import * as codepipeline_actions from '@aws-cdk/aws-codepipeline-actions';
import * as s3 from '@aws-cdk/aws-s3';
import * as lambda from '@aws-cdk/aws-lambda';
import * as apigw from '@aws-cdk/aws-apigateway';
import * as iam from '@aws-cdk/aws-iam';

export class PipelineStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // The bucket the Lambda function will read from.
    const bucket = new s3.Bucket(this, 'C++ Bucket');

    // The C++ function itself, running on the custom (provided) runtime.
    const func = new lambda.Function(this, 'MyFunction', {
      runtime: lambda.Runtime.PROVIDED,
      handler: 's3Read',
      code: lambda.Code.fromAsset('./lambda/s3Read.zip'),
      environment: {
        'Bucket': bucket.bucketName
      }
    });

    bucket.grantReadWrite(func);

    // Front the function with an API Gateway REST API.
    new apigw.LambdaRestApi(this, 'Endpoint', {
      handler: func
    });

    // The repo the pipeline watches for new code.
    const repository = new codecommit.Repository(this, 'CppOnLambdaRepo', {
      repositoryName: "CppOnLambdaRepo",
      description: 'Where you will store your CDK for C++ Lambda'
    });

    // CodeBuild project: compile the C++ package and publish it to the function.
    const project = new codebuild.PipelineProject(this, 'CppOnLambda', {
      projectName: 'CppOnLambda',
      cache: codebuild.Cache.bucket(new s3.Bucket(this, 'Bucket')),
      environment: {
        buildImage: codebuild.LinuxBuildImage.AMAZON_LINUX_2_3
      },
      buildSpec: codebuild.BuildSpec.fromObject({
        version: '0.2',
        phases: {
          install: {
            commands: [
              'echo Entered the install phase...',
              'yum install -y cmake3',
              'cmake3 --version'
            ]
          },
          build: {
            commands: [
              'echo Entered the build phase...',
              'mkdir build && cd build',
              'cmake3 .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_PREFIX_PATH=out',
              'make',
              'make aws-lambda-package-s3Read'
            ]
          },
          post_build: {
            commands: [
              'echo Publishing code at `date`',
              'aws lambda update-function-code --function-name ' + func.functionName + ' --zip-file fileb://s3Read.zip'
            ]
          }
        }
      })
    });

    // Allow the build to update the function's code, and nothing more.
    project.addToRolePolicy(new iam.PolicyStatement({
      resources: [func.functionArn],
      actions: ['lambda:UpdateFunctionCode']
    }));

    const sourceOutput = new codepipeline.Artifact();
    const sourceAction = new codepipeline_actions.CodeCommitSourceAction({
      actionName: 'CodeCommit',
      repository,
      output: sourceOutput,
    });
    const buildAction = new codepipeline_actions.CodeBuildAction({
      actionName: 'CodeBuild',
      project,
      input: sourceOutput
    });

    // Wire source and build together into a two-stage pipeline.
    new codepipeline.Pipeline(this, 'MyPipeline', {
      stages: [
        {
          stageName: 'Source',
          actions: [sourceAction],
        },
        {
          stageName: 'Build',
          actions: [buildAction],
        }
      ],
    });
  }
}

To check whether the deployment succeeded, navigate to your AWS account and look at the following services:

  • CodeCommit: You should see a new C++ repo. It will be empty upon creation, but this is where you upload the code you want built. If you push the repo we cloned above, s3-cpp-read, into it, a deployment will be triggered (see the push sketch after this list).
  • CodePipeline: In CodePipeline you should see a new pipeline linked to the repo just created. Its first run will have failed, as there is no code in the repo yet, but once you push, it will build and deploy to Lambda.
  • Lambda: The function running your C++ custom runtime will exist. To see that it works, navigate to the function > Test page and invoke it with the following parameters:
{
  "s3bucket": "Name of a bucket in your account",
  "s3key": "Name of a text file or image in that bucket"
}

If you invoke with that, you should receive the file's content.

  • API Gateway: Last but not least, an API will also have been created, linked to the Lambda function.
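Here's roughly what pushing the sample code into the new repo looks like, which kicks off the pipeline. The region is a placeholder, and the two git config lines are only needed if you haven't set up CodeCommit authentication in this environment before.

# One-time setup: let git authenticate to CodeCommit with your AWS credentials
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true
# Push the sample project into the pipeline's repo
cd ~/environment/s3-cpp-read
git remote add codecommit https://git-codecommit.<region>.amazonaws.com/v1/repos/CppOnLambdaRepo
git push codecommit --all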

This file is set up to stand up all the infrastructure necessary for this CI/CD pipeline. The CDK is a very powerful tool when it comes to the DevOps side of things, and the only way to truly appreciate it is to use it in your own projects. So once you've finished and had a look at this project, I'd encourage you to take your learnings and apply them to your personal projects.
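One last tip: when you're done exploring, you can tear the whole stack down from the project folder with a single command. Note that CloudFormation won't delete non-empty S3 buckets, so you may need to empty those first.

# Remove everything the stack created
cdk destroy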

Patrick O'Connor

WorldWide Prototyping Engineer at Amazon Web Services (AWS). My opinions are my own.