Capital One Tech

Serverless Computing with Swift

Part 1: Swift and AWS Lambda


Why Serverless Swift?

Setting aside the question of whether serverless computing is worth pursuing at all, why implement serverless systems in Swift?

There are three main reasons that make Swift a good candidate for a Lambda implementation language. First, Swift is a powerful, robust, and expressive language designed to be productive in a wide variety of contexts, including server-side computing. By itself, that’s not much of a differentiator as there are several other languages (Rust, Go) that can be described that way. However, Swift offers two additional possibilities — the opportunity to take advantage of an existing pool of developers, and the opportunity to share code across multiple layers of your system, in particular both the back-end and a mobile client.

Let’s consider an example scenario.

An Example

I’ve decided to form a startup — It’s The Yeast I Could Do, an online bakery specializing in gourmet bread. Since no self-respecting bakery would be caught dead without a robust cloud infrastructure, my first priority is to build a microservice to handle sending receipts to customers.

Specifically, I want a service whose input is a list of items to order. An item is a type of bread and a quantity; for example, three croissants. The input is encoded in JSON. The output is a string representation of the receipt. It lists each ordered item, its sub-total, and a total for the entire order. For now, I won’t worry about making the receipt too pretty.

I start by ignoring the networking and writing code I can use as a command line tool. First, I will create a new directory and use the Swift Package Manager (SPM) to create a Swift application. Note that the application will be named bru.

mkdir bru
cd bru
swift package init --type=executable

Now, I specify the data types. With an eye toward reusability, I define these in a separate module from the main application. I create a directory Sources/bruModels and, in that directory, create the files Item.swift, Order.swift, and Receipt.swift with the following code (complete listing available at

// Item.swift
enum Style: String, Codable {
    case croissant
    case naan
    case pumpernickel
    case rye
}

struct Item: Codable {
    let amount: Int
    let style: Style
}

// Order.swift
struct Order: Codable {
    public private(set) var items: [Item]
}

// Receipt.swift
struct Receipt: Codable, CustomStringConvertible {}

I’m relying on the new Codable protocol in Swift 4 to magically deal with converting data to and from JSON. Properly combining Swift, Codable, and JSON can sometimes be tricky. But I’ll discuss the possible difficulties in another blog post. For this example, the serialization is simple and I can use the auto-generated serialization code.
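To see what that auto-generated serialization code buys us, here is a minimal, stand-alone sketch of the round trip. It duplicates the Item and Style definitions from above so the snippet can run on its own:

```swift
import Foundation

// Stand-alone copies of the types above, so this snippet runs by itself.
enum Style: String, Codable {
    case croissant, naan, pumpernickel, rye
}

struct Item: Codable {
    let amount: Int
    let style: Style
}

// Decoding: JSON text -> Item
let json = "{ \"style\" : \"naan\", \"amount\" : 2 }".data(using: .utf8)!
let item = try! JSONDecoder().decode(Item.self, from: json)
print(item.style.rawValue, item.amount)   // naan 2

// Encoding: Item -> JSON text
let data = try! JSONEncoder().encode(Item(amount: 3, style: .rye))
print(String(data: data, encoding: .utf8)!)
```

Note that no encode or decode methods are written by hand; the compiler synthesizes them from the stored properties.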

With the data types specified, I turn to the order processing. Here’s main.swift which resides in Sources/bru:

import Foundation
import bruModels

let inputData = FileHandle.standardInput.readDataToEndOfFile()
let decoder = JSONDecoder()

func format(_ response: String, payload: String) -> String {
    return "{ \"response\" : \"\(response)\", \"payload\" : \"\(payload)\" }"
}

do {
    let order = try decoder.decode(Order.self, from: inputData)
    let receipt = order.receipt()
    print(format("success", payload: receipt.description))
} catch {
    print(format("error", payload: "In a real app, this would have useful information."))
}

This code performs a simple transformation of input to output. It reads JSON from standard input and uses Swift’s JSONDecoder to deserialize the data into an Order. Then it invokes the receipt() instance method on the Order value to create the receipt and sends it to the process’s standard output.
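The receipt() method itself isn’t shown in the listings above. A rough, stand-alone sketch of the per-item arithmetic might look like the following; the unit prices for naan, rye, and croissant come from the sample output later in this post, while the pumpernickel price is a made-up placeholder:

```swift
import Foundation

enum Style: String, Codable {
    case croissant, naan, pumpernickel, rye
}

struct Item: Codable {
    let amount: Int
    let style: Style
}

// Unit prices: naan, rye, and croissant match the sample receipt below;
// pumpernickel is a hypothetical placeholder.
let prices: [Style: Double] = [
    .croissant: 1.23, .naan: 0.87, .pumpernickel: 0.95, .rye: 0.62
]

// One receipt line per ordered item: quantity, style, unit price, sub-total.
func receiptLine(for item: Item) -> String {
    let unit = prices[item.style] ?? 0
    let subtotal = Double(item.amount) * unit
    let unitStr = String(format: "%.2f", unit)
    let subStr = String(format: "%.2f", subtotal)
    return "\(item.amount) \(item.style.rawValue.uppercased()) @ \(unitStr) = \(subStr)"
}

print(receiptLine(for: Item(amount: 2, style: .naan)))   // 2 NAAN @ 0.87 = 1.74
```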

Before I can build the program, I need to make two small changes to Package.swift, the package manifest file auto-generated when I ran the package initialization command. I need to add a target for bruModels, and specify this new target as a dependency of the main target. See the full listing for the specifics.
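The full manifest isn’t reproduced here, but those two changes amount to something like the following sketch (the tools version and exact layout are assumptions for a Swift 4 project):

```swift
// swift-tools-version:4.0
import PackageDescription

let package = Package(
    name: "bru",
    targets: [
        // The shared data types live in Sources/bruModels.
        .target(name: "bruModels"),
        // The executable target depends on the models module.
        .target(name: "bru", dependencies: ["bruModels"]),
    ]
)
```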

Now I compile as follows:

swift build

Next, I can test it on the command line by feeding in some JSON and seeing what I get back. Create a file, order.json, with the following contents:

{ "items" : [
    { "style" : "naan", "amount" : 2 },
    { "style" : "rye", "amount" : 3 },
    { "style" : "croissant", "amount" : 6 }
] }

Now enter the command:

cat order.json | .build/debug/bru

Sure enough, the test returns:

{ "response" : "success", "payload" :
"Receipt for Order on 2018-01-09 21:07:19 +0000
---------
2 NAAN @ 0.87 = 1.74
3 RYE @ 0.62 = 1.86
6 CROISSANT @ 1.23 = 7.38
---------
Total: 10.98" }

As I mentioned above, it’s not the prettiest output but it suffices for now.

With one simplistic test succeeding, I declare victory and move on to the next step.

Swift and AWS Lambda

As of this writing (January 2018), Lambda supports JavaScript, Python, Java, C#, and Go. Swift is, noticeably, not on that list. But fear not! Node’s child_process module is supported by AWS Lambda, and I can use it to have my Lambda function launch and interact with an arbitrary executable.

I will write a short JavaScript function, typically called a shim, which will be invoked as the Lambda function. The shim launches the Swift executable, captures its output, and returns it as the result of the Lambda call.

In the previous section, I had decided to ignore networking and write the Swift program as a command line tool. That decision pays off because that’s exactly what the shim needs!

Here’s the code:

const spawnSync = require('child_process').spawnSync;

exports.handler = (event, context, callback) => {
    const command = 'libraries/ld-linux-x86-';
    const childObject = spawnSync(command,
        ["--library-path", "libraries", "./bru"],
        {input: JSON.stringify(event)});
    var stdout = childObject.stdout.toString('utf8');
    callback(null, stdout);
};

What does this code do? The child_process library is imported and a function, handler, is exported. handler is the Lambda function: it starts a new process, feeding event to the process’s standard input, then captures the process’s standard output and returns it as the result of the Lambda invocation.

Now you’ve probably figured out that spawnSync is the Node function that starts up a new process. But you probably expected its command parameter to be the Swift executable. Unfortunately, it’s not quite that simple.

Instead, the child process runs the Linux dynamic linker (in this case, invoked via the symbolic link’s target, ld-linux-x86-). What’s a dynamic linker? I’ll quote from the (Linux) manual page: the linker “finds and loads the shared objects (shared libraries) needed by a program, prepares the program to run, and then runs it.”

But why take this Rube Goldberg approach?


Lambda functions run under a specific AMI, and that AMI doesn’t have a Swift compiler. So, I need to build the executable using a version of Linux that does support the Swift compiler and then I have to arrange for the executable to run when executing on Lambda’s AMI.

To do so, I create a zip file with the Swift executable, the JavaScript shim, and all the necessary dynamic libraries needed to run the Swift code. The zip file is uploaded to AWS Lambda and everything’s good to go.

Except, how does one gather up all the correct dynamic libraries?

That’s where Docker comes in. Docker is not strictly necessary; I could find a spare box running the appropriate OS and compile my Swift code there. But using Docker allows me to do all the development on my Mac laptop.

By selecting an appropriate Docker image, we can easily compile our Swift application in a Linux environment and gather the necessary dynamic libraries. Dockerhub has several Docker images to choose from that support the Swift compiler. I chose one named doctorimpossible/swift4ubuntu. The following commands take us through the necessary steps:

docker run -it -v "$(pwd):/bru" doctorimpossible/swift4ubuntu bash
cd bru
swift build -c release --build-path .build/native
mkdir -p .build/deploy/libraries

What do these commands do?

1. Launch the Docker image, making the bru directory available inside of our Docker container and connecting to our container via a shell.

2. Change to the bru directory.

3. Compile a release version of our application using a specified build location.

4. Create a directory in which to place all the necessary dynamic libraries.

Finally, I need to identify all the dynamic libraries involved in running the Swift executable. So, one last, slightly complicated, incantation will do the trick:

ldd .build/native/release/bru \
  | grep so \
  | sed -e '/^[^\t]/ d' \
  | sed -e 's/\t//' \
  | sed -e 's/(0.*)//' \
  | xargs -i% cp % .build/deploy/libraries

First, ldd is run. This utility lists all the dynamic dependencies for the executable listed on its command line (ldd is an acronym for list dynamic dependencies). On Linux, dynamic libraries have the file extension .so. So, to play it safe and guard against potential noisy lines from ldd, its output is piped to grep. Then, in several simple stages, I use the stream editor, sed, to remove all the extra characters from each remaining line so that I am left with a simple file path. Finally, the list of paths is piped into xargs which uses cp to copy the libraries into a specific directory.

(In theory, it should be possible to compile the Swift program statically, thus avoiding the need to find and bundle all the dynamic libraries. However, I have not yet been able to get this to work.)

With all the necessary dynamic libraries collected, all that is needed is to create a zip file with the libraries, the app itself, and the JavaScript shim. So, exit out of Docker (type `exit` at the command prompt), and then:

cd .build/deploy
cp ../native/release/bru .
cat > index.js   # paste in the JavaScript code listed above, then hit Control-D
zip -r *

Now, I upload to AWS Lambda and I will be ready to test my Lambda function. If you have the AWS command line interface (CLI) installed, then creating the Lambda function is done as follows:

aws lambda create-function --function-name bru \
  --runtime nodejs6.10 \
  --role <your-lambda-execution-role> \
  --handler index.handler \
  --zip-file fileb://

Replace <your-lambda-execution-role> with an IAM role that has the appropriate permissions to execute a Lambda function.

Testing the new function is easily done using the CLI. Note the file path for the file order.json mentioned above in the example section and then enter the following command:

aws lambda invoke --function-name bru \
  --invocation-type RequestResponse \
  --log-type Tail \
  --payload file://<path-to-order.json> \
  outputfile.txt

If everything is successful, you will see a JSON snippet printed to your terminal with fields StatusCode (should be 200) and LogResult. The log result is not that important, but if you are curious, you’ll need to decode it from base64 encoding. The receipt created by your Swift program will be in outputfile.txt.
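If you do want to peek at LogResult, Foundation’s base64 support makes the decoding short work. A quick sketch, where the sample value is made up for illustration:

```swift
import Foundation

// A hypothetical base64-encoded LogResult value.
let logResult = "SGVsbG8gZnJvbSBMYW1iZGE="

if let data = Data(base64Encoded: logResult),
   let text = String(data: data, encoding: .utf8) {
    print(text)   // Hello from Lambda
}
```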

If you prefer, you can use the AWS web console to create your Lambda function, upload the zip file, and test the function. Since the zip file is fairly large (25M), you may get a warning suggesting that you upload it via Amazon S3, but it will work.

Next Steps

With the Lambda function installed and tested, the next steps are to integrate it into the rest of your infrastructure. The fact that the function is implemented in Swift is mostly irrelevant to the rest of your system. You can put the function behind Amazon’s API Gateway or configure it with any of the standard AWS triggers such as SNS Topics, DynamoDB events, etc.

More importantly, with an understanding of the process behind creating Swift Lambda functions, and an eye towards reducing tedium, you ought to consider ways to automate this process. An existing project to do so is Hexaville. Although I have not used it, I have read through the code and it looks like a solid approach. It has the added advantage of providing Swift libraries for interacting with DynamoDB and making use of OAuth. Another project worth investigating is Apex. Although it does not currently support Swift, Apex enables you to write Lambda functions in Clojure, Rust, and Go and might provide a useful source of inspiration for improvements to the Swift Lambda process.

If you’re sold on serverless, but looking for additional options, keep an eye open for Part 2, where I will discuss using Swift with Apache OpenWhisk and IBM’s Bluemix.

DISCLOSURE STATEMENT: These opinions are those of the author. Unless noted otherwise in this post, Capital One is not affiliated with, nor is it endorsed by, any of the companies mentioned. All trademarks and other intellectual property used or displayed are the ownership of their respective owners. This article is © 2018 Capital One.




Matthew Burke

Mathematician, Software Developer (at Capital One), Go player.
