A subset of AWS’s plethora of Serverless capabilities

AWS as a Framework

Payam Moghaddam
Jan 5 · 10 min read

Don’t call us, we’ll call you — Hollywood Principle

In reflecting on what Galvanize’s backend framework for developing software really is, it stood out to me that it’s no longer Rails, Play, or Fastify, but rather AWS itself. After all, what is a framework? A framework is a set of code responsible for calling your business logic based on its defined architecture. And when I consider what triggers our company’s business logic, it’s increasingly AWS itself.

  • What calls my AWS Lambda function to execute? AWS.
  • What routes HTTP paths to my Lambda function? AWS.
  • What extracts my log statements into a centralized logging platform? AWS.
  • What extracts metrics from my execution? AWS.
  • What scales my logic as events increase? AWS.

AWS doesn’t sound like an “infrastructure” provider anymore, not even a “platform” provider. It sounds like a framework! And like any other application development framework, the better you understand how it is meant to be used, the better you can harness its potential! And that’s when it became clear to me:

AWS is Galvanize’s framework

In fact, in the grand scheme of things, I suspect the cloud provider you build on will be the distinguishing trait of an application developer going forward.

  • In the 2000s, you’d say you were a .NET or Java developer
  • In the 2010s, you’d say you were a Rails or Django developer
  • In the 2020s, you’ll say you are an AWS developer

It’s a surprising conclusion, especially for people who haven’t been deeply involved in infrastructure development as I have. So allow me to lay out my case; hopefully, by the end of this post, you’ll see both the legitimacy of AWS as a framework and the unique potential it has once you fully leverage it.

Evolution of Cloud Providers

In order to understand how AWS became a framework, it’s important to understand the evolution of AWS and infrastructure development.

Evolution of Cloud providers over time

In the late 2000s, the primary problem Amazon set out to solve was providing “flexible” infrastructure for others. Infrastructure provisioning was a painful process, so a utility-based model for infrastructure was both novel and a huge reduction in complexity for enterprises. During this time, application development frameworks were primarily aimed at enterprises and sponsored by the language creators themselves (e.g. J2EE for Java and ASP.NET for C#). Application development and infrastructure development were entirely separate problems being solved, and no intersecting roles existed that addressed both.

In the 2010s, AWS had matured and contenders such as Ruby on Rails had emerged. AWS started to focus on providing platform capabilities, and application developers architected their applications with some of these capabilities in mind (e.g. S3 for storage). During this period, application development was still oriented more around frameworks such as Rails than around the cloud providers. Cloud providers accelerated development, but they were not central to it. This thinking fuelled automating infrastructure setup with tools such as Chef and Terraform, and standardizing application deployment on that infrastructure using Docker. Essentially, we took the complexities we had in the 2000s and abstracted them away using Convention over Configuration (the Rails philosophy), automation, and standardization (Chef, Terraform, and Docker). Application development, however, was still fundamentally bound to the language framework you picked.

Today, in the 2020s, cloud providers have moved far enough up the technology stack to tackle traditional application framework concerns. After all, with infrastructure provisioning becoming a commodity and platform capabilities plentiful in the market, how can a cloud provider uniquely differentiate itself? This is where AWS Lambda and its ecosystem step in. Rather than depending on language-centric frameworks to support AWS, AWS isolated its customers’ unique value proposition — their business logic — and created an ecosystem that makes it easy to execute that logic while addressing numerous non-functional requirements such as scaling, logging, monitoring, and patching. After all, what makes your company unique to your customers?

  • Is it how you scale? No.
  • Is it how you log? Nope.
  • Is it your patching strategy? Hell no.

Customers neither know nor care what happens behind the scenes. They care about your value proposition, which lives inside your business logic. In fact, if anything, customers probably wish you didn’t waste time on non-business-logic activities so you could provide more direct value instead! This is why AWS is so focused on its Serverless capabilities.

AWS is focused on being your framework for software development.

And if you wholly commit to AWS’s vision, you get unique privileges you can’t get with typical language-centric frameworks! These include:

Automatic Scaling

If there is no event taking place, there is nothing to compute, and thus nothing for you to pay for. This utility-based model for business logic execution liberates companies, big or small, to pay for only what they use.

Tight and automatic scaling of AWS Lambda

Furthermore, it leads to lighter designs as it focuses you on your core business domain rather than non-functional concerns. You just need to configure AWS’s services correctly and then let AWS take care of the scaling, permissions, logging, etc.

Simpler Security

AWS’s Serverless capabilities are regularly patched and uphold a high compliance standard. If you build with such Serverless capabilities, you can avoid a ton of “Serverful” management overhead.

Infrastructure models and their total cost of ownership and maintenance difference.

And with IAM as the backbone of all these services, you can integrate your business logic with them using tight permission boundaries and without static credentials. For example, you can limit what is allowed to invoke your Lambda function, and even automatically create an audit trail of when it is called, with a tiny amount of configuration. That’s a ton of security for very little effort.
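
As an illustration of how little configuration that is, here is a minimal Terraform sketch (resource names such as aws_lambda_function.processor and aws_s3_bucket.uploads are hypothetical and assumed to be defined elsewhere):

```
# Only allow the uploads bucket to invoke this Lambda function; any other
# caller is rejected by the Lambda service itself.
resource "aws_lambda_permission" "allow_uploads_bucket" {
  statement_id  = "AllowInvokeFromUploadsBucket"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.processor.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.uploads.arn
}
```

Invocation auditing can then be layered on with CloudTrail, without touching the business logic itself.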

Integrated Observability

If you build on AWS directly, it is far easier to simply turn on AWS X-Ray and look at AWS CloudWatch Metrics and AWS CloudWatch Logs to infer the internal state of your application. In comparison, what does a default Rails application give you? Logging. Even that needs to be piped separately into a central logging tool like CloudWatch Logs, or into an ELK stack, which is itself a ton of infrastructure to provision and manage. Once you’ve done that, you still have to figure out how to capture meaningful metrics and integrate tracing into your Rails application.

Pillars of Observability in AWS
Extracted from Serverless Observability Presentation
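
To make the “just turn it on” point concrete, here is a minimal Terraform sketch of a Lambda function with X-Ray tracing enabled (the role and artifact locations are hypothetical):

```
# Hypothetical function; logs land in CloudWatch Logs automatically and
# standard metrics (invocations, errors, duration) appear in CloudWatch Metrics.
resource "aws_lambda_function" "api" {
  function_name = "api-handler"
  role          = aws_iam_role.lambda_exec.arn   # execution role defined elsewhere
  handler       = "index.handler"
  runtime       = "nodejs12.x"
  s3_bucket     = "my-artifacts-bucket"
  s3_key        = "api-handler.zip"

  # One block turns on X-Ray tracing; the execution role also needs
  # X-Ray write permissions (e.g. the AWSXRayDaemonWriteAccess managed policy).
  tracing_config {
    mode = "Active"
  }
}
```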

Plethora of Capabilities

AWS has 170+ capabilities you can use to create solutions for your customers. Let’s look at a few concerns you’d have to solve separately with a typical framework such as Rails.

  1. Scheduled Tasks (e.g. cron) — using AWS, you just need to configure a CloudWatch Event to trigger your Lambda on a pre-defined schedule (see the Terraform sketch after the diagram below). In comparison, Rails gives you nothing robust and distributed by default. You can’t rely on a single machine’s cron either, so you end up implementing a complex solution for a very simple problem.
  2. Durable Storage (e.g. S3) — using AWS, it’s super easy to store files in S3’s key-value storage and even have business logic triggered on file uploads! What is the default of any language-centric framework? Local storage. 😟
  3. Async Tasks — using AWS, you can connect an SQS queue to a Lambda so it triggers whenever there is an event. Without AWS, you have to set up dedicated infrastructure for async task storage and processing, and separately manage its scaling as well.
Various input sources for AWS Lambda
An incredible number of event types can trigger your business logic with AWS Lambda (image is from Serverless Applications with Go). This diagram is showing only a subset! 🤯
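
For the scheduled-task case from the list above, a minimal Terraform sketch might look like the following (the function aws_lambda_function.report is hypothetical and assumed to be defined elsewhere):

```
# Fire an event every night at 03:00 UTC.
resource "aws_cloudwatch_event_rule" "nightly" {
  name                = "nightly-report"
  schedule_expression = "cron(0 3 * * ? *)"
}

# Point the schedule at the Lambda function.
resource "aws_cloudwatch_event_target" "run_report" {
  rule = aws_cloudwatch_event_rule.nightly.name
  arn  = aws_lambda_function.report.arn
}

# Allow CloudWatch Events to invoke the function.
resource "aws_lambda_permission" "allow_events" {
  statement_id  = "AllowExecutionFromCloudWatchEvents"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.report.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.nightly.arn
}
```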

Language Agnostic

Unlike application frameworks that are inherently bound to a single language, when you use AWS, you can use the language best suited for your problem. Is one problem easier to solve in Go? Go ahead. (Ba dum tsss 🥁) Is it easier to script with Node? No(de) problem. 🥁

At Galvanize, we use TypeScript as our default language for building front-end SPAs and backend services. Its gradual typing provides the level of type safety we need to scale, and its performance and concurrency characteristics are sufficient for most of our business needs. However, if a better language and ecosystem is suited to a problem, we’ll use that (e.g. Python for ML).

Polyglot opportunity based on problem to be solved

How many application frameworks can you think of that give you such flexibility and such rich capabilities? None.

AWS as a framework is fundamentally about scaling your ability to focus on your core business problem.

That’s a pretty bold statement. Even if you buy into it, though, you may not be convinced that AWS is more productive to develop with than a traditional application framework, nor that it can address all business problems, such as those that cannot tolerate cold starts. If the year were 2018, I’d have agreed with you; however, in 2020, these core problems have been addressed. So let’s tackle these two points next.

Productivity with Terraform

In May 2019, a wonderful thing happened: Terraform 0.12 was released! Terraform is a configuration language that allows you to define your infrastructure as code. Unfortunately, prior to 0.12 the language was quite primitive: it lacked loops, had limited variable capabilities, etc. This led many people to create solutions that “wrapped” Terraform in order to make it easier to use. At Galvanize, we had built an internal Ruby tool that generated Terraform for us, overcoming the lack of loops and complex modularization. Fortunately, with version 0.12, our internal tool was no longer required. Terraform was finally capable enough on its own to configure our complex infrastructure.

To demonstrate this, let’s look at a simple “email publisher” service built with Rails vs. built with AWS. What we want to consider here is the total cost to build it, not just the Ruby portion.

Total number of steps to process background tasks with Rails vs. AWS

In fact, the AWS portion is easy enough that this Gist basically provides the entire implementation! And keep in mind that this AWS implementation has no servers, no patching, baked-in monitoring and logging, auto-scaling, and incredibly low costs.
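
The actual Gist is embedded in the original post. As a rough sketch, the Serverless side of such an email publisher might look something like this in Terraform (all names are hypothetical, and the execution role would also need permission to read from the queue):

```
# Queue that holds the "send this email" jobs.
resource "aws_sqs_queue" "email_jobs" {
  name = "email-jobs"
}

# Hypothetical Node.js function that formats and sends the email.
resource "aws_lambda_function" "email_publisher" {
  function_name = "email-publisher"
  role          = aws_iam_role.email_publisher.arn   # execution role defined elsewhere
  handler       = "index.handler"
  runtime       = "nodejs12.x"
  s3_bucket     = "my-artifacts-bucket"
  s3_key        = "email-publisher.zip"
}

# Lambda polls the queue and invokes the function with batches of messages.
resource "aws_lambda_event_source_mapping" "email_jobs_to_lambda" {
  event_source_arn = aws_sqs_queue.email_jobs.arn
  function_name    = aws_lambda_function.email_publisher.arn
  batch_size       = 10
}
```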

That to me is being productive.

And what if this pattern repeats, or you need many such AWS Lambdas? With Terraform, you can create a module that abstracts this configuration and makes it easy to reuse. You now have a solution that lets your team remain consistent and entirely Serverless. Terraform is the tool that makes AWS as a framework productive.

Cold Starts

Inevitably, an opponent of building directly on AWS will counter that using AWS as a framework means using AWS Lambda, and AWS Lambda has the weak point of cold starts; thus, they cannot use it. Fortunately, this is increasingly irrelevant, thanks to a few advancements:

  • In late 2019, AWS significantly reduced the cold start of a Lambda in a VPC. Previously, it could take double-digit seconds for a Lambda to boot up in a VPC; boot-up times are now comparable to non-VPC Lambdas.
  • In Dec 2019, AWS also introduced Provisioned Concurrency, which keeps your Lambda execution environments warm for a monthly cost (see the sketch after this list). While it diminishes Lambda’s utility-pricing advantage, it retains all the other Serverless characteristics that give us productivity.
  • Lastly, AWS is consistently improving boot-up time. A simple Node.js function can boot up in ~200–300 ms from cold. That’s fast enough for most needs, especially when you consider that cold starts are uncommon for an application with a consistent workload.
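
Provisioned Concurrency itself is only a few lines of Terraform. A minimal sketch, assuming a function aws_lambda_function.api with a published alias named live:

```
# Keep 5 execution environments warm for the "live" alias of the function.
resource "aws_lambda_provisioned_concurrency_config" "api_warm" {
  function_name                     = aws_lambda_function.api.function_name
  qualifier                         = "live"
  provisioned_concurrent_executions = 5
}
```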

These advancements significantly reduce the impact of cold starts, but they don’t eliminate them. Fortunately, with a bit of design and a touch of Terraform, you can abstract this complexity away and have a fallback for when you truly need a Serverful approach.

Abstracting the Execution Model

AWS Fargate provides a containerization solution without any cold-start concerns. Fundamentally, both AWS Fargate and AWS Lambda run an application you package for them. If we focus on creating a common application package model (e.g. a zip) for both, we can create a Terraform module that seamlessly swaps from Lambda to ECS. Consider these two code snippets:
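
The original snippets are embedded as Gists in the post; roughly, the two module calls might look like this (the module sources and inputs are hypothetical):

```
# Run the packaged Node.js app as a Lambda function.
module "lambda" {
  source    = "./modules/lambda"
  name      = "email-publisher"
  s3_bucket = "my-artifacts-bucket"
  s3_key    = "email-publisher.zip"
  handler   = "index.handler"
  runtime   = "nodejs12.x"
}

# Run the same package as an ECS Fargate service instead (no cold starts).
module "ecs" {
  source        = "./modules/ecs"
  name          = "email-publisher"
  s3_bucket     = "my-artifacts-bucket"
  s3_key        = "email-publisher.zip"
  desired_count = 2
}
```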

module lambda takes your S3 zip and uses it to create a Node.js Lambda function. module ecs takes your S3 zip and uses it to create an ECS Fargate service for running your Node.js application! The two look similar, but they are entirely different underneath. This approach unfortunately means you’ll need pre-configured Docker images for such scenarios and will be limited in your integration options, but at least you now have a fallback if cold starts are absolutely problematic for you.

With a bit of abstraction, you can use AWS as a framework without limiting yourself.

Conclusion

It can be a challenging proposition to view AWS as a framework. For many application developers, it is not an intuitive choice. After all, for so many years, AWS was just “infrastructure”, and that was always the “operations” people’s concern, right? However, if we take a step back and examine it holistically, putting first what business we are in and what customer problems we are trying to solve, it is incredibly hard to overlook AWS as the core accelerant to building software faster and in a more reliable and resilient fashion.

At its core, it also asks us to look inside ourselves and ask what really matters to us personally. Are we professionals who identify with a language or a framework and how we wield it to build solutions, or are we problem solvers seeking the best way to address customer needs and our business problems? At Galvanize, I want to be focused on reducing corruption and helping our customers govern more effectively, thereby reducing malpractice and abuse at both the corporate and government level. That’s impactful work! That’s the sort of problem I want to solve! And if AWS lets me solve it faster and more simply, then it’s going to be my framework.

If you identify as a problem solver too, then maybe it’s time for you to start using AWS as a framework as well.
