A Containerized Web App in AWS with Serverless Microservices

The best way for an architect to guide an organization in its technology selection and solution designs is to obtain hands-on prototyping experience. This practice is particularly important when a technology has no established precedent of wide-scale industry adoption, or when industry offers several viable technologies from which an organization must select. For building modern web applications, Route 53, Certificate Manager, Nginx, Angular, API Gateway, Lambda, and Dynamo DB are but one reasonable set of services for building serverless web applications in AWS, and together they make a good case study for discussing architecture, service usage best practices, and technology maturity generally. This blog post walks through a prototyped application built from those technologies, making technical observations along the way.

In addition to application functionality, the prototype included developing two continuous-integration and continuous-delivery (CICD) pipelines, one for the web application and the other for the build and deploy of Lambda functions. Amazon Developer Tools used include Code Pipeline, Code Build, and Code Deploy, with pipeline execution initiated via GitHub webhooks. Cloud Formation was also used to manage the deployment of AWS service infrastructure, such as API Gateway routes and Dynamo DB table setup. A detailed review of these solutions is shared in CICD for a Containerized Web App in AWS and CICD for AWS Serverless Microservice Implementations, forthcoming. In summary, the web application pipeline builds the Angular project hosted in Git into a Docker image stored in Amazon Elastic Container Registry (ECR), and then deploys the container into a cluster using Elastic Container Service (ECS) via the EC2 launch type. The serverless pipeline pulls Java Lambda code from Git, processes Java annotations on the Lambda function classes and builds them, dynamically generates Cloud Formation templates and the application deployment specification for the Lambda functions, and finally deploys the Lambda functions with gradual traffic cutover to the new or updated versions.

SAAS Web Application Frontend

A primary goal of the prototype is to support a software as a service (SAAS) model where individual instances of the end user web application and underlying platform are deployed in AWS and run on behalf of customers.

The choice of user interface framework and/or libraries used to build the frontend was not really a focus of the prototype. Many folks have written opinions on Angular versus React versus Vue and others, so check out those articles. The primary rationale for selecting Angular is highly specific to my own needs, chiefly:

  • Initial coding of a full-featured user interface by an engineer (me) who prefers the speed of development that comes from having to learn only one framework, as opposed to preferring technology flexibility at the cost of the additional effort to build from a cobbled-together collection of Javascript libraries.
  • Support for eventual large-scale development by teams of engineers where object-orientation and strong typing help ensure the web application has enterprise-class quality and a maintainable codebase.
  • Basic requirements generally met by all viable alternatives including full feature set, established industry use, high quality, active developer community, etc.

The web application uses as its base the SmartAdmin dashboard template purchased from Wrap Bootstrap. For $35 you get all of the scaffolding needed for a complex dashboard application and many user interface widget examples, all of which are available in the technology stack of your choosing (Angular, React, HTML, PHP, others). The template for Angular offers three starter projects for building a dashboard application: Full is configured as a concrete implementation showcasing all of the user interface bells and whistles, Blank sets up the menus and other foundational parts of the application, and Lite is similar to Blank but minimizes the number of third party Javascript libraries.

As a starting point for development, the picture below shows the web application frontend prototype built from the Blank version of the SmartAdmin Angular project. The custom part of the user interface is the Toolbox menu item and the corresponding form found in the main content area. The Javascript and HTML code is shared in Appendix 1 at the end of this article. The application was written using code snippets from articles, Git, and Stack Overflow. Angular offers many alternatives for HTML-to-Javascript data bindings as well as alternatives for implementing dynamic behavior, including responding to user interface events. Finding good code samples was non-trivial, foreshadowing the experience of a developer who is new to Angular and does not yet have a toolbox of his/her own examples to reuse.
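
To give a flavor of the bindings used in the form, here is a minimal Angular component sketch with a two-way data binding and a click event binding; the component and property names are hypothetical rather than the actual Appendix 1 code:

// toolbox-form.component.ts: hypothetical sketch, not the Appendix 1 source
// (two-way binding via ngModel requires FormsModule in the application module)
import { Component } from '@angular/core';

@Component({
  selector: 'app-toolbox-form',
  template: `
    <input [(ngModel)]="apiName" placeholder="API name" />
    <button (click)="onSubmit()">Submit</button>
    <p>Current value: {{ apiName }}</p>
  `
})
export class ToolboxFormComponent {
  apiName = '';

  onSubmit(): void {
    // the real form hands this value off to a service that calls the platform API
    console.log('Submitting API metadata for', this.apiName);
  }
}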

Skeleton Web Application User Interface Prototype

The Angular application is built and run locally using Node. The template instructions were incomplete and did not include the requisite .angular-cli.json configuration file. Here are the steps for getting the application built and running, executed from within its root directory:

npm install
npm install -g @angular/cli
# add an .angular-cli.json (see the sketch below)
npm run build
ng build
ng serve
# surf to localhost:4200
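
Since the template did not ship with the file, the following is a minimal .angular-cli.json sketch of the form used by Angular CLI 1.x; the project name and file paths are assumptions and must be adjusted to match the actual project layout:

{
  "$schema": "./node_modules/@angular/cli/lib/config/schema.json",
  "project": { "name": "smartadmin-webapp" },
  "apps": [
    {
      "root": "src",
      "outDir": "dist",
      "index": "index.html",
      "main": "main.ts",
      "polyfills": "polyfills.ts",
      "tsconfig": "tsconfig.app.json",
      "prefix": "app",
      "styles": ["styles.css"],
      "scripts": []
    }
  ]
}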

For the purposes of the prototype, all the web application really needs to do is make an API call with some JSON data. The web application was built from a dashboard template because it is eventually meant to serve as a foundation for building a sophisticated software development tool, the first product of a start-up. The form shown in the picture above is for storing metadata for the APIs from which the application is built — some of the tool functionality is used to design the tool itself but is not the main focus of the application’s feature set. The user fills out a form to describe a new API and then clicks submit to save the information.
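
As a sketch of that submit path, an Angular service along the following lines can post the form contents as JSON to the platform API; the endpoint URL and payload shape are placeholders rather than the prototype's actual contract:

// api-metadata.service.ts: hypothetical sketch of the submit call
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

@Injectable()
export class ApiMetadataService {
  // placeholder endpoint; the real URL is the API subdomain managed by Route 53
  private readonly createApiUrl = 'https://api.example.com/apis';

  constructor(private http: HttpClient) {}

  createApi(metadata: { name: string; description: string }): Observable<Object> {
    // POST the form contents as a JSON body over HTTPS
    return this.http.post(this.createApiUrl, metadata);
  }
}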

Enabling Custom Web Application Frontend Development

Part and parcel of offering a SAAS solution these days is providing customers with access to its underlying functionality via a high-quality portfolio of REST APIs that enables customer automation use cases while also allowing customers to build their own application experiences. Principally then, SAAS is multi-product in nature: (Use Case 1) the primary web application purchased from the provider and (Use Case 2) the other applications developed by the customer. The deployment architecture diagram below depicts this client-side story. For Use Case 1, an Application End User interacts with the primary Angular Web Application. For Use Case 2, a customer-developed application labeled Third Party Platform API Client is used by a Third Party End User.

Deployment Architecture

Deployment Architecture for a Container-Based Web Application with Serverless APIs and a NoSQL Database

Now let's describe each part of the solution, focusing on the flow where the End User uses the Angular Web Application (Use Case 1) and walking sequentially through the deployment architecture from the client layer all the way down to the persistence layer.

End User Request of Web Application Functionality. The End User uses a web browser to request and interact with the Web Application over HTTPS.

DNS Resolution of Web Application Pages and API Endpoints. The Web Application endpoints are all from a single domain that is managed by Route 53. The domain was originally purchased through my GoDaddy account some time ago. Domain transfer support in Route 53 made reallocating ownership fast and painless. Route 53 is also used to define a subdomain for exposing API endpoints.

Secure the Web Application and Platform API Traffic Channels. The Web Application and Third Party Platform API Clients send their requests over HTTPS using TLS 1.2 and server-side authentication. Server-side certificates were issued by the AWS Certificate Manager (ACM), one per region as required. The domain ownership verification process was straightforward since the domain was under the control of Route 53 for the AWS implementation account: all that was necessary was to add a record set to the domain with metadata that was automatically verified by ACM, after which it generated the certificate. The AWS console provides full support for this workflow. ACM is integrated with several other AWS services, making certificate sharing easy in general and particularly when configuring API Gateway to receive HTTPS requests.
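
For reference, the same certificate request can be driven from the AWS CLI; the domain names below are placeholders for the actual Route 53 managed domain:

# Request a DNS-validated certificate in one region (repeat per region as needed)
aws acm request-certificate \
  --domain-name "app.example.com" \
  --subject-alternative-names "api.example.com" \
  --validation-method DNS \
  --region us-west-2

# Look up the CNAME validation record to add to the Route 53 hosted zone
aws acm describe-certificate --certificate-arn <certificate-arn> --region us-west-2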

Optimize Web Application and API Performance with Latency-Based Routing. Route 53 is configured to route application requests to the web server in the region with the lowest current latency for traffic coming from the location of the client. The initial implementation routes traffic to either US-WEST-2 or US-EAST-2 using an active-active pattern. The same is done for all incoming API invocations.
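
A latency-based alias record for one region looks roughly like the change batch below (a sketch; the record name, hosted zone ID, and ALB DNS name are placeholders), with a second record created for the other region under its own SetIdentifier and Region:

{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": "app-us-west-2",
        "Region": "us-west-2",
        "AliasTarget": {
          "HostedZoneId": "<ALB hosted zone ID>",
          "DNSName": "<ALB DNS name>",
          "EvaluateTargetHealth": true
        }
      }
    }
  ]
}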

Recover from a Region-Level Outage. Route 53 and Lambda are used to detect and respond to region-level failures. Route 53 periodically calls region-level health checks for US-WEST-2 and US-EAST-2 that are implemented via Lambda functions. For the prototype, these Lambda functions are stubs that always return an HTTP status code of 200 (healthy), deferring the work of devising the right set of system states to consider when computing whether to fail over to an alternate region. Different health checks are used for web application traffic and API traffic since the web layer with its EC2 instances can be unhealthy while the underlying API layer remains healthy.
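
The stub behind the health-check endpoint can be as simple as the following Java handler, a sketch assuming Lambda proxy integration behind API Gateway; the class and package names are hypothetical:

// HealthCheckLambda.java: hypothetical stub that always reports healthy
package com.example.health;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;

public class HealthCheckLambda
        implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    @Override
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent request, Context context) {
        // Placeholder: real logic would inspect regional system state before answering
        return new APIGatewayProxyResponseEvent()
                .withStatusCode(200)
                .withBody("{\"status\":\"healthy\"}");
    }
}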

Receive Requests from the Internet. A Virtual Private Cloud (VPC) provides a secure network boundary for web application hosting. It is composed of a public subnet with an Internet Gateway that receives traffic from anywhere in the world and a Bastion host for administrative access to the EC2 web application hosts deployed in the private subnet.

Load Balance Web Application Requests. The Application Load Balancer (ALB) is in 3 public subnets to make it highly available and able to receive traffic from the Internet Gateway. The Internet Gateway forwards inbound Internet traffic to the ALB that fronts a cluster of web application servers spread across three availability zones. To determine which server receives the request, the load balancer uses the standard “least outstanding requests” algorithm. The load balancer terminates the TLS connection and forwards requests to the web server layer over HTTP.

Secure Access to the VPC Resources. Though not shown in the deployment architecture diagram, three security groups and two route tables are used to regulate the inbound and outbound traffic of resources. They follow well-established security and deployment architecture best practices:

  • The Public security group permits the load balancer to receive traffic from anywhere and send traffic anywhere.
  • The Bastion security group allows the Bastion host to receive inbound SSH traffic from anywhere while permitting outbound SSH traffic to hosts (web servers) protected by the Private security group.
  • The Private security group only allows inbound traffic from resources in the Public security group.
  • The Public route table enables the VPC to receive Internet traffic via the Internet Gateway.
  • The Private route table enables instances in the private subnet to send outbound traffic through the NAT Gateway.

Containerized Web Server Cluster Receives Request. Web application pages are hosted on a cluster of containerized Nginx servers using the Elastic Container Service (ECS) and the EC2 launch type. For this launch type, customers are responsible for building and managing the VPC resources needed to host Nginx, unlike the Fargate launch type where Amazon builds and manages VPC resources on behalf of customers. Like Lambda, Fargate offers serverless compute (Amazon-managed EC2 instances), but the functionality is encapsulated within and run as a Docker container, whereas a Lambda function typically encapsulates only a snippet's worth of application or system code. While at first glance one might assume that Fargate should be preferred over the EC2 launch type to offload as much work as possible to the Amazon infrastructure, Fargate places limitations on the ability to customize the container hosting infrastructure and on the sophistication of the algorithms used to manage and place containers. As is true with other Amazon services that abstract away complexity to make building software easier for new and/or small-sized adopters of AWS, Fargate will likely not be the right choice for even moderately sized enterprises with even moderately complex runtime container management and optimization needs.

Conceptual Design for ECS EC2 Launch Type Deployments

For container deployments using the EC2 launch type, the easiest way to understand the solution offered by Amazon is to consider it as having two distinct parts: (1) the networking and compute hosting infrastructure contained within a VPC and (2) the container deployment and management capabilities offered by ECS. The diagram above is meant to be an easy-to-understand conceptual model for the EC2 launch type; it is not a strict technical model, and it strategically omits Amazon resources that are part of the solution. When starting from scratch, a good approach to development is to first build the VPC manually, test end-to-end connectivity, and then use Amazon's Cloud Former to generate the Cloud Formation template that serves as a baseline for instantiating the VPC. For the ECS part of the solution, use the AWS console to create an ECS cluster whose configuration reuses the requisite VPC resources.

In addition to the VPC topology previously described, the prototype includes autoscaling for web server instances that ensures there is always one host running in each availability zone at any moment in time. This autoscaling strategy is a placeholder for eventually using Cloud Watch runtime execution events to trigger policy-based scaling. Note that for the EC2 launch type, horizontal scaling is supported at two levels: (1) EC2 instance scaling and (2) ECS service scaling, the latter of which is discussed shortly.

VPC configuration is mostly but not completely agnostic to container-based deployments. EC2 instances must be granted permission to interact with ECS via an IAM role. EC2 instances must have an ECS container agent running on them at all times; this can be accomplished by building instances using Amazon-provided ECS-optimized AMIs, or alternatively the agent can be installed during instance boot-up from any other AMI. Lastly, the EC2 instances need to be populated with metadata that tells the agents to which ECS cluster the instances belong. Setting the cluster name is easy to do with a line of user data script:

#!/bin/bash
echo "ECS_CLUSTER=web-app-cluster" >> /etc/ecs/ecs.config

The AWS console wizard for building an ECS cluster provides options for creating a new VPC or using one that already exists. In either case, a cluster is created that has a one-to-one relationship with a VPC, effectively associating the cluster with the target computing infrastructure used to deploy containers. A cluster can host containers deployed using either or both of the EC2 and Fargate launch types.

When creating a cluster using an existing VPC, it is only necessary to tell the cluster which subnets to use and the security group used to regulate access to the EC2 instances. With the VPC and its instances already running, ECS creates the cluster and uses those instances; it is smart enough to not spin up a second set of instances based on the information collected by the wizard.

Starting with a Docker image and working our way back to the concept of a cluster, ECS integrates with several Docker image repositories. For this prototype, the web application Docker image is uploaded to the Amazon Elastic Container Registry (ECR), as part of the automated CICD process, for use by ECS. Ultimately, what gets deployed on EC2 hosts are containers, where a container is a running instance of a Docker image.

Container deployment and management, aka container orchestration, is handled by ECS using three key constructs: the task definition, the task, and the service.

A task definition is the blueprint that determines which containers to deploy and how to deploy them. The definition provides the deployment details for up to 10 different containers that should be functionally related and typically share runtime dependencies with one another, such that it is wise to deploy and manage the lifecycles of these containers as a single unit. The details found in a definition include the Docker image location(s), container CPU and memory requirements, inter-container networking configuration and port mappings, container persistent storage needs, execution permissions, failover policy, etc.

For the prototype, a minimal task definition, shared in Appendix 2, is all that ECS needs to deploy the container that runs the Nginx web application server. Have a look at that appendix to get a fuller appreciation of what a task definition can specify. Though not used for the prototype, task placement strategies and task placement constraints will likely be used in future versions of the prototype to provide more intelligent task placement onto EC2 instances. A strategy instructs ECS to allocate tasks to optimize system qualities including CPU utilization, memory utilization, and uptime (availability), while also offering the flexibility to determine task placement based on custom attributes of EC2 instances set during boot-up. Constraints tell ECS what qualities an EC2 instance must have for it to run a task; tasks are only placed on instances with these qualities.
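
For reference, placement preferences are expressed as small JSON fragments in the service or run-task configuration; the following sketch (the attribute expression is illustrative) spreads tasks across availability zones, binpacks on memory, and restricts placement to a particular instance family:

"placementStrategy": [
  { "type": "spread", "field": "attribute:ecs.availability-zone" },
  { "type": "binpack", "field": "memory" }
],
"placementConstraints": [
  { "type": "memberOf", "expression": "attribute:ecs.instance-type =~ t2.*" }
]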

A task is the instantiation of a task definition that results in the deployment of containers onto EC2 instances. ECS performs container orchestration through the management of running tasks and also through the use of services.

An ECS service has a one-to-one relationship with a task definition and offers two features: it ensures that the right number of tasks remain running in the case of failure and during task redeployment, and it scales the number of tasks up or down as set forth in the service configuration. If neither of these features is needed, a task can be deployed directly into an ECS cluster without the use of a service, although for production workloads this is most likely not a good practice. Within a cluster, any number of services may run.
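
As a sketch, creating such a service from the CLI looks roughly like the following; the cluster, task definition, and target group values are placeholders:

aws ecs create-service \
  --cluster web-app-cluster \
  --service-name web-app-service \
  --task-definition web-app-task \
  --desired-count 3 \
  --launch-type EC2 \
  --load-balancers "targetGroupArn=<target-group-arn>,containerName=nginx,containerPort=80"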

Returning to how the containerized web server cluster receives requests, ECS handles dynamic port mappings for EC2 instances in the cluster. It dynamically adds listeners to the ALB, mapping an EC2 port to the container port. Without dynamic port mappings, only one instance of a given container could run on an EC2 instance, using the one and only port defined by the task definition. Configuring dynamic port mappings is as easy as setting the host port attribute of the task definition to 0. The ALB forwards traffic to the container port, where Nginx is waiting to service it.
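
Within the task definition's container definition, that setting is just the following fragment (a sketch; container port 80 assumes the Nginx default):

"portMappings": [
  { "containerPort": 80, "hostPort": 0, "protocol": "tcp" }
]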

Nginx Serves Static Web Pages Containing Javascript Calls to APIs. All of the web application pages are static to avoid server-side computation of HTML in real time, which improves execution efficiency and scalability. Instead, pages are composed of Javascript that makes calls to the same APIs used to build customized third party application experiences. One can correctly argue that this architecture approach could lead to chatty client interactions, especially for experiences that require multiple API calls. The remedy for this problem is to offer an API query language (GraphQL) for use when coding web pages, which is the plan for a future version of this prototype. Use of a query language should also have the added benefit of further isolating the client side from API changes.

Web Application Page Renders, Initiating API Platform Calls as Needed. For the prototype, the web application page is rendered in the browser and awaits user input to fill in the form that collects API data. On form submission, a platform API call is made over HTTPS to persist this input; Route 53 receives the request and forwards it to a healthy instance of API Gateway.

API Gateway Forwards the Request to the API Implementation, a Lambda Function. API Gateway is used to make REST API endpoints available for consumption. It has nice integration with ACM, making the provisioning of certificates and the configuration of channel-level encryption easy. The AWS console was used to define API endpoints (resource path + HTTP method), with the corresponding Cloud Formation template still under development at the time of writing this article.

The architecture approach is to minimize the amount of configuration and business logic placed within API Gateway. This includes not using the embedded Apache Velocity feature for request and response transformations, an approach reminiscent of the days of SOA ESBs. Instead, Lambda proxy integration is used to have API Gateway forward incoming HTTPS requests to Lambda functions containing the API implementation, where the event received is a JSON object with the HTTPS request headers and body translated automatically.
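
For reference, the event a proxy-integrated Lambda function receives, and the response it must return, are shaped roughly as follows (abbreviated sketch; the values are placeholders):

{
  "resource": "/apis",
  "path": "/apis",
  "httpMethod": "POST",
  "headers": { "Content-Type": "application/json" },
  "queryStringParameters": null,
  "body": "{\"name\":\"example-api\"}",
  "isBase64Encoded": false
}

{
  "statusCode": 200,
  "headers": { "Location": "https://api.example.com/apis/<id>" },
  "body": "{}"
}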

API Gateway has embedded support for Swagger. Although startlingly common in industry, Swagger should not be used as the primary source of API contract documentation; instead, its use should be limited to testing API invocations against a sandbox environment. Best practices for API contract documentation, tooling, and an API-first design process are deferred to a future blog post.

Invoke the Lambda Function-Based API Implementation. The primary use case Lambda functions were designed for was to provide developers with a convenient way to run small portions of code in response to system and application events generated at runtime, where standing up a full runtime environment for executing the code would be relatively expensive. The application of Lambda functions has grown over time, fueled by industry enthusiasm for serverless computing.

As has been explained, API Gateway is well integrated with Lambda. It initiates function execution, passing the HTTPS request as a Lambda event. It's important to point out that building API implementations from Lambda functions in this way means losing the first-class coding support that would otherwise be available when using a JAX-RS compliant object-oriented framework (or similar) whose feature set is rich and dedicated to API integration layer coding. Amazon labs provides some code examples in Git that have a Lambda function spin up a JAX-RS or Spring Boot Tomcat container, presumably so that API implementations can gain access to the features of these frameworks. It strikes me as a hack for a Lambda function to run a container, and I worry about negative lifecycle consequences that impact system execution, such as even less responsive first-time execution of Lambda functions.

Rather than coding one-off Lambda functions for each API implementation, the prototype provides a lightweight object-oriented framework for building Dynamo DB-backed Lambda functions for REST entity APIs. This framework enables reuse across API implementations while also speeding up development. This is possible, first, because API implementations have the same need for code that handles basic marshaling and un-marshaling of data as it proceeds from API Gateway to Lambda to Dynamo DB and back along the corresponding return path. Second, by standardizing the behavior of entity APIs, foundational code to create, read, update, and delete resource state can be reused across API implementations.

Lightweight OO Framework for Building REST Entity APIs from API Gateway, Lambda, and Dynamo DB

The diagram above splits the codebase into the Platform OO Framework that serves as the foundation for building entity APIs and the OO Framework Extensions that encapsulate API-specific code. The CreateAPILambda and CreateAPI classes are an example of how to use the framework, collectively having the responsibility to persist the API data collected by the prototyped web application form. For details on the implementation, Appendix 3 includes this code and also the framework code. Here is a summary of responsibilities for each class, followed by a condensed sketch of the extension classes:

  • LambdaAPIProxy. This is the base class of the Lambda function that orchestrates the API implementation. The handleRequest method contains all of the marshaling and un-marshaling code needed to serve as a bridge between API Gateway and Dynamo DB: it receives the incoming API request, forwards this to a class implementing the API interface, receives the response from that class, and then forwards the response back to API Gateway, performing transformations as needed.
  • CreateAPILambda. The primary purpose of this class is to be a concrete class of the LambdaAPIProxy that determines the specific API implementation class (CreateAPI) to call when the Lambda function runs. Note that this class also provides the ability to specify Lambda deployment configuration via custom Java annotations, a capability discussed more in CICD for AWS Serverless Microservice Implementations.
  • API. This interface provides an abstraction between Lambda and the API implementation code with API behavior placed within its invoke method. The invoke method takes as input a Request object and returns a JSONObjectResponse. The idea is to make it easier to evolve away from the use of Lambda if needed by not placing this code directly in the Lambda function.
  • Request. This class is used to pass in all of the information found in the HTTP request.
  • JSONObjectResponse. This class contains the HTTP status code, headers, and body for return to the client.
  • CreateAPI. The LambdaAPIProxy calls the invoke method of CreateAPI. CreateAPI is a subclass of DynamoCreateAPI, which is a subclass of DynamoRESTAPI, where each level of the class hierarchy encapsulates reusable code. CreateAPI sets the resource ID name used when persisting its state and also the name of the Dynamo DB table. It delegates the responsibility to do the actual persisting to its superclass DynamoCreateAPI.
  • DynamoCreateAPI. This class encapsulates the code that persists resource state via a Java SDK call that inserts a record in Dynamo DB.
  • DynamoRESTAPI. This class encapsulates the code to connect to Dynamo DB via the Java SDK and also to build the JSONObjectResponse.
  • Dynamo*API. Not used for the prototype, these classes encapsulate foundational code for implementing the remaining entity API types, following a similar subclassing pattern as DynamoCreateAPI.
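
As promised, here is a condensed sketch of what the two extension classes might look like; it is an approximation based on the responsibilities described above rather than the actual Appendix 3 source, so constructor arguments and method names are assumptions:

// CreateAPILambda.java: sketch of the concrete Lambda entry point (approximation)
public class CreateAPILambda extends LambdaAPIProxy {
    // Tells the proxy which API implementation class to invoke at runtime
    @Override
    protected API getAPI() {
        return new CreateAPI();
    }
}

// CreateAPI.java: sketch of the API implementation that persists the resource
public class CreateAPI extends DynamoCreateAPI {
    public CreateAPI() {
        // Sets the resource ID name and the Dynamo DB table name; the inherited
        // invoke(Request) method in DynamoCreateAPI performs the actual insert
        super("apiId", "API");
    }
}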

Summarizing, this prototype has experimented with the use of Lambda for an enterprise-scale portfolio of REST APIs. It's my opinion that a large enterprise should not rely so heavily on Lambda until there is a full ecosystem of development and management tools; tempered adoption is my current recommendation. I detail a set of use cases where Lambda is appropriate and another set where it is inappropriate in Judicious Adoption of the AWS Lambda Serverless Architecture, forthcoming.

Persist API Resource State Using Dynamo DB. Covering the use of Dynamo DB in more detail, its first-class support for storing JSON documents, low latency, and high scalability make it a great choice for persisting REST API resource state. The Cloud Formation template for Dynamo DB, found in Appendix 4, creates and configures tables, including setting up read/write capacity, the partition (hash) key, the sort (range) key, and global secondary indexes. Each instance of a resource that needs to be persisted is stored as an item in a table for that type of resource (e.g. API in the prototype). The primary key is a composite of the partition key, a unique ID generated by Platform OO Framework code, and the sort key, a resource instance integer that is incremented whenever a modification of the item occurs. The global secondary indexes are not used by the prototype but are the starting point for supporting search use cases against resource attributes other than the primary key, e.g. the visibility of an API. Finally, global tables that provide cross-region data replication were configured via the AWS console since this cannot yet be done using Cloud Formation.
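
For illustration, the kind of Java SDK call that DynamoCreateAPI makes to insert an item looks roughly like the following; the table and attribute names are assumptions consistent with the description above:

// Sketch: insert one API resource item into Dynamo DB (names are illustrative)
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.PutItemRequest;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class DynamoPutExample {
    public static void main(String[] args) {
        AmazonDynamoDB dynamo = AmazonDynamoDBClientBuilder.standard().build();

        Map<String, AttributeValue> item = new HashMap<>();
        item.put("apiId", new AttributeValue(UUID.randomUUID().toString()));    // partition key
        item.put("version", new AttributeValue().withN("1"));                   // sort key
        item.put("document", new AttributeValue("{\"name\":\"example-api\"}")); // JSON payload

        dynamo.putItem(new PutItemRequest().withTableName("API").withItem(item));
    }
}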

The Lambda Function API Implementation Finishes Execution. Once Dynamo DB successfully creates the new item to store the API resource state, all that’s left to do is return a JSONObjectResponse with status code set to 200 and the location header set to the URL of the newly created resource. The LambdaAPIProxy base class takes care of translating the response object into the format needed by API Gateway.

API Gateway Returns the HTTPS Response. API Gateway receives the response from the Lambda function call, completes the pass-through of the Lambda proxy integration, and returns the HTTPS response to the web application client.

Appendix 1: Angular Web Application Code

Appendix 2: Task Definition

Appendix 3: API Implementation Code

Appendix 4: Dynamo DB Cloud Formation Template