Single Page Application Build and Deployment

Brendan · Slalom Technology · Jan 10, 2017

In my previous post, I talked about how the modern web application development paradigm has changed. I gave some fairly simplistic examples of how projects were set up and structured. The next two posts dive into more detail: build and deployment technologies, and reactive programming. This post focuses on the former, single page application build and deployment models.

I not so nostalgically remember the days of configuring IIS on a Windows server to deploy an Angular or Backbone application. It felt like a lot of work to serve static webpages. Luckily, that has changed. The build and deployment tools available today have made single page application build and deployment much more efficient and much easier. In this post, we will take a look at compiling an application with webpack, dependency management with Docker, and deployment via S3, CloudFront and NGINX.

Application Builds and Bundling with webpack

There are many tools out there to handle building and bundling. If you look at the Angular 2 documentation, it largely references a SystemJS approach. There is also documentation for using webpack and a CommonJS approach. I will cover webpack in this post because I have used it and I like it.

Webpack is a JavaScript bundler. You specify configuration files, and webpack parses your files, locates dependencies via the import statements in your application and constructs a dependency graph. The advantage of this model is that you can create bundles that contain only the resources required to run a particular page. Just in case you glossed over that sentence, I repeat: at runtime, only the resources required for a specific page are loaded. Additionally, we can use plugins to minify and consolidate common code.

So how does webpack work? It starts with configuration files. In a configuration file, the entry points and output files are specified. Webpack parses the import statements, builds a dependency graph and bundles the application file(s). It is common practice to segment entry points and outputs. Common segments include App for volatile application source and Vendor for more stable vendor files. In the Angular 2 example, after the output files are created they are inserted into the index.html.

From the base setup, webpack can be customized to bundle applications based on requirements and file types. There is a wide variety of plugins available to handle many common build tasks. File processing is usually accomplished through loaders; common loaders handle CSS or SCSS, images, fonts and so on. You can also chain loaders together to perform a sequence of tasks on a particular file.

Webpack also supports configuration per environment. A common practice is to have development, distribution and test configurations for an application, each building from a shared common configuration.

For instance, a development build might configure a local server; a distribution build might minify and uglify files and write the output to a dist folder; and a test configuration might load and execute spec tests without loading CSS or HTML.

As you have seen, webpack is a great tool for bundling and configuring applications for development, production and test environments. It manages the dependencies of a JavaScript or TypeScript based application and handles many processing tasks via loaders and plugins. What webpack does not manage is external application dependencies. For that, we look to container technologies such as Docker.

External Dependency Management with Docker

Application developers primarily focus on application development. Seems pretty obvious, right? Many times this involves creating a local development server and running the application. Grunt, Gulp, webpack and npm have made the process of creating a local development server pretty trivial for single page applications. However, what happens when an application relies on one external API? What if it relies on 10 external APIs? The application now has external dependencies that can affect application build and startup.

As any application developer knows, dependencies can be tricky. Any number of factors, such as an incorrect API version, can lead to wasted time or broken code. So how are dependencies managed? Additionally, there is a lot of overhead that comes with creating shared or individual development environments in the cloud (and even more to do this on premises). That could be the subject of many DevOps-related posts, so I will narrow this one to local dependency management with Docker.

I put together a simple two-container application. The first container is an NGINX container running an Angular 2 application that maps and searches food truck data for the city of Boston. The second container is a Node Express server, which serves the food truck data. To manage the dependency of the Angular 2 application on the Node Express API, I am using Docker.

Docker is a container-based technology that isolates a subset of the file system. With Docker you create images, which specify the operating system, frameworks and packages required for a specific container. You can create custom images, or you can use an image registered in a public or private registry. The specifics for a Docker image are contained in a Dockerfile similar to the one below.
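
The original post showed the Dockerfile as an image; a minimal sketch matching its description might look like the following (the nginx.conf and dist paths are assumptions):

```dockerfile
# Pull a preconfigured NGINX image from the public Docker registry
FROM nginx

# Replace the default configuration with one that registers the webserver
COPY nginx.conf /etc/nginx/nginx.conf

# Copy the application's dist directory to the html directory NGINX serves
COPY dist/ /usr/share/nginx/html
```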

This Dockerfile pulls from a preconfigured image registered in the Docker registry. It copies in a new NGINX configuration file that registers the webserver, and copies the dist directory for the application to the html directory.

Running the below command executes the Dockerfile script and builds the image.
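
The command was shown as an image in the original post; it would have been along these lines (the image name is an assumption):

```shell
# Build an image from the Dockerfile in the current directory,
# tagging it so it can be referenced by name
docker build -t food-truck-app .
```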

Now that an image has been created, the run command brings up an NGINX server with the configuration specified in the Dockerfile. The -d option runs the container detached, and the -P option maps the container's exposed ports to open ports on the host device.
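
Again as a sketch, with the image name assumed:

```shell
# -d runs the container detached; -P maps the container's exposed
# ports to random open ports on the host
docker run -d -P food-truck-app
```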

The Angular 2 application can now be accessed through the mapped host port. That was pretty easy. However, the dependencies for the entire application ecosystem are not yet being managed. For an application with two dependencies, the manual process of building each container is not much overhead. But what if it is a distributed microservice system? What if some services have startup dependencies, e.g. a Redis cache or a database? Manually building each dependency would be time consuming and would require knowledge of the entire system.

Enter Docker Compose. Docker Compose provides a way to specify startup scripts for Docker containers. A docker-compose.yml file specifies how to build each service and the dependencies for that service. Under the hood, Docker Compose starts services in the correct order to satisfy those dependencies.

Let’s take a closer look at a Docker Compose file.
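
The compose file was shown as an image in the original post; a sketch matching its description (service names and build paths are assumptions) might be:

```yaml
version: '2'
services:
  food-truck-api:
    # Node Express server that serves the food truck data
    build: ./server
    ports:
      - "3000:3000"
  food-truck-app:
    # NGINX container serving the Angular 2 application
    build: ./client
    ports:
      - "80:80"
    # Restart the server when the application is redeployed
    restart: always
    # Start the API before the application that depends on it
    depends_on:
      - food-truck-api
```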

The docker-compose.yml sets the configuration for each service, in this case the Node server and the NGINX server running the Angular application. The container specifications for each service are contained in that service's individual Dockerfile.

For the Node service, the build directory that contains the Dockerfile and the host-to-container port mapping are specified. For the Angular application, in addition to the build directory and ports, a restart configuration ensures that the server is restarted when the application is redeployed. Another important element to note is the depends_on configuration, which ensures that the applications start up in the correct order. In this case, the Angular application depends on the Node server that serves the food truck data.

With the docker-compose build command, Docker builds the images for both the Angular 2 application and the Node Express application.
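
The command, shown as an image in the original post:

```shell
# Build the images for every service defined in docker-compose.yml
docker-compose build
```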

With the containers for both the Angular 2 application and the Node.js server built, executing the docker-compose up command with the detached option runs the Express and Angular 2 applications, starting the dependencies in the correct order.
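
The corresponding command:

```shell
# -d runs both services detached, starting them in dependency order
docker-compose up -d
```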

Both the Angular 2 and Node Express applications are built and mapped to ports on the host machine. With one command, we can now build and deploy complex microservice applications with multiple dependencies.

It is great that we can now run complex applications on our local devices during development. But how do we deploy applications to an internet-facing webserver? We could continue down the path of deploying via Docker. However, let’s focus on the deployment of a single page application.

Static Application Deployment with S3, CloudFront/CDN or NGINX

In the past, applications were deployed on IIS, Tomcat, Jetty, Apache, etc. The servers needed to be set up and configured. Cloud environments offer simpler deployment and automation processes.

We will look at the pros and cons of three deployment methods:

  1. Static S3 Hosting on AWS
  2. Amazon CloudFront on AWS
  3. NGINX

Static Hosting via S3

AWS supports web hosting for static web applications via S3. It is extremely easy to configure and deploy an application in minutes by:

  1. Uploading a distribution folder to S3
  2. Making the files public
  3. Configuring index.html as the hosted web page's index document
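
With the AWS CLI, that flow can be scripted along these lines (the bucket name is an assumption):

```shell
# Upload the distribution folder and make the files publicly readable
aws s3 sync dist/ s3://my-spa-bucket --acl public-read

# Enable static website hosting with index.html as the index document
aws s3 website s3://my-spa-bucket --index-document index.html --error-document index.html
```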

There is still some tuning that would be needed for a production site, e.g. automating deployment or configuring Route 53 to set and resolve domains.

Overall, this is a good approach for hosting small websites or development and test environments.

Pros:

· Extremely easy setup; this process takes minutes, maybe even a single minute

· Inexpensive; this is a very low-cost option, and S3 is marketed as high capacity and low cost

· Simple to automate deployment to S3

Cons:

· Customization is limited; for example, there is little support for performance enhancement or fine-tuning

· No support for caching or proxying

· Requires CloudFront for SSL support

Amazon CloudFront/CDN:

CloudFront provides finer tuning for single page application deployment. CloudFront and CDNs support proxies, caching and SSL. CloudFront also distributes applications from edge locations, which provides a significant performance boost for applications with a globally distributed user base.

If a CloudFront distribution does not have a copy of the site assets in an edge location, it pulls the content from the source host (S3 bucket or custom host) at the time of the request.

Pros:

· Allows more customization

· Supports proxy, caching and SSL

· Inexpensive when compared to a custom hosting solution

· If using S3 as base host, simple to automate deployment

Cons:

· More involved than S3

· Does not allow complete control of system resources

NGINX:

NGINX is an HTTP and reverse proxy server. Amazon provides NGINX Plus AMIs. An NGINX-based architecture allows for the most customization of the options presented here. However, that customization comes at a monetary and setup cost. The network, regions, availability zones, VPCs, load balancers, scaling and so on are all at the discretion of the engineer or architect.
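
As a sketch, a minimal NGINX server block for a single page application (the root path is an assumption) routes unknown URLs back to index.html so that client-side routing works on refresh and deep links:

```nginx
server {
    listen 80;
    root /usr/share/nginx/html;
    index index.html;

    location / {
        # Serve the file if it exists; otherwise fall back to index.html
        # so the application's client-side router can handle the URL
        try_files $uri $uri/ /index.html;
    }
}
```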

Pros

· Completely customizable to support technical and performance requirements

· High performing if architected correctly

Cons

· More complicated setup compared to S3 and CloudFront

· More expensive

· More difficult to operate

This post covered a lot of ground. The options presented here are by no means the only ways to accomplish build, dependency management and deployment processes. In my next post, I will walk through application development with RxJS.
