An updated guide to building up a modern Web application stack

This is an updated edition of my earlier guide, published on DEV.to in December 2019.

The code is also up on GitHub: aisrael/elixir-phoenix-typescript-react

While the essential approach has remained unchanged, somebody filed a ticket on my repo to upgrade it to Phoenix 1.5.

Rather than take the earlier codebase and upgrade it to Phoenix 1.5 (which I’ve managed to do for a couple of other projects), I decided instead to update the original article for 2020.

Let’s jump right back in.

Prerequisites

This guide assumes you already have the following set up:

  • Elixir (1.10.3), which means you’ll also need
  • Erlang (23.0.2)
  • npm


Comparing a Sieve of Eratosthenes in Rust vs. Go

After seeing 8F3E’s article, How Fast Is Golang?, which compares the performance of Python vs. Go using a Sieve of Eratosthenes as a microbenchmark, I thought it might be fun to see how Rust stacks up.

So, as an exercise to flex my Rust muscles before taking on a much harder challenge, I decided to fork 8F3E’s repo into my own: https://github.com/aisrael/sieve-of-eratosthenes

I then ported the Go code to Rust, almost verbatim:

    let mut primes: Vec<i32> = Vec::new();
    for i in 2..=n {
        primes.push(i);
    }
    {
        let mut i = 0;
        while i < primes.len() {
            let factor = primes[i];
            sieve(&mut primes, factor);
            i += 1;
        }…
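The excerpt above truncates before the `sieve` helper is shown, so here is a plausible std-only sketch of what such a helper might look like (my own guess, not necessarily the repo’s actual code): keep the factor itself and drop every other element divisible by it.

```rust
// Hypothetical sketch of the elided `sieve` helper: retain the factor
// itself, and drop every other element divisible by that factor.
fn sieve(primes: &mut Vec<i32>, factor: i32) {
    primes.retain(|&p| p == factor || p % factor != 0);
}

fn main() {
    // The same driver loop as in the excerpt above, for n = 30.
    let n = 30;
    let mut primes: Vec<i32> = (2..=n).collect();
    let mut i = 0;
    while i < primes.len() {
        let factor = primes[i];
        sieve(&mut primes, factor);
        i += 1;
    }
    println!("{:?}", primes); // the primes up to 30
}
```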


In this series of articles, I attempt to demystify Rust’s asynchronous programming, progressing from closures to futures and eventually to async-await.

If you’ve been following along, in Part 1: Closures, we first recollected our understanding of Rust closures.

Then in Part 2: Futures, we flexed our understanding of the Tokio runtime, the Future trait and the futures crate.

The Async you’ve been Awaiting for

Let’s pretend we don’t know anything and jump right in and try to write our first async function. To do this, we simply prepend the async keyword to our fn declaration:

    async fn async_hello() {
        debug!("Hello, asynchronously!");
    }

That’s all it takes, really! …
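Of course, to actually see that message, something has to poll the future that `async_hello()` returns; calling an async fn runs none of its body. Here is a sketch of a deliberately naive, busy-polling executor built only from the standard library (using `println!` instead of the `debug!` macro, which needs a logging crate, and returning the greeting so we can check it):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A deliberately naive, busy-polling executor: good enough for
// futures that never actually need to be woken. Not production code.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    fn raw_waker() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker {
            raw_waker()
        }
        fn noop(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    // SAFETY: every vtable function above is a no-op, so the waker
    // contract is trivially satisfied.
    let waker = unsafe { Waker::from_raw(raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    // SAFETY: `fut` is shadowed and never moved again after pinning.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(output) = fut.as_mut().poll(&mut cx) {
            return output;
        }
    }
}

// println! stands in for the article's debug! macro here.
async fn async_hello() -> &'static str {
    let greeting = "Hello, asynchronously!";
    println!("{}", greeting);
    greeting
}

fn main() {
    // Calling an async fn only *builds* a future; nothing runs yet.
    let fut = async_hello();
    // The body executes once the future is polled to completion.
    let greeting = block_on(fut);
    assert_eq!(greeting, "Hello, asynchronously!");
}
```

In practice you would hand the future to a real runtime such as Tokio; the point of the sketch is only that an async fn’s body runs when polled, not when called.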


In this series of articles, I attempt to demystify Rust’s asynchronous programming, walking you from closures to futures and eventually to async-await.

This picks up where we left off in Part 1: Closures

Tokio Drift

Why are closures so important to understand before Futures and async/await?

Because, for the most part, whether you use the futures crate or async/await, an implementation of the std::future::Future trait is merely another struct that wraps a closure!
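That claim can be made concrete with a toy example of my own (not code from the series): a struct holding a closure, whose poll method simply calls that closure. The futures crate ships this exact pattern ready-made as future::poll_fn.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A toy future that is literally "a struct wrapping a closure":
// every poll just invokes the closure.
struct ClosureFuture<F> {
    f: F,
}

impl<F, T> Future for ClosureFuture<F>
where
    F: FnMut() -> Poll<T> + Unpin,
{
    type Output = T;

    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<T> {
        // Delegate straight to the wrapped closure.
        (self.get_mut().f)()
    }
}

fn main() {
    // A no-op waker so we can call poll() by hand.
    fn raw_waker() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker {
            raw_waker()
        }
        fn noop(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(raw_waker()) };
    let mut cx = Context::from_waker(&waker);

    // The closure stays Pending twice, then resolves on the third poll.
    let mut polls = 0;
    let mut fut = ClosureFuture {
        f: move || {
            polls += 1;
            if polls < 3 {
                Poll::Pending
            } else {
                Poll::Ready(polls)
            }
        },
    };

    let mut pinned = Pin::new(&mut fut);
    assert_eq!(pinned.as_mut().poll(&mut cx), Poll::Pending);
    assert_eq!(pinned.as_mut().poll(&mut cx), Poll::Pending);
    assert_eq!(pinned.as_mut().poll(&mut cx), Poll::Ready(3));
}
```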

Before we continue, however, it’s important to note some things.

Other languages with support for asynchronous programming typically ship with a runtime built-in to their interpreter or virtual machine. …


The release of async-await in Rust 1.39.0 in November 2019 gave Rust added traction as a modern systems programming language and made it easier to write highly concurrent services with it.

Now to fully understand and appreciate how async-await came to be, how to use it, and in particular, how to migrate ‘legacy’ code that used Futures to async-await, I felt that I had to take a step back all the way to closures and work my way forward from there.

This series of articles and the accompanying source code at https://github.com/aisrael/rust-closures-futures-async-await chronicles just that.

Hello, Rust

Let’s start with a brand new Rust…


Arboric ABAC configuration

When we first started conceptualizing Arboric, the GraphQL API gateway, we initially thought we could do enough with just Role-Based Access Control, or RBAC.

For example, while a client or API caller with a "user" role might be able to query for accounts, they might not be allowed to execute the suspendAccount mutation. Perhaps only a user with the "manager" role should be able to execute the suspendAccount mutation.

This is typical Role-Based Access Control: a user can assume one or more roles, and each role is associated with a set of permissions or authorized operations. If a user attempts to execute an operation, the system checks whether any of the user’s roles has that permission bit set. …
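The check described above can be sketched in a few lines. This is a toy illustration of RBAC in general, not Arboric’s actual implementation; the role and operation names are the ones from the example above.

```rust
use std::collections::{HashMap, HashSet};

// A toy RBAC check: a user may execute an operation if any one of
// their roles grants that permission.
fn is_allowed(
    role_permissions: &HashMap<&str, HashSet<&str>>,
    user_roles: &[&str],
    operation: &str,
) -> bool {
    user_roles.iter().any(|role| {
        role_permissions
            .get(role)
            .map_or(false, |perms| perms.contains(operation))
    })
}

fn main() {
    let mut role_permissions: HashMap<&str, HashSet<&str>> = HashMap::new();
    role_permissions.insert("user", HashSet::from(["queryAccounts"]));
    role_permissions.insert("manager", HashSet::from(["queryAccounts", "suspendAccount"]));

    // A plain user cannot suspend accounts; a manager can.
    assert!(!is_allowed(&role_permissions, &["user"], "suspendAccount"));
    assert!(is_allowed(&role_permissions, &["user", "manager"], "suspendAccount"));
}
```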


Arboric GraphQL API Gateway

In case you haven’t heard, GraphQL is the new Web service API standard that’s rapidly gaining adoption and popularity.

Developers like it because it makes it easier to prototype, develop, consume, and maintain APIs, whether for React SPA/PWA apps or Flutter mobile apps.

I particularly like how GraphQL frees us from CRUD (or HTTP REST verbs) thinking and aligns extremely well with Domain Driven Design (DDD), Command Query Responsibility Segregation (CQRS) and Event Sourcing, making it easier to design and implement APIs as distributed microservices even for complex domains.

API Managers

If you’ve been working on large-scale APIs for some time now, you’re sure to have encountered, and likely even used, an API manager for your REST APIs. …



In Part 1 of this series, we wrote and packaged a simple “Hello, world!” command line app written in five different languages, mainly as a base case for building the smallest executable Docker image for each language.

This time we move on to a more “real world” scenario, and a common use of Docker containers: building APIs or “microservices”.

We’re going to write a simple HTTP Web service that responds to a GET /hello?who=world with a JSON response that follows the JSON API standard:

    {
      "data": {
        "greeting": "Hello, world!"
      }
    }

This allows us to flex each language and platform a little bit more compared to our previous round: we’ll be running an HTTP server, handling query parameters, and returning JSON responses. Rather than building our JSON responses as strings by hand, we’ll try to use each platform’s native JSON facilities or a popular library. …
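Before wiring up a full server, the query-parameter step alone can be sketched in a few lines. This is a std-only illustration (`who_from_query` is my own name, and the actual services use their framework’s query parsing and a proper JSON library rather than anything hand-rolled):

```rust
// Extract the `who` parameter from a raw query string such as
// "who=world" or "foo=1&who=docker", defaulting to "world".
fn who_from_query(query: &str) -> &str {
    query
        .split('&')
        .find_map(|pair| pair.strip_prefix("who="))
        .unwrap_or("world")
}

fn main() {
    // e.g. the handler for GET /hello?who=docker would greet "docker".
    println!("Hello, {}!", who_from_query("who=docker"));
    println!("Hello, {}!", who_from_query("")); // falls back to "world"
}
```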


Docker all the things!

At Shore Suite, our Ruby on Rails front-end is largely React-based, interfacing directly with an API built in Ruby with Sinatra + Grape. Recently, we’ve begun to feel the “bloat” in our stack; case in point, our main front-end Rails app, when packaged as a Docker image, weighs in at over 1GB!

Excessively large Docker images create “drag” on a project: builds take longer, deploys take longer, and local development slows down. When pulling a 1GB image over a slow or congested network, our engineers sometimes have time for a snack!

From an operational standpoint, the more CPU cycles and RAM your containers require, the fewer of them you can fit on any given Docker or Kubernetes node. This means more servers, which means higher operating costs and more monitoring and maintenance effort. …

About

Alistair A. Israel

Tall, dark and bald. Nanny, cook, and occasionally programmer.
