The Definitive Express.js Stack in 2020

Markus Hanslik
Sep 16, 2020 · 18 min read

Here are the most important parts of every production-ready Express stack. Whether you already have an app in production or are just starting out, follow these pointers to make sure your app can scale, both team-wise and user-wise.

A note for people picking Express for the first time

If you are planning on developing a straightforward REST API or GraphQL endpoints, and are okay with learning a new library to get a lot of help out-of-the-box, you should spend at least a few hours before picking Express and look into higher-level alternatives for your use case, such as Feathers, Loopback, HAPI, NestJS, Sails.js, or others.

For many use cases, they provide a lot of tools out-of-the-box and reduce the boilerplate you would otherwise end up writing with Express; and the popular libraries are often simpler to maintain and update than individual packages stitched together.

If, however, you need full control of your code, have use cases not covered by other tools such as multi-tenancy, want to understand in great detail what is going on, or require all parts to be replaceable — Express is still the number one tool for the job.

1 | Git, SemVer and keeping meta-information

You can easily initialize and host your Git repository on services like GitHub, GitLab, Bitbucket and more; you will find that most services differ in their UI, but not so much in their features. The best-known service is GitHub, and most packages you will be using in your Express project are hosted there, so to get acquainted with it, we pick GitHub. (It also recently added GitHub Actions, which is their CI offering, and hosting recently became free even for private repositories.)

No matter which project you build, you should always add a README.md to the root of your project. These markdown files are always shown as the front page of your repository and are a great way to note what you are building, for whom, and the basic principles and guidelines if you have any.

Additionally, even for private repositories, ideally you should add a CHANGELOG.md file. Here, you will keep track of at least every production release or big milestone of your app, so that it’s easier to check back in the future and see where issues may come from, as well as get a better overview of what has happened. You can find best practices on how to handle your change log at https://keepachangelog.com/en/1.0.0/.

Milestones of your code should be versioned using Semantic Versioning (https://semver.org), the most common versioning scheme which you will also find in most other packages.

It's not only helpful for users of your app to understand when you introduced a breaking change; it will also help you in case you need to roll something back. Just look at which number has changed and you'll immediately know whether there was a breaking change, a new feature, or merely a patch.

To actually be able to roll back or compare versions, every version should become a git tag (e.g. v2.5.3).
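Creating such an annotated tag and publishing it could look like this:

git tag -a v2.5.3 -m "Release v2.5.3"
git push origin v2.5.3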

This initial setup is a good start to have at least some limited but time-efficient documentation. As we are following quasi-standards, it’s easy to continue from here once your app grows, and improve and automate the process by using tools such as https://github.com/semantic-release/semantic-release, or publishing your change log as release notes for your customers.

Once your application is running in production, you may no longer want to always commit to your main branch, so that you are able to fix the production system's code without needing to shift around half-finished feature code. You can use the popular GitHub Flow or the older GitFlow branching workflow to have an easy-to-follow guideline on when to create a branch, and which name to use.

2 | Environment-to-go: Docker

Running your app inside a Docker container also makes it simpler to add a database or other services you may need in the future, as most services like MySQL, ElasticSearch and others provide ready-to-run containers that do not require extensive setup.

Some Node.js packages also require Unix dependencies to be installed, such as ImageMagick for image manipulation. It's much easier to have an environment in which you can install those things in a reproducible way than to set things up from scratch every time, only to run into a bug because your production environment differs slightly from your development machine.

Using containers will also help with deploying your application in production, should you choose to. All of the main cloud providers nowadays offer schedulers (e.g. AWS Fargate, or Kubernetes, …) which take your Docker image, start it, scale it if necessary, and restart it if it exits, meaning you do not have to worry about process managers like PM2.

A Dockerfile for your Express repository could look like this:

FROM node:12.18.3-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
EXPOSE 80
# drop root privileges before starting the app
USER node
CMD ["node", "server.js"]

This is all you need to start. Worth mentioning: always pin the specific version of Node you are using, so that you can be sure of the language features you have at your disposal. Otherwise, if you build your container on your machine and then later somewhere else, you may have an older version cached locally and get a different version on the other machine.

Node.js suggests only using long-term-support (LTS) versions for production apps, which at the time of writing is 12.18.3. You can see which version is the current LTS by looking at nodejs.org.

Additionally, try to make sure you are not running things as root in your Docker containers, to reduce attack vectors.

For more sophisticated but smaller images, you can also use multi-stage Docker builds and separate build and app containers; and you can leverage Docker's caching by ensuring that the least-changed files are copied into your image first (e.g. package.json).
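A multi-stage Dockerfile could look roughly like the following sketch; it assumes an npm run build script that compiles your TypeScript into dist/:

# build stage: install all dependencies and compile the app
FROM node:12.18.3-alpine AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# app stage: only production dependencies and the compiled output
FROM node:12.18.3-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --only=production
COPY --from=build /app/dist ./dist
EXPOSE 80
USER node
CMD ["node", "dist/server.js"]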

3 | Package Manager: NPM

Whilst it looked for some time like Yarn would replace NPM as the standard package manager, NPM's performance and CLI have since caught up. For instance, Yarn used lock files to ensure that you always get the same hash of your dependencies when installing them, which NPM now also does by default.

The main difference for most use cases is that Yarn 2 commits a minified version of your dependencies to your repository and offers a built-in CLI UI for checking and updating outdated packages, which NPM does not do out-of-the-box.

However, to keep things simple, we recommend just using what comes with every Node.js installation and sticking to NPM, only switching to Yarn if you really have a reason to do so.

No matter which package manager you pick, always make sure you are using fixed versions by committing the lockfile the tool generates for you. That way, whenever you download your app's dependencies, you can be sure you are downloading the exact same versions, making errors much easier to reproduce and avoid.
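With NPM, the npm ci command (as opposed to npm install) installs exactly what the lockfile specifies and fails if package.json and package-lock.json are out of sync, which makes it a good fit for CI servers and Docker builds:

npm ci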

4 | Visual Studio Code as your IDE

VS Code is a free IDE from Microsoft that does not have much in common with Visual Studio; it has been built from the ground up on web technologies (Electron) with TypeScript, and is currently available for most operating systems.

It’s the most popular IDE according to StackOverflow surveys, and this is not by chance: it is fast, highly customizable, supports tons of plugins, and is very easy to extend with your own plugins.

VS Code's settings can be stored per repository, inside the repository itself, meaning it's also great in case you have different projects with different languages, plugin requirements, or other settings. You can even store which VS Code plugins a repository needs, so that when you switch computers, you can set up your IDE easily.
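For example, recommended plugins live in .vscode/extensions.json; a minimal file recommending the Prettier and ESLint extensions (both discussed below) could look like this:

{
  "recommendations": ["esbenp.prettier-vscode", "dbaeumer.vscode-eslint"]
}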

5 | Code formatting standard: Prettier

Luckily, Prettier has nowadays gained a lot of traction. It comes with an (albeit opinionated) configuration that can be enforced completely automatically, and it should be your coding standard from now on.

It works with most files (e.g. JavaScript, TypeScript, HTML; with React, YML, …) and has integrations for most IDEs.

Even if you do not like the style at first, I strongly recommend just getting used to it: it's much easier to adapt and have a common style than to fiddle with the configuration, only to have others adapt to your style instead; and you will have to read the Prettier style in most other packages anyhow.

Also, it's much simpler to just rely on the software doing the formatting for you and focus on coding, rather than spending your time thinking about how many spaces you prefer.

Once added to your repository, you can run it against your code in the current directory using:

npx prettier --write .

You can also check whether your files in "src" match the coding standards by running:

npx prettier -c "src/**/*.*"

The latter command is especially useful if you have a CI pipeline; this way you make sure that your code always follows the formatting standard. Should you not use a CI, you can at least add the command as a pre-commit hook using Husky.
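A minimal sketch, assuming Husky 4, which reads its hooks from package.json:

{
  "husky": {
    "hooks": {
      "pre-commit": "npx prettier -c ."
    }
  }
}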

6 | TypeScript

It’s possible to use both JavaScript and TypeScript side-by-side in a project, so there’s no hard cut necessary; but in the spirit of making our lives easier, having types — even if you are not using them everywhere — will save us a lot of debugging, documenting, and potentially even some unit tests.

You should consider adding TypeScript via Babel to your project. This way, building your project is faster than with plain TypeScript, and you can use Babel for further things down the line, such as importing files that are not supported by Node.js (e.g., GraphQL schemas).
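A minimal babel.config.js for this setup could look like the following sketch; note that Babel only strips the types, so you would still run tsc --noEmit separately for actual type checking:

// babel.config.js
module.exports = {
  presets: [
    ["@babel/preset-env", { targets: { node: "12" } }],
    "@babel/preset-typescript",
  ],
};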

7 | Linting to avoid common pitfalls: ESLint

Ideally, try not to deactivate the recommended rules unless you have a good reason. Especially if you have legacy code, it's often better to set ESLint to treat issues as "warnings" instead of turning rules off completely; that way, you can improve your legacy code over time instead of ignoring quality issues.
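A typical .eslintrc.json for a TypeScript project could look like this sketch (the "prettier" entry assumes eslint-config-prettier is installed to disable rules that clash with Prettier):

{
  "parser": "@typescript-eslint/parser",
  "plugins": ["@typescript-eslint"],
  "extends": [
    "eslint:recommended",
    "plugin:@typescript-eslint/recommended",
    "prettier"
  ]
}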

8 | Jest for unit tests

Additionally, you could install jest-html-reporter should you use a CI and want a detailed output artifact.
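For reference, a minimal Jest unit test could look like this (the sum helper is a hypothetical module of your own):

// sum.test.ts
import { sum } from "./sum";

describe("sum", () => {
  it("adds two numbers", () => {
    expect(sum(1, 2)).toBe(3);
  });
});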

Note that aside from unit tests, you may want to look into contract testing using Pact JS as well — this will help consumers of your API to ensure that their code is still working against your API.

9 | Cloud-ready logging: Pino

In Express, normally you will end up having logs from your application itself, as well as HTTP logs from accessing your Express routes.

Traditionally, HTTP logs have used the Apache log format, while apps logged without any specific format; this makes it hard to use your logs programmatically down the line.

To prepare for log aggregation in the future, to be able to use services such as AWS CloudWatch Insights, and to make it possible to structure your log output, you should aim to use JSON as your default log format.

This also allows you to extend your logging in the future with additional metadata such as transaction IDs, your app's version, your service's name, and so on.

The fastest logger, which also happens to log JSON by default, is Pino; it also comes with a pino-http package to log Express HTTP requests without needing much setup.
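A minimal setup could look like this sketch (the service name is a placeholder):

import express from "express";
import pino from "pino";
import pinoHttp from "pino-http";

const logger = pino({ name: "my-service" }); // logs JSON by default
const app = express();
app.use(pinoHttp({ logger })); // logs every request/response as JSON

app.get("/", (req, res) => {
  req.log.info("handling request"); // pino-http attaches a request-scoped logger
  res.send("hello");
});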

10 | Express.js HTTP header/helper middleware

If you are not using a proxy and your app and frontend(s) aren’t sharing the same domain, you will run into cross-origin resource sharing issues: Modern browsers will not let your frontends access your Express app if the latter does not send the correct CORS headers allowing the browser to do so. You can fix and configure this by installing the cors package.

There are also other headers you may want to set, like cache headers; you can easily configure these HTTP headers by using helmet.

Most Express apps get parameters from requests, often the request’s body; to have access to the request body without needing to parse it manually, install body-parser which does it for you.

By the way, try to keep the body limit of body-parser as low as possible. Malformed requests of a huge size can block your Node.js instance easily, so the lower the limit size is, the better.

Worth mentioning, but hopefully not required for your setup is GZip compression. If you really cannot do compression via your proxy, or your loadbalancer, or your CDN, you can install the compression package for Express. However, be aware that this may impact your server’s performance.
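Wiring these middlewares together could look like this sketch (the allowed origin and the body limit are assumptions for illustration):

import express from "express";
import cors from "cors";
import helmet from "helmet";
import bodyParser from "body-parser";

const app = express();
app.use(helmet()); // sets security-related headers, and removes x-powered-by
app.use(cors({ origin: "https://app.example.com" })); // only allow your frontend's origin
app.use(bodyParser.json({ limit: "10kb" })); // keep the body limit as low as possible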

Finally, you should not shout out the Express version you are using to the world, as this is a potential security risk. You can disable the header easily by calling the following method somewhere where you configure your Express app:

app.disable("x-powered-by");

Note that if you decide to use helmet, you don’t need to do this manually as helmet automatically does it for you.

11 | API endpoint format: GraphQL or OpenAPI

Today, APIs are most commonly described using OpenAPI 3 (formerly known as Swagger) in case you are building a REST API, or using a schema in case you are building GraphQL endpoints.

For RESTful routes, Express has everything out-of-the-box already, though you may want to install useful helpers such as swagger-jsdoc. This enables you to store your API definition next to your routes (so that you can design and change both in one place if there are changes) and to generate a JSON file you can browse using an OpenAPI explorer.
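As a sketch, generating the spec with swagger-jsdoc could look like this (the title and the file glob are placeholders):

import swaggerJsdoc from "swagger-jsdoc";

const openApiSpec = swaggerJsdoc({
  definition: {
    openapi: "3.0.0",
    info: { title: "My API", version: "1.0.0" },
  },
  apis: ["./src/routes/*.ts"], // route files containing OpenAPI JSDoc comments
});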

If you are comfortable with a more 'batteries-included' approach, there are also frameworks like express-openapi, which not only generate OpenAPI specs from your code, but also add validation and much more, as long as you use their interfaces for your services.

For GraphQL, you can use one of the GraphQL middlewares like express-graphql and generate your TypeScript definitions from your GraphQL schema using @graphql-codegen/cli; or you can install more opinionated alternatives like apollo-server-express, depending on how you develop your clients.
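A minimal express-graphql setup could look like this sketch (assuming a recent express-graphql version that exports graphqlHTTP):

import express from "express";
import { graphqlHTTP } from "express-graphql";
import { buildSchema } from "graphql";

const schema = buildSchema("type Query { ping: String }");
const app = express();
app.use(
  "/graphql",
  graphqlHTTP({
    schema,
    rootValue: { ping: () => "pong" }, // trivial resolver for illustration
    graphiql: true, // in-browser IDE; consider disabling in production
  })
);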

12 | Request validation using standards

One way to validate incoming requests is to use the highly popular package Ajv to do the validation for you.

You simply write your validation logic using the JSON Schema format, which is easy to learn and, being very lightweight, can be used in frontends as well as backends; Ajv then handles the validation and error messages for you.
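For illustration, validating data against a made-up schema with Ajv could look like this:

import Ajv from "ajv";

const ajv = new Ajv();
const validate = ajv.compile({
  type: "object",
  properties: {
    email: { type: "string" },
    age: { type: "integer", minimum: 0 },
  },
  required: ["email"],
  additionalProperties: false,
});

if (!validate({ age: -1 })) {
  console.log(validate.errors); // structured error objects you can map to API responses
}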

Even custom validators can easily be defined; and recently, Ajv became part of the OpenJS Foundation.

Ajv can easily be leveraged for your routes by using the middleware express-json-validator-middleware.

(If you want to share your validation logic across packages, you may want to check out setting up a mono-repository, or use Visual Studio Code’s workspace feature to not accidentally forget to commit changes).

13 | Error reporting for better visibility

Logging alone is not enough for spotting errors in production, as you will not get a stack trace; but there are tools available that aggregate errors and enrich them with further information, without needing much setup beyond adding yet another Express middleware.

Most of them are paid services, like Bugsnag, Sentry, or Raygun; however, some of them offer free accounts for starters (like Bugsnag) or their source code for self-hosting (like Sentry).
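With Sentry, for example, the Express integration is roughly this sketch (the DSN comes from your Sentry project):

import * as Sentry from "@sentry/node";
import express from "express";

Sentry.init({ dsn: process.env.SENTRY_DSN });

const app = express();
app.use(Sentry.Handlers.requestHandler()); // must come before your routes
// ... your routes ...
app.use(Sentry.Handlers.errorHandler()); // must come after all routes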

14 | Ping and health check endpoints

router.get("/ping", () => { return "OK"; });

This route will come in handy to have your load balancers or external services check if your service is still up.

The ping route should be very lightweight, as it will likely be called very often; but depending on your use case, you may end up adding further (public) information such as the version number, or changing the status to 500 if your database is no longer alive.

Additionally, you could implement a more sophisticated health check route using @godaddy/terminus, the quasi-standard if your health check needs to return more information than just status code 200.
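A sketch using @godaddy/terminus, with a made-up health check:

import http from "http";
import express from "express";
import { createTerminus } from "@godaddy/terminus";

const app = express();
const server = http.createServer(app);

createTerminus(server, {
  healthChecks: {
    "/healthcheck": async () => {
      // e.g. ping your database here; throwing marks the service as unhealthy
      return { uptime: process.uptime() };
    },
  },
});

server.listen(3000);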

15 | Continuous Integration

Which provider you pick may depend on where you host your repository (e.g. if you use GitHub for everything, GitHub Actions may make sense), or on where you end up deploying your application (e.g. if you host on AWS, AWS CodePipeline's integrations may make your life easier and be cheaper, as it can also provide blue/green deployments and other AWS features).

They are all very similar, and nowadays support not only their proprietary pipeline workers but also plain Docker containers. This means that most of the time, you can be sure that if your code builds locally, it will also build in the exact same container in your CI.

Your pipeline should at least run the npm audit command, so that you do not forget to check your dependencies for security issues; the npx prettier -c 'src/**/*.*' command, so that you do not accidentally commit code that is not formatted correctly; and also run the ESLint command, so that you keep track of potential quality issues.

Depending on what you are building, you could also use license-checker to ensure nobody can use a package that has a ‘wrong’ license. This is especially helpful if you are working on proprietary software, which means you may not want GPL packages in your code.

You could run multiple commands in one pipeline step and catch the error code like so:

$ RESULT=0
$ ncu --errorLevel 2 || RESULT=$((RESULT + $?)) || true
$ npm audit || RESULT=$((RESULT + $?)) || true
$ license-checker --summary --failOn BSD || RESULT=$((RESULT + $?)) || true
$ if [ $RESULT -gt 0 ]; then exit 1; fi

The CI should also build the Docker image and tag it (e.g. using the repository tag/branch plus the build number), so that you can deploy it; and depending on your needs, you may also want to add checks that verify whether your packages are out of date, whether you've added a change log entry for new tags, and more.
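For example (registry, image name and tag are placeholders):

docker build -t registry.example.com/my-app:main-42 .
docker push registry.example.com/my-app:main-42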

16 | Automatic dependency updating

Adding a service like Greenkeeper (now part of Snyk) or Renovate to your app greatly helps in reducing the chores needed to maintain your app. They update exactly one dependency at a time, in a separate branch, so that your tests can run and so that you can merge updates one-by-one, reducing the room for errors and making it easier to roll things back in case issues arise.

Even if you decide to not add a service like that, you should consider adding a CI step that outputs which packages of your service are outdated, so that you can manually update them. Simply install npm-check-updates via NPM to at least cover your JavaScript dependencies.
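For example, the first command below lists dependencies with newer versions, and the second rewrites package.json to the latest ones so you can test and commit the change:

npx npm-check-updates
npx npm-check-updates -u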

17 | Rate limiting to prevent performance issues

This can be done by using one of the rate limiting packages, such as express-rate-limit. They ensure you cannot accidentally or purposefully hit your API with many requests at the same time, which would block your Node.js event loop and result in unavailability of your service.
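A basic setup could look like this (the window and limit values are arbitrary examples):

import express from "express";
import rateLimit from "express-rate-limit";

const app = express();
app.use(
  rateLimit({
    windowMs: 15 * 60 * 1000, // 15-minute window
    max: 100, // max requests per IP per window
  })
);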

18 | Static file hosting outside of Express

Serving static files keeps your Node.js process busy with work that an object store or CDN can do better. Instead of working around this by hosting many Node.js instances in parallel, you can solve it much more elegantly (and much more cheaply) by hosting your static files on services like AWS S3.

An S3 bucket which your frontends can access is easily set up; you can get a free AWS account, add a bucket into it, enable static website hosting, and you are done. Over time, you can then add a CloudFront CDN for faster performance, logging, versioning and whatever you may need.

19 | Security checks

Tools such as the Burp Suite can help to further stress-test your application against common security issues.

Ensuring your application is secure is a complex topic that warrants another article; for completeness' sake, though, keep in mind that your software is more than just your own code.

You should also check your infrastructure (e.g. SSL Labs can tell you if you are using outdated SSL protocols), as well as your software dependencies — not only NPM packages via the aforementioned npm audit, but also your Docker images using tools such as Clair or Snyk.

20 | Hosting with Docker images

To make your life as easy as possible, you should always aim to have a stateless service. This means keeping all state (your service's data, user state information, …) outside of your application, in a database or Redis, so that no data is lost if your application is shut down and started up again.

You can deploy your application easily by using services such as the already mentioned AWS Fargate and other high-level solutions, or the more low-level Kubernetes stacks available on all cloud platforms.

Doing so not only adds the benefit of an easily transportable Docker image which you can also run locally; it also means that your scheduler will take care of restarting your application, scaling it up or down, and more.

Most of the cloud providers also come with tools like AWS Secrets Manager for handling your application's secrets (such as tokens, your database password, etc.), and can be set up using code (e.g. via Terraform), so that your infrastructure is also reproducible and versioned, among other benefits.

If you already use AWS for your stack, you should look into Fargate and CodePipeline. Fargate offers automatic connection draining when you deploy a new image, so that you can avoid downtime; and CodePipeline can leverage advanced blue/green deployment features.
