Building and running a Node.js, TypeScript, PostgreSQL app with Docker

Haris Zujo · Published in NSoft · Sep 1, 2020

We see more and more companies using JS as their tech stack for both the backend and the frontend. It makes sense: development time and cost go down, efficiency goes up, and you get a unified stack across different parts of the system.

Today I’ll show you how to easily build a small web app using Node.js enhanced with TypeScript, with PostgreSQL as our database, and how to dockerize it.

This article starts with the basic setup of our Node.js app, Express server, and PostgreSQL configuration. The Docker part is described at the end of the story, so if you want to see only the Docker setup for this app, scroll to the bottom.

Creating a Node.js application

I assume you have installed Node.js before. Every Node.js application starts with the same simple command: npm init. If you add -y, it will fill in all the prompts for you and create the package.json file that manages the dependencies our application needs to run. In full, that is:
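npm init -y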

Since we said we’re going to use TypeScript, we have to set up our project to work with TypeScript right away.

Next, we’ll install TypeScript and tslint in our project by typing the following:

npm install typescript tslint --save-dev

This command will install TypeScript in our dev dependencies. After we have installed TypeScript, we’ll edit our package.json file and add a tsc script for invoking the TypeScript compiler. We’ll be using this command to build and start our application.

After we’ve installed these packages, we’ll run the following command to initialize our tsconfig.json file where compiler options for our project are stored.

tsc --init

Since we’ll be using Express, it’s important to install the packages that help TypeScript identify Express and Node types, by typing:

npm i @types/express @types/node --save-dev

This way, TypeScript will be able to recognize Express classes and global Node types. For example, after you install the types package, you’ll be able to import Request and Response types from Express directly.
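That import looks like this:

import { Request, Response } from "express";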

If we initialize tsconfig.json with the command above, we get a file with plenty of compiler options you can customize.

We’ll replace it with the following:

{
  "compilerOptions": {
    "module": "commonjs",
    "esModuleInterop": true,
    "target": "es6",
    "moduleResolution": "node",
    "sourceMap": true,
    "outDir": "dist/src",
    "lib": ["es2015"]
  }
}

We are mostly interested in the outDir property, which specifies the output directory for the JavaScript transpiled from our TypeScript code.

Creating and starting our server using TypeScript

It is always a good idea to separate the script that defines your server from the one that starts your application using that server configuration.

For now, our project structure looks like this:
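Roughly, the tree looks like this at this point (the exact file names are an assumption):

todo-app
├── src
│   ├── dbconfig
│   │   └── dbconfig.ts
│   ├── app.ts
│   └── server.ts
├── package.json
└── tsconfig.json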

I’ve created a dbconfig directory with the database configuration that we’ll use when initializing the Postgres connection.

In the server.ts file lies our server configuration, which looks like this:
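Here is a minimal sketch of such a server.ts, assuming an Express app wrapped in a Server class (the class shape and file paths are assumptions, not necessarily the original's):

import express, { Application } from "express";
import bodyParser from "body-parser";
import { todosRouter } from "./routers/todosRouter";

export class Server {
  private app: Application;

  constructor() {
    this.app = express();
    this.config();
    this.routerConfig();
  }

  // Server-specific configuration, e.g. body-parser for incoming data.
  private config() {
    this.app.use(bodyParser.urlencoded({ extended: true }));
    this.app.use(bodyParser.json());
  }

  // Route "/todos" requests to the separate todos router.
  private routerConfig() {
    this.app.use("/todos", todosRouter);
  }

  // Public method used from app.ts to start the server.
  public start(port: number): Promise<number> {
    return new Promise((resolve, reject) => {
      this.app
        .listen(port, () => resolve(port))
        .on("error", (err: Error) => reject(err));
    });
  }
}

export default Server;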

We’ll explain the configuration as we go. First, we initialize our Express application and move on to server-specific configuration, such as setting up body-parser to handle incoming data. At the end, there is a public method we’ll use to start our server from the app.ts file. We also set up our routing to use a separate router configuration for getting all todos.
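Our app.ts then uses this Server class; a minimal sketch (port 4000 matches the URL used later in the article):

import Server from "./server";

const port = parseInt(process.env.PORT || "4000", 10);

new Server()
  .start(port)
  .then((p) => console.log(`Server running on port ${p}`))
  .catch((err: Error) => console.error(err));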

Here we call our “start” method and log in case of success or failure.

Postgres configuration
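Our database configuration is a small module built around the node-postgres Pool. A sketch with placeholder credentials (the real values and the exact property choices are assumptions):

import { Pool } from "pg";

// Placeholder connection values; the real app would read its own settings.
export const pool = new Pool({
  host: "localhost",
  user: "postgres",
  database: "todo-db",
  password: "password",
  port: 5432,
  max: 20, // maximum number of clients in the pool
  idleTimeoutMillis: 30000, // close a client after 30s of inactivity
  connectionTimeoutMillis: 2000 // give up if a connection takes over 2s
});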

Pooling is the process of creating connections and caching them for reuse. This avoids the resource waste of creating a new connection every time one is needed.

Below is the explanation of the mentioned configuration properties (these are the standard node-postgres Pool options):
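  • host, port, user, password, database: the coordinates and credentials of the Postgres instance to connect to
  • max: the maximum number of clients the pool will keep open at once
  • idleTimeoutMillis: how long a client may sit idle in the pool before being closed
  • connectionTimeoutMillis: how long to wait when acquiring a connection before timing out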

I am using pgAdmin4 for accessing my database cluster and managing my databases.

TodosController

We’ll create a TodosController and route for fetching all todos from the database.

A controller would look like this:
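A sketch along those lines, assuming an async handler and the pool exported from the dbconfig module (file paths are assumptions):

import { Request, Response } from "express";
import { pool } from "../dbconfig/dbconfig";

// GET /todos: fetch every todo from the database.
export const getTodos = async (req: Request, res: Response) => {
  // Take a client from the pool.
  const client = await pool.connect();
  try {
    // Send the raw query to the client's query method (async).
    const result = await client.query("SELECT * FROM todos");
    // Return the fetched rows.
    res.status(200).json(result.rows);
  } catch (err) {
    res.status(500).json({ error: "Could not fetch todos" });
  } finally {
    // Release the connection back to the pool. THIS IS IMPORTANT!
    client.release();
  }
};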

  • Importing pool from our database configuration class
  • Initializing new connection and creating an instance of pg client
  • Sending our raw query to our client’s query method — async
  • Releasing our connection —THIS IS IMPORTANT!
  • Returning fetched data

Always release your connection when using the pool. That way, the client will return to the pool of available connections.

Todos router

We’ll set up our router in a separate file stored in src/routers and give it a route to the GET method defined above in our TodosController.
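A sketch of that router (the controller import path is an assumption):

import { Router } from "express";
import { getTodos } from "../controllers/todosController";

const todosRouter = Router();

// GET "/" here becomes GET "/todos" once the router is mounted.
todosRouter.get("/", getTodos);

export { todosRouter };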

Now, all we have to do is register this router in our server.ts where we set up our server.

We import our todos router and tell our Express server that any time someone hits “/todos” in the URL, it should pass the request to the todosRouter instance to handle.

Project structure

So far, we have the following project structure:
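Roughly (a sketch assembled from the files created so far):

todo-app
├── src
│   ├── controllers
│   │   └── todosController.ts
│   ├── dbconfig
│   │   └── dbconfig.ts
│   ├── routers
│   │   └── todosRouter.ts
│   ├── app.ts
│   └── server.ts
├── package.json
└── tsconfig.json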

Building our application

Now that we have everything set up, we can test our endpoint for fetching all todos from the database.

First, we’ll run the following command from our terminal:

npm run build

This command looks up our tsconfig.json and reads the outDir we specified, so the compiler knows the output directory for our transpiled files.

Make sure the main field in package.json points into the same directory as the outDir you specified for TypeScript, since it will be the main entry point for our application.

"main": "dist/src/app.js",

Our scripts section in package.json should be like this:

"scripts": {"build": "tsc","start": "tsc && node dist/app.js","test": "echo \"Error: no test specified\" && exit 1"},

Now, if you check your directories tree, you’ll see a new directory dist just as we specified in the tsconfig.json.

Testing our application

To start your application, type the following and watch your terminal:

npm start

We have it up and running, and since we previously set up our Express server, database configuration, and router, let’s test our endpoint for todos.

Now, if we go to the browser and visit:

http://localhost:4000/todos

we should get a result from our database.


Dockerizing our application

Since containers are so popular and almost a must when you’re building, testing, and deploying your applications, we’re gonna make our application run inside a Docker container.

To achieve that, we need to build an image from our application. Images are basically our packed applications, containing everything the application needs to run.

Containers are running instances of our images.

You can read more about containers here.

Docker compose

We’ll create a file called docker-compose.yml where we can set up the configuration for all of the services our application will be using.

We need a service for our web application running on Node.js and our database service, which is PostgreSQL.

We use docker-compose to run multiple containers for our application.

Our docker-compose file would look like this:
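Something like this, reconstructed as a sketch: the service details the article describes below are kept, while image tags, credentials, and service names are assumptions.

version: "3.7"
services:
  todo-app:
    build: .
    ports:
      - "4000:4000"
    depends_on:
      - postgres
  postgres:
    image: postgres:10.4
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
      POSTGRES_DB: todo-db
    volumes:
      - ./src/migrations/dbinit.sql:/docker-entrypoint-initdb.d/dbinit.sql
  pgadmin:
    image: dpage/pgadmin4
    ports:
      - "8080:80"
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@example.com
      PGADMIN_DEFAULT_PASSWORD: password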

If you take a look at the Postgres container in our docker-compose configuration, you can see that we are using the 10.4 Postgres image to build the container and exposing port 5432 on our local machine, mapped to the container’s port 5432. This means that if I want to access the Postgres instance running inside the container, I would use localhost:5432 along with the defined username and password.

We also run pgAdmin, which listens on port 80 inside its container; it’s our database management service, where we can access our Postgres databases, create clusters, and do all kinds of operations.

Initializing our database schema on the container startup

It would be great to initialize the database with schema, so we don’t have to manually create it.

- ./src/migrations/dbinit.sql:/docker-entrypoint-initdb.d/dbinit.sql

This line in our docker-compose file mounts our dbinit.sql file into the container’s docker-entrypoint-initdb.d directory, so Postgres executes whatever SQL we wrote inside it when the container first starts.
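For example, dbinit.sql might contain something like this (the actual schema and columns are assumptions):

CREATE TABLE todos (
  id SERIAL PRIMARY KEY,
  title TEXT NOT NULL,
  done BOOLEAN NOT NULL DEFAULT FALSE
);

INSERT INTO todos (title) VALUES ('Write an article'), ('Dockerize the app');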

Dockerfile

Our Dockerfile would look like this:
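A sketch consistent with the notes below, assuming a Node alpine base image, the app port 4000, and the widely used wait-for-it.sh script (the exact script and versions in the original may differ):

FROM node:12-alpine

WORKDIR /usr/src/app

# Install dependencies first so this layer is cached between builds.
COPY package*.json ./
RUN npm install

# Copy the source and transpile the TypeScript.
COPY . .
RUN npm run build

# wait-for-it.sh needs bash, which alpine doesn't ship by default.
RUN apk add --no-cache bash

# Download a script that delays startup until Postgres accepts connections.
ADD https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh /wait-for-it.sh
RUN chmod +x /wait-for-it.sh

EXPOSE 4000

# Wait for the "postgres" service, then start the app.
CMD ["/wait-for-it.sh", "postgres:5432", "--", "npm", "start"]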

What this will do is create an image of our application, with a directory structure inside the image holding everything we already have on our machine.

We specify which version of Node we want for our image, create our working directory, and copy everything from our source folder into it.

Here we used an -alpine image. Alpine is a minimal Linux distribution that has just enough resources to run your applications. It is beneficial when creating Docker images because the output size will be a lot smaller.

The container might be ready but our database might not!!!

What is that part with that URL? It looks like we’re also downloading and running something when building our image, some kind of a waiting process.

Yes, your container may be ready for communication, but the process you’re running inside of it may not be. In this case, it’s our Postgres database: it takes a bit more time to start than our application. Hence, we download a shell script that delays our database connection call until the database is ready to accept connections.

Our final project structure
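Roughly (again, a sketch assembled from the files mentioned throughout the article):

todo-app
├── dist
├── src
│   ├── controllers
│   │   └── todosController.ts
│   ├── dbconfig
│   │   └── dbconfig.ts
│   ├── migrations
│   │   └── dbinit.sql
│   ├── routers
│   │   └── todosRouter.ts
│   ├── app.ts
│   └── server.ts
├── Dockerfile
├── docker-compose.yml
├── package.json
└── tsconfig.json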

Running our application services with Docker

Now that we have our Docker configuration ready, we can get everything up and running in a matter of seconds.
We can achieve this by typing the following command in the root of our project, where our Docker configuration files are:

docker-compose up --build -d

The -d flag runs the containers in the background, which is important because you often want access to your command prompt while the containers are running, and --build builds the images before starting the containers.

Now, we should have our app image built and containers up and running. To check your running containers, execute the following:

docker ps

We see that our containers are up and running, and we can see on which port each and every one of them has been exposed and mapped.

It would be nice to see what is going on inside our containers, some kind of logs or something.

To see the output of your containers during their execution time, just execute:

docker logs {container}

This is the output of our Node app, the same as if we’d executed npm start, but now running in the container.

PgAdmin

We also see that our database management service is listening on the container’s port 80, mapped to port 8080 on the host. If we go to localhost:8080, we’ll get the database management login page where we can log into our system.

We’ll use our credentials from the pgadmin container’s environment variables, and now we have access to our database server.

You see that we already have our todo-db, and the todos table created.

Testing our /todos route

Okay, we have our database services and application up and running inside docker containers, so now we can test our todos endpoint.

I’ll send a simple curl request to our TodosController and pipe the output through a JSON formatter.

curl 'http://localhost:4000/todos' | json_pp

And there we have our data.

Conclusion

The point of using Docker for your development is that you can put together a bunch of cool services, set up a testing environment, work with different versions of tools, and see how your app behaves. Just edit your docker-compose file, add some new services, and play around.

Thank you for reading. I hope you learned something useful!

GitHub repo: https://github.com/CyberZujo/todo-app
