Building and running a Node.js, TypeScript, PostgreSQL app with Docker

Haris Zujo
Sep 1, 2020 · 9 min read

More and more companies are using JavaScript as their tech stack for both backend and frontend. It makes sense: it shortens the development cycle, reduces cost, increases efficiency, and gives the team a single stack across different parts of the system.

Today I’ll show you how to easily build a small web app using Node.js enhanced with TypeScript, with PostgreSQL as our database, and how to dockerize it.

This article starts with the basic setup of our Node.js app, Express server, and PostgreSQL configuration. The Docker part is described at the end of the story, so if you want to see the Docker setup for this app only, scroll to the bottom.

Creating a Node.js application

Since we said we’re going to use TypeScript, we have to set up our project to work with TypeScript right away.
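If you’re starting from scratch, the project itself can be scaffolded first (the folder name here is just an example):

```shell
# Create the project folder and generate a default package.json
mkdir node-ts-todos && cd node-ts-todos
npm init -y
```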

Next, we’ll install TypeScript and tslint in our project by typing the following:

npm install typescript tslint --save-dev

This command will install TypeScript as a dev dependency. After we have installed TypeScript, we’ll edit our package.json file and add a tsc script for accessing the TypeScript compiler. We’ll be using this command for building and starting our application.

After we’ve installed these packages, we’ll run the following command to initialize our tsconfig.json file where compiler options for our project are stored.

tsc --init

Since we’ll be using Express, it’s important to install the packages that help TypeScript identify Express and Node types, by typing:

npm i @types/express @types/node --save-dev

This way, TypeScript will be able to recognize Express classes and global Node types. For example, after you install the types package, you’ll be able to import Request and Response types from Express directly.

If we initialize tsconfig.json with the first command, we’ll get a file like this:

This is our tsconfig.json file with plenty of compiler options you can customize.

We’ll replace it with the following:

"compilerOptions": {
"module": "commonjs",
"esModuleInterop": true,
"target": "es6",
"moduleResolution": "node",
"sourceMap": true,
"outDir": "dist/src"
"lib": ["es2015"]

We are mostly interested in the outDir property, which specifies the output directory for the JavaScript transpiled from our TypeScript code.

Creating and starting our server using TypeScript

For now, our project structure looks like this:

I’ve created a dbconfig directory with the database configuration that we’ll use when initializing the Postgres connection.

In the server.ts file lies our server configuration looking like this:

We’ll explain the configuration as we go. First, we initialize our Express application, then move on to server-specific configuration such as setting up body-parser to handle incoming data. At the end, there is a public method we’ll use to start the server from our app.ts file. We also set up our routing to use a separate router configuration for getting all todos.

Here we call our start method and log success or failure.

Postgres configuration

Pooling is the process of creating connections and caching them for reuse. This avoids the resource waste of creating a new connection for every request.

Below is the explanation of the mentioned configuration properties:

I am using pgAdmin4 for accessing my database cluster and managing my databases.


A controller would look like this:

  • Import the pool from our database configuration class
  • Initialize a new connection and create an instance of a pg client
  • Send our raw query to the client’s async query method
  • Release the connection (THIS IS IMPORTANT!)
  • Return the fetched data

Always release your connection when using the pool. That way, the client will return to the pool of available connections.

Todos router

Now, all we have to do is register this router in our server.ts where we set up our server.

We import our todos router and tell our Express server that any time someone hits “/todos” in the URL, it should pass the request to the todosRouter instance, which handles all requests under that route.

Project structure

Building our application

First, we’ll run the following command from our terminal

npm run build

When we run this command, tsc looks up our tsconfig.json and reads the outDir we specified, so it knows the output directory for our transpiled files.

Make sure main in your package.json points to the same directory as the outDir you specified for TypeScript, since it will be the entry point for our application.

"main": "dist/src/app.js",

Our scripts section in package.json should be like this:

"scripts": {"build": "tsc","start": "tsc && node dist/app.js","test": "echo \"Error: no test specified\" && exit 1"},

Now, if you check your directories tree, you’ll see a new directory dist just as we specified in the tsconfig.json.

Testing our application

npm start

We have it up and running, and since we previously set up our Express server, database configuration, and router, let’s test our endpoint for todos.

Now, if we go to the browser and visit http://localhost:4000/todos (the port our server listens on), we should get a result from our database.


Dockerizing our application

To achieve that, we need to build an image of our application. Images are basically our packed applications, containing everything the application needs to run.

Containers are running instances of our images.

You can read more about containers in the Docker documentation.

Docker compose

We need a service for our web application running on Node.js, and a database service, which is PostgreSQL.

We use docker-compose to run multiple containers for our application.

Our docker-compose file would look like this:

If you take a look at our Postgres container in the docker-compose configuration, you can see that we are using the 10.4 Postgres image to build our container, and publishing port 5432 on our local machine, mapped to the container’s port 5432. That means if I want to access the Postgres instance running inside the container, I would connect to localhost:5432 with the defined username and password.

We also run our pgAdmin on port 80, which is our database management service where we can access our Postgres databases, create clusters, and do all kinds of operations.
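The compose file itself isn’t reproduced in this export. Based on the description (postgres:10.4 published on 5432, pgAdmin on 80, and the dbinit.sql mount), it might look roughly like this; the service names, the app port, the credentials, and the dpage/pgadmin4 image are assumptions:

```yaml
version: "3"
services:
  app:
    build: .
    ports:
      - "4000:4000"
    depends_on:
      - db
  db:
    image: postgres:10.4
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: todo-db
    volumes:
      - ./src/migrations/dbinit.sql:/docker-entrypoint-initdb.d/dbinit.sql
  pgadmin:
    image: dpage/pgadmin4
    ports:
      - "80:80"
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: admin
```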

Initializing our database schema on the container startup

- ./src/migrations/dbinit.sql:/docker-entrypoint-initdb.d/dbinit.sql

This line in our docker-compose file mounts the dbinit.sql file from our project into the container’s docker-entrypoint-initdb.d directory, so Postgres executes whatever SQL we wrote inside it when the container first starts.
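The contents of dbinit.sql aren’t shown in this export; a minimal version matching the todos example could be (the schema and seed rows are hypothetical):

```sql
-- Hypothetical schema; the article's actual dbinit.sql is not reproduced here
CREATE TABLE IF NOT EXISTS todos (
  id SERIAL PRIMARY KEY,
  title TEXT NOT NULL,
  done BOOLEAN NOT NULL DEFAULT FALSE
);

INSERT INTO todos (title) VALUES ('Learn Docker'), ('Dockerize this app');
```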


What this will do is create an image of our application, with a directory structure inside the image that holds everything we already have on our machine.

We specify which version of Node we want for our image, set the working directory, then copy everything from our src folder into the image.

Here we used an -alpine image. Alpine is a minimal Linux distribution with just enough in it to run your applications. It is beneficial when creating Docker images because the resulting image size is a lot smaller.

The container might be ready but our database might not!!!

What is that part with that URL? It looks like we’re also downloading and running something when building our image, some kind of a waiting process.

Yes, your container may be ready for communication, but the process you’re running inside of it may not. In this case, it’s our Postgres database. It takes a bit more time for it to start than our application. Hence, we are going to download a shell script that will delay our database connection call until it is ready to accept connections.
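Piecing the description together (an alpine Node base, copying the source, and downloading the wait-for-it script), the Dockerfile might look like this. The Node version, port, and service name db are assumptions, and wait-for-it refers to the well-known script from the vishnubob/wait-for-it GitHub repository:

```dockerfile
# Alpine keeps the image small; the exact Node version here is an assumption
FROM node:12-alpine

WORKDIR /usr/src/app

# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install

# Copy the source and compile TypeScript to dist/src
COPY . .
RUN npm run build

# wait-for-it needs bash, which Alpine doesn't ship by default
RUN apk add --no-cache bash

# Download the script that delays startup until Postgres accepts connections
ADD https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh /wait-for-it.sh
RUN chmod +x /wait-for-it.sh

EXPOSE 4000
CMD ["/wait-for-it.sh", "db:5432", "--", "node", "dist/src/app.js"]
```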

Our final project structure

Running our application services with Docker

docker-compose up --build -d

The -d flag runs the containers in the background (detached), which is important because you often want access to the command prompt while the containers are running, and --build builds the images before starting the containers.

Now, we should have our app image built and containers up and running. To check your running containers, execute the following:

docker ps

We see that our containers are up and running, and we can see on which port each and every one of them has been exposed and mapped.

It would be nice to see what is going on inside our containers, some kind of logs or something.

To see the output of your containers during their execution time, just execute:

docker logs {container}

This is the output of our Node app, the same output as if we’d executed npm start. It’s the output of our application starting, but now running in the container.


We’ll use our credentials from the pgadmin container’s environment variables, and now we have access to our database server.

You can see that we already have our todo-db and the todos table created.

Testing our /todos route

I’ll send a simple curl request to our TodosController and pipe the response through a JSON pretty-printer.

curl 'http://localhost:4000/todos' | json_pp

And there we have our data.


Thank you for reading. I hope you learned something useful!

GitHub repo:

