How to Deploy a Ktor Server Using Docker

Jorge R. · Published in The Startup · 4 min read · Sep 18, 2020
Photo by Julius Silver from Pexels

We’ve already seen how to architect a Ktor server application and how to test its most important layers. Now it is time to look at how we can approach server deployment using Docker.

Why Docker?

Docker is a technology (I think it is wider than a single piece of software) that allows us to deploy applications easily. It ranked as the #1 “Most Wanted” and #2 “Most Loved” platform in the 2019 Stack Overflow Developer Survey, and it makes it surprisingly easy to run an instance of many popular applications (or any application you set up yourself) using containers.

What is a container?

“A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.”

In other words, a container is an isolated piece of software that you can pack into an image (a container image, the output of a Docker build), ship to another machine and run there without depending on that machine’s environment.

Are we building a Docker image?

Yes! This is done with a Dockerfile placed in our repository’s root folder, which defines how our Ktor Docker image is created. This file includes all the commands needed to build the image, normally starting with a FROM instruction and optionally ending with CMD. Here I will only focus on the ones needed to build our Ktor image. You can see them in the following snippet:
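What follows is a minimal sketch of that Dockerfile rather than the exact file from the repository; the openjdk tags, the user name and the jar file name are assumptions on my side, so compare it against the real one:

# Building phase: compile the sources with Gradle inside a full JDK image
FROM openjdk:8-jdk AS build
COPY . /appbuild
WORKDIR /appbuild
RUN ./gradlew build

# Container setup: run the packaged jar as a dedicated user on a lighter JRE image
FROM openjdk:8-jre-alpine
RUN adduser -D -g '' ktoruser
RUN mkdir /app && chown -R ktoruser /app
# the jar path and name must match the Shadow output configured later in this article
COPY --from=build --chown=ktoruser /appbuild/build/libs/ktoreasy.jar /app/ktoreasy.jar
WORKDIR /app
USER ktoruser
# the application port is an assumption, taken from the docker run command used later
EXPOSE 3500
ENTRYPOINT ["java", "-server", "-jar", "ktoreasy.jar"]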

Building phase: since we are going to run a Java application, we start from an openjdk image and build our sources by running the gradlew build command.

Container setup: once the app is built, we can place it inside our container. Here we start from an openjdk JRE image, as we are no longer compiling Java, just running it. We create a new user, give it permissions where needed and set it as the user the container runs with. All that is left is copying our application files and defining the container entrypoint (this step is described in the Ktor documentation). As you can see in the COPY step of the previous snippet, the container expects a .jar file that we still cannot generate, so we need to configure how to pack our Java application into that jar.

Packaging up our application

To deploy a Java application, we need to package it up. We can automate this process using Gradle and the Shadow plugin. In the following snippet you can see how to apply this plugin in your build.gradle file:
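This is a minimal sketch of that configuration, assuming the Groovy DSL, Shadow 5.x and Ktor’s Netty EngineMain as the main class; the exact plugin versions and the archive name in the repository may differ:

plugins {
    id 'application'
    id 'org.jetbrains.kotlin.jvm' version '1.4.0'
    id 'com.github.johnrengelman.shadow' version '5.2.0'
}

application {
    // Ktor's Netty entry point; replace it with your own main class if you have one
    mainClassName = 'io.ktor.server.netty.EngineMain'
}

shadowJar {
    // keep the archive name in sync with the jar copied in the Dockerfile
    archiveBaseName.set('ktoreasy')
    archiveClassifier.set('')
    archiveVersion.set('')
}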

Note: the new Ktor 1.4.0 documentation differs a bit on how to do this step. If you find this setup no longer works, please review that documentation.

Once we have added Shadow, the jar file will be automatically generated inside the build folder when we build our app.
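If you want to generate that jar by hand, outside of Docker, you can run the Shadow task directly (the exact file name depends on the archive settings shown above):

./gradlew shadowJar    # the fat jar ends up under build/libs/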

How to launch Ktor server

Now that we have our Dockerfile in the repository's root folder, we can check out the code on the machine where we want to deploy our server and build our backend image (Docker needs to be available there). To do this, we can use the following command:

docker build -f Dockerfile --no-cache -t ktoreasy .

Note: the -f Dockerfile and --no-cache parameters can be omitted, but I prefer to use them to avoid problems while running this example.

Once this task finishes, Docker will have the ktoreasy container image available, and we can run it with the following command:

docker run --publish 3600:3500 --detach --name ktoreasy ktoreasy:latest

Hurraaayyyyy!!! 👏 👏 👏 👏 👏

Buuut… it is not working… 🤔

Don't panic, this is expected. You still need to connect the app to the database, and with the default application configuration environment (dev), it expects to find it inside Docker under the container name "db", as you can see in the following snippet:
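Here is a minimal sketch of that part of application.conf; the property names and the MySQL port are assumptions based on the description below, so check the real file in the repository:

ktor {
    deployment {
        port = 3500    # the container port we publish with docker run
    }
}

database {
    databaseHost = "db"         # the name of the database container inside Docker
    databasePort = 3306         # default MySQL port
    databaseName = "ktoreasydb"
}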

To run this example, you need a MySQL database called "ktoreasydb" reachable at databaseHost:databasePort. This is an interesting exercise to demonstrate how Docker works, because the application configuration is already baked into the Docker image. To change it, we have to edit our code (the application.conf file) so it points to our database, build the Docker image as before and run it again.

Yeah, I'm also too lazy to do that more than once...

Mmmm this is not as good as expected…

You are right. Why would Docker be such an amazing tool if we had to go through that process every time? In practice, this is not how you work when you have dependencies such as database instances. A Docker Compose file lets us define and run all the dependencies together. This file (docker-compose.yml) is already included in the GitHub repository:
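It looks roughly like this; this is a minimal sketch in which the MySQL image tag, the credentials and the exact port mappings are assumptions (derived from the port note below), so use the file from the repository:

version: "3"

services:
  db:
    image: mysql:8                   # the MySQL version is an assumption
    environment:
      MYSQL_DATABASE: ktoreasydb
      MYSQL_ROOT_PASSWORD: example   # placeholder credential
    ports:
      - "3308:3306"                  # host port 3308, as mentioned in the note below
  backend:
    build: .                         # builds the Dockerfile from earlier in this article
    depends_on:
      - db
    ports:
      - "3510:3500"                  # host port 3510, as mentioned in the note below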

Now you can just run the following in your project's root folder:

docker-compose up -d

Note: ports 3308 and 3510 must be free on the machine that will run this setup.

If you check your list of containers in Docker, you will see the KtorEasy group with the backend and database instances inside. Easy and elegant!
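You can also verify it from the terminal; the service names here are the ones assumed in the Compose sketch above:

docker-compose ps              # shows the state of the db and backend services
docker-compose logs backend    # prints the backend logs if something looks wrong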

If you want to give it a try, clone the GitHub repository linked at the end of this article, and don't be shy about leaving a comment or a PR if you feel you can add something. Both are welcome!

Have fun and happy “clean” coding!

PS: You could argue this is not a real-world example, but maybe in the future I can show how to approach microservices with on-demand scaling using Kubernetes…

Update: I will be updating the Ktor dependencies and merging extra features into master, so some of the explanations in this article refer to this state of the code:

https://github.com/mathias21/KtorEasy/tree/simpleCompose
