
Containerisation for iOS developers

How to run Vapor 4 server-side Swift code in any environment easily

Szabolcs Toth

--

Usually, mobile developers have nothing to do with backend technologies (servers, virtualisation, networks), but sometimes it is nice to understand a few basic concepts, especially when they can be beneficial. One of these concepts is containerisation.

It can happen that you want to change a small thing in your containerised app, e.g. use environment variables for passwords instead of hard coding them and pushing them with the code to a cloud repository.

In this tutorial I will introduce the basic concepts, which are enough to make small changes without the help of backend engineers.

What is containerisation?

The official description is that containerisation is a technique that allows developers to package and run applications in isolated environments, called containers. Containers are lightweight, portable, and consistent, which makes them ideal for deploying applications across different platforms and devices.

An easier way to understand it: containerisation means creating a (development) environment with the application and/or development tools once, recording the settings in a “blueprint”, and using this blueprint to deploy it on different machines or platforms.

Currently, there are two popular solutions: one is Docker and the other is Podman. The good news is that both follow the Open Container Initiative (OCI) standards and Podman is compatible with the Docker API, so if you know how to use one of them, you won’t have any problems with the other.

Install Docker

It is quite simple: follow the instructions in the Docker manuals. You need to download a .dmg image and install it.

You can start Docker from the Applications folder:

Install Podman

Podman can be downloaded from the Podman.io website. Alternatively, you can install Podman via Homebrew, although Podman documentation strongly discourages this method.

After installing, you need to open Terminal and create and start your first Podman machine:
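
For example, using the default machine settings:

    # create a new Podman virtual machine
    podman machine init
    # start it
    podman machine start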

You can start Podman Desktop from the Applications folder:

Start your first container

GUI applications can help a lot with monitoring, but all the magic actually happens in the command line interface. I promise, the basics are not complicated at all and are easy to remember.

Remembering the CLI commands helps you deploy your container in the cloud, where you will most probably use some Linux distro.
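
For example, checking the Swift version inside a container (the exact image tag and command here are an illustration):

    # pull the official Swift image and run swift --version inside a container
    docker run swift:latest swift --version
    # or, with Podman
    podman run swift:latest swift --version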

Output:

Our first docker command has two parts:

  • the first part, the image, defines what we want to run, including the version tag. By default it would use latest anyway; I only spelled it out to make the difference between the image and the command more visible
  • the second part, the command, specifies what we want to execute inside the container built from the image we’ve just pulled.

This way any machine can run Swift without installing it. You only need Docker or Podman.

If you run the command again, you will notice that it executes much faster, as we don’t need to download the image again. It is already on our machine.

List my local images

If you are interested in the images you have already downloaded and that are available locally, execute the following:
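
Either of these will do:

    # list the locally available images
    docker images
    # or
    podman images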

Output:

You can list your images with the desktop apps as well:

As you can see, the image is quite large, so from time to time you will want to delete unused images.
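
For example, removing the Swift image by name (you can also use its IMAGE ID):

    docker rmi swift:latest
    # or
    podman rmi swift:latest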

You can delete your images with the desktop app as well:

List my running containers

Often it is important to check whether our container is running and to find information about it (e.g. the port number used).
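
The standard command for this is:

    # list the running containers
    docker ps
    # or
    podman ps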

Output:

As you can see, there are no running containers, but we can list all containers, including the stopped ones, using:
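
That is, with the -a flag:

    # list all containers, including stopped ones
    docker ps -a
    # or
    podman ps -a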

Output:

You can list your containers with the desktop apps as well:

So far we have done something very simple: pulled an image from the repository and executed a command inside the container.

Build my first container

It would be nice if we could run multiple commands or compile our Swift code without the headache of somehow squeezing all of it into docker run.

Create a folder for your project:
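
For example:

    # create and enter the project folder used in the rest of this tutorial
    mkdir MyContainer
    cd MyContainer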

Within the folder create a file called Hello.swift and copy the following:
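
A minimal hello-world will do (the exact contents are an assumption):

    // Hello.swift
    print("Hello, world!")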

Let’s use our swift:latest image to start a new container, but this time we would like to compile our Hello.swift file.
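
Putting the pieces together (each part is explained below):

    # run an interactive, throwaway container with the current folder mounted into it
    docker run -i -t --rm -v ${PWD}:/usr/src/app swift:latest bash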

Don’t worry, it is not as complicated as it looks at first sight. Anyway, this is the longest command we’ll type in this tutorial, I promise.

docker run: this is how we start a container

-i -t: create an interactive shell by keeping STDIN open and allocating a pseudo-TTY. Never mind the details; all we care about is that these two flags give us a shell where we can type

--rm: this option creates a “one-time” container, which will be deleted once we exit from it

-v ${PWD}:/usr/src/app: we need our MyContainer folder inside the new container, as our Hello.swift is there. ${PWD} points to the folder from where we start the container. After the colon we define the /usr/src/app folder within the new container. So the contents of our current MyContainer folder will appear inside the container under /usr/src/app.

swift:latest: the image we want to use

bash: will open a Bash session

Once you execute the command above you will find yourself within the container:

Type the following commands:
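
The mounted folder is at /usr/src/app, so:

    # change to the mounted folder and list its contents
    cd /usr/src/app
    ls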

We have our Hello.swift file there, so let’s do something most iOS developers hardly ever do during their career 😎.

Let’s compile it from the command line: swiftc Hello.swift

It will create a Hello executable, so run it:
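
For example:

    # run the freshly compiled binary; with the hello-world source above it just prints a greeting
    ./Hello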

It works as planned. Now you can use exit to leave the container, and it will be deleted immediately.

If you check the MyContainer folder, you will see the Hello binary we’ve just created. Obviously, you cannot run it, as it was compiled for Linux and you are now back on your macOS machine.

Create your first image

So far we have used an image created by someone else. It would be useful, and fun as well, to create our own image, which can be shared.

First, we need to create a “blueprint”, called a Dockerfile, where we define all the components the container needs to be built. This is our plan; we can reuse it later to build as many containers as we want.

A Dockerfile supports quite a lot of instructions; you can check all of them here, if you are interested. We will focus on the ones you see often and can also find in the Vapor Dockerfile.

Create a Dockerfile in the MyContainer folder and copy the following:
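
A sketch that matches the explanation below (the paths follow the -v example above; the rest is an assumption):

    # use the official Swift image as the base
    FROM swift:latest
    # copy our source file into the image
    COPY Hello.swift /usr/src/app/
    # make that directory the working directory
    WORKDIR /usr/src/app
    # compile Hello.swift into a binary called Hello (exec form)
    RUN ["swiftc", "Hello.swift", "-o", "Hello"]
    # run the binary when a container starts
    ENTRYPOINT ["./Hello"]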

Let’s see what we have here:

  • FROM: define the image you want to use
  • COPY: copy our Hello.swift to /usr/src/app/
  • WORKDIR: set the previously used directory as working directory
  • RUN: execute build commands; here we use a JSON array, which is called the exec form in Docker terminology
  • ENTRYPOINT: define an executable

First, we define the image we want to use as the base for our own image, then we copy our file into a directory of our choice. To make our life easier, we set this new directory as the working directory, so we don’t need to use the full path. We simply compile our Hello.swift file; the -o flag defines the output name. Finally, we execute our new binary. Simple.

Build a container from my first image

We have the Dockerfile, the blueprint/plan; how can we build a container? Use the docker build or podman build command followed by the path where your Dockerfile sits.
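
From inside the MyContainer folder the path is simply the current directory:

    # build an image from the Dockerfile in the current directory
    docker build .
    # or
    podman build .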

Output:

If you run docker images or podman images you will see that our newly built image is there:

Yes, it was us who made that <none> image. We can do better, but before that, let’s build a container from our first image. This time we can use the IMAGE ID.
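
For example:

    # replace <IMAGE ID> with the ID shown by docker images / podman images
    docker run <IMAGE ID>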

Output:

It works!

You can create image from Dockerfile using Podman desktop app as well.

(I haven’t found the same option in the Docker desktop app.)

Build a container from my first image, a little bit smarter way

As we could see when using the desktop apps, we can give our image a name, so we don’t end up with <none>.

Use the -t flag to set a name and, optionally, a tag, separated by a colon.
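
For example (the name hello and the tag 1.0 are just illustrations):

    # build the image and name it hello, with the tag 1.0
    docker build -t hello:1.0 .
    # or
    podman build -t hello:1.0 .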

After running docker images or podman images you will see:

Now, you can use the image name to build the container:
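
Continuing with the illustrative name from above:

    # start a container from the named image
    docker run hello:1.0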

Output will be the same as before.

Build multiple containers and make them work together

There are scenarios where you need to run multiple containers that work and communicate together. A typical scenario is one container running your app server, which connects to a database server running in a different container. Sometimes you need an in-memory store for caching as well.

Docker Compose handles all the heavy lifting for you. You define which services you want to run, and Docker Compose is responsible for building the containers, setting the different variables and starting/stopping them when you want.

Docker Compose uses a YAML file, called docker-compose.yml.

Create your docker-compose.yml in the MyContainer folder.
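
A minimal sketch matching the description below:

    services:
      app:
        # build the image from the Dockerfile in this directory
        build: .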

Later, we will look at more complex docker-compose.yml files, which will make more sense. As our application is simple and has no dependencies, the file is simple too.

Note: previously, every docker-compose.yml started with a version: '3' line, which is not necessary anymore. You can read more about it here.

We define only one service, app. We tell Docker Compose that it can find the Dockerfile in the same directory as the docker-compose.yml.

From this single piece of information, using docker-compose up or podman-compose up, the container is built from our image and executes the command we set in the Dockerfile.

Output:

In the app-1 line you can see the result of the execution.

Although our Hello app doesn’t use a database, to demonstrate the concept we can add a new db service.
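
A sketch of the extended file; the Postgres version, the mapped directory and the variable values are assumptions:

    services:
      app:
        build: .
      db:
        # use the standard Postgres image from the remote repository
        image: postgres:16
        volumes:
          # map a local directory into the container, like -v in docker run
          - ./db_data:/var/lib/postgresql/data
        environment:
          # environment variables set inside the container
          POSTGRES_USER: vapor
          POSTGRES_PASSWORD: s3cret
          POSTGRES_DB: vapor_database
        ports:
          # map the container's port 5432 to port 5432 on the host
          - "5432:5432"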

For db we don’t have a Dockerfile, because we are happy to use the standard Postgres image available in the remote repository. So we define the requested image, with a version number, after image.

volumes allows you to map directories, as we did earlier in docker run with the -v flag.

environment: it tells Docker Compose to set the subsequent environment variables inside the container.

ports: we need to map the container’s port 5432 to port 5432 on our local machine. This is needed because otherwise we cannot reach the database from outside the container.

Once you have more than one service, you can start/stop them all together using:
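
For example:

    # start every service defined in docker-compose.yml
    docker-compose up
    # stop them all
    docker-compose stop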

You can start/stop a service individually as well:
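
Just add the service name:

    # start only the db service
    docker-compose up db
    # stop only the db service
    docker-compose stop db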

This can be useful if you make changes to only one service, but then you need to rebuild the service you changed.
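
For example:

    # rebuild a single service after changing it
    docker-compose build app
    # or rebuild and start it in one step
    docker-compose up --build app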

Use all we learnt with Vapor 4

The main objective of this whole exercise is to understand how the Dockerfile and docker-compose.yml that Vapor creates for us work, so that we can make simple modifications when we need to deploy our Vapor app.

Create a simple Vapor app:
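
Assuming the Vapor toolbox is installed (the -n flag simply answers no to the optional questions):

    # create a new Vapor project called Docker_Tutorial
    vapor new Docker_Tutorial -n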

Enter the Docker_Tutorial folder and open the Dockerfile.

The first thing you notice is that it is divided into two parts: Build image and Run image. You don’t need to worry about this too much; this is how back-end engineers decrease the size of the “final” run image, as they don’t pack in the compiler or other development tools, only the compiled app.

As you can see, in the Build image part we have already met all of the instructions.

Let’s see the Run image part. There are a few instructions we haven’t met yet:

  • ENV: Set environment variables
  • USER: Set user and group
  • EXPOSE: Describe the port used by your application
  • CMD: Specify a command to execute

Let’s run our app but this time as a containerised application:
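
One way to do this, using plain docker build (the image name docker_tutorial is just an illustration):

    # build the image from the Vapor-generated Dockerfile
    docker build -t docker_tutorial .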

Output:

Let’s check the images (docker images or podman images):

Let’s build a container from the image:
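
Publishing port 8080 so the app is reachable from the host (image name as above):

    # start the container and map port 8080 to the host
    docker run -p 8080:8080 docker_tutorial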

Output:

[ NOTICE ] Server starting on http://0.0.0.0:8080

We are not surprised, it works 🎉

If we want to stop the container, we can hit CTRL+C or open a new session (Terminal window) in the same folder:
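
In the new session, list the running containers:

    # find the running container and note its CONTAINER ID
    docker ps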

Output:

We need only the CONTAINER ID:
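
Then stop it:

    # replace <CONTAINER ID> with the ID shown by docker ps
    docker stop <CONTAINER ID>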

You can stop your container in the desktop apps as well:

If we use the -d flag with docker run or podman run, the container will run in the background, so we get the prompt back immediately.

Challenge 1 — environment variable with Dockerfile

We need to set environment variables in our Vapor 4 app using a .env file.

Create a .env.production file and copy the following inside: PASSWORD=s3cret

Change the routes function in routes.swift:
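
A minimal sketch of the change; the route and the response text are assumptions, only the PASSWORD variable name comes from the .env.production file above:

    import Vapor

    func routes(_ app: Application) throws {
        app.get { req -> String in
            // read the PASSWORD variable from the environment
            let password = Environment.get("PASSWORD") ?? "no PASSWORD found"
            return "Password: \(password)"
        }
    }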

After rebuilding the image and starting a container from the new image, we expect to see something like this:

Let’s build and run:
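
Using the same illustrative image name as before:

    docker build -t docker_tutorial .
    docker run -p 8080:8080 docker_tutorial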

What we get is different…

What happened? The .env.production file is in our folder, and we know that the app starts with the --env production parameter set in the Dockerfile.

Reading the Docker documentation, we can find this note:

“For historical reasons, the pattern . is ignored.”

We need to tell Docker explicitly to use our .env.production file, using the --env-file <file> option:
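
Again with the illustrative image name:

    # pass the variables from .env.production into the container's environment
    docker run --env-file .env.production -p 8080:8080 docker_tutorial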

Challenge 2 — environment variable with docker-compose.yml

We need to set environment variables in our Vapor 4 app using the docker-compose.yml file.

As we can see in the docker-compose.yml, we can set environment variables at the beginning of the file:
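
A sketch, assuming the shared environment block that the Vapor template places near the top of the file; PASSWORD is our addition, the other lines are illustrative:

    x-shared_environment: &shared_environment
      LOG_LEVEL: ${LOG_LEVEL:-debug}
      # our own variable, available inside the app container
      PASSWORD: s3cret

    services:
      app:
        environment:
          <<: *shared_environment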

In case we would like to use the .env.production file instead, we need to add it to our docker-compose.yml.
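
One way is the env_file key on the service (a sketch):

    services:
      app:
        env_file:
          - .env.production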

We need to (re)build the service.
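
For example:

    # rebuild the app service so the changes are picked up
    docker-compose build app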

Let’s start it.
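
And then:

    # start the app service
    docker-compose up app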

Now you have a basic understanding of how Docker and Docker Compose work. You can write a new Dockerfile and docker-compose.yml or modify the default ones as you need. I am sure your back-end engineer colleagues will appreciate the effort and will help you with further optimisation.

By streamlining the development process, Docker containers offer a consistent and portable environment, ensuring seamless deployment and scaling of Vapor applications. With Docker’s versatility, developers can confidently build, test, and deploy their Vapor projects across various environments, accelerating the delivery of robust, scalable backend solutions for iOS apps.
