Using nothing but Docker for projects

Raphael Amoedo · Published in Devjam · 6 min read · Apr 6, 2022

[Image: many programming languages and frameworks. Source: https://www.pngfind.com/mpng/iomTxhb_code-frameworks-docker-programmer-png-transparent-png/]

Imagine the following situation: you start working on a new project, perhaps in a programming language you're not used to. The project is there, and you should be able to run it.

You hope there's some documentation telling you what to do — which is not that common — and when there is, it often doesn't work. You need to know what to install, where to install it, how to set everything up, and so on. That's not an uncommon scenario, and you can expect it at some point. But what if there was a way to make sure this won't happen again?

Throughout this post we’ll see different approaches we could use to make this easier using only Docker.

Level One: Using alias with Docker

Example with Java + Maven:

Let's consider a Java project, for example. Usually, to run a Java application you run java -jar application.jar.

To generate the jar file and manage project dependencies, you can use many different tools, the best known being Maven and Gradle. Let's consider Maven for this example. Here are some Maven commands:

  • mvn dependency:copy-dependencies — Downloads the dependencies if they’re not downloaded yet.
  • mvn package — Builds the application and generates the jar. It also downloads the dependencies if they're not downloaded yet. If you want to skip the tests during the build, you can also pass the parameter -Dmaven.test.skip=true.

Assuming we need Maven 3 and Java 11, that's how you could use Docker:

alias java='docker run -v "$PWD":/home -w /home openjdk:11-jre-slim java'
alias mvn='docker run -it --rm --name maven -v "$(pwd)":/usr/src/mymaven -w /usr/src/mymaven maven:3-jdk-11-slim mvn'

This way, you can run any Maven and Java commands without having to install Java or Maven. You can test the commands by running java -version or mvn -version. Usually, the official Docker image of these tools gives you instructions on how to run it and you can just create an alias for that.
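The same trick works for other ecosystems. As a sketch, here is what equivalent aliases might look like for a hypothetical Node.js project (the node:18-slim tag and the mount paths are assumptions; check the official image's documentation for the recommended invocation):

```shell
# Hypothetical Node.js equivalents of the Java/Maven aliases above.
# The node:18-slim tag and the /usr/src/app working directory are
# assumptions; adjust them to match your project.
alias node='docker run -it --rm -v "$PWD":/usr/src/app -w /usr/src/app node:18-slim node'
alias npm='docker run -it --rm -v "$PWD":/usr/src/app -w /usr/src/app node:18-slim npm'

# Print the definitions to confirm the aliases resolve to Docker:
alias node
alias npm
```

As with the Java example, running node --version or npm --version afterwards is a quick way to confirm the aliases work.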

Pros:

  • If you don't need to use it anymore, you can just remove the related Docker image.
  • Easy to change the version.

Cons:

  • In this example, you still need to find out which Java version is used, which build tool is used (in this case Maven), and which version of that tool.
  • If you're dealing with a programming language you don't know, it will take even more time to understand what to do.
  • You still need to know which commands to run.

It's a fair approach, especially if you know what you're doing. But that doesn't come with the project itself. So, let's try to improve it a bit.

Level Two: Using Docker for running the application

That's where the Dockerfile starts to shine. We know how to run commands using only Docker, but how do we run the application?

A common Dockerfile for that situation could be:

FROM openjdk:11-jre-slim
ARG JAR_FILE=target/*.jar
ADD ${JAR_FILE} app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]

And you can build it as you normally build a docker image, for example:

docker build -t my-application .

You can see that it depends on an existing JAR file. As you saw in the previous section, we know how to generate it, but if this were another programming language or tool, we would be in trouble.
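Put together, the Level Two workflow might look like the following transcript (assuming the mvn alias from Level One and the Dockerfile above sit in the project root):

```shell
mvn package -Dmaven.test.skip=true   # 1. produce target/*.jar
docker build -t my-application .     # 2. bake the jar into an image
docker run --rm my-application       # 3. run the application
```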

It seems like a really minor improvement, but that helps a lot already, as you can see in its pros/cons:

Pros

  • The Dockerfile should come with the project, so it already tells you how to run the application, regardless of your knowledge of the programming language.
  • It also tells you which version and image are being used.
  • It inherits the pros from Level One if you also apply that approach.

Cons

  • You still need to find out how to build the application.
  • Which also means you still need to know which commands to run.

It's a good approach, and you can merge Level One and Level Two to achieve a better result. If the project ships with a Dockerfile, life gets a bit easier already. But let's see how life can be even easier.

Level Three: Using Docker for building and running the application

What if you didn't know anything about Java and Maven and you were still able to build the application with just one command you already know?

That's where Multi-Stage builds shine.

With multi-stage builds, you use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image.

How can that help us? Well… let's consider the previous Dockerfile. We needed an existing JAR file to build the Docker image. With a multi-stage build, the Dockerfile itself can be responsible for generating it. In a simple approach, that Dockerfile would look like this:

# ============= DEPENDENCY + BUILD ===========================
# Download the dependencies on container and build application
# ============================================================
FROM maven:3-jdk-11-slim AS builder
COPY ./pom.xml /app/pom.xml
COPY . /app
WORKDIR /app
RUN mvn package $MAVEN_CLI_OPTS -Dmaven.test.skip=true

# ============= DOCKER IMAGE ================
# Prepare container image with application artifacts
# ===========================================
FROM openjdk:11-jre-slim
COPY --from=builder /app/target/*.jar app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]

Let's see what's happening here.

From the first FROM to the first RUN statement, everything is related to Maven: copying the required files and running the command that downloads the dependencies and builds the application. It does that using the maven:3-jdk-11-slim image, and it names the stage builder.

After that, you will see the second FROM statement, which now uses the openjdk:11-jre-slim image. We see a COPY statement copying from a place called builder. But what is that place? What is builder?

That's the name we gave to the Maven stage in the first FROM statement, so the COPY is pulling the jar file from that stage. You can literally play with different FROM entries to build whatever you want, and the command to build the Docker image is still the same:
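One practical note: because the builder stage runs COPY . /app, everything in the project directory is sent to the Docker build context. A .dockerignore file keeps build output and other noise out of that context; a minimal sketch for this Java/Maven layout might be:

```
# .dockerignore (sketch): keep build output and VCS data out of the build context
target/
.git/
*.log
```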

docker build -t my-application .

Pros:

  • Regardless of programming language, if the project has this approach, you can run the application without installing anything else other than Docker.
  • It inherits the pros from Level One and Level Two.

It's worth saying that you can also use this Dockerfile with Docker Compose, which can be really powerful, especially if your application needs to expose ports, share volumes, or depend on other images.
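For instance, a minimal docker-compose.yml for this setup might look like the sketch below (the port, the database image, and the environment variables are illustrative assumptions, not part of the original project):

```yaml
version: "3.8"
services:
  app:
    build: .            # uses the multi-stage Dockerfile above
    ports:
      - "8080:8080"     # hypothetical port; match your application's config
    depends_on:
      - db
  db:
    image: postgres:14  # example dependency for illustration
    environment:
      POSTGRES_PASSWORD: example
```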

Appendix: Using Docker for every major command

Now that you know how to play with different FROM statements, another possible Dockerfile could be:

# ============= DEPENDENCY RESOLVER =============
# Download the dependencies on container
# ===============================================
FROM maven:3-jdk-11-slim AS dependency_resolver
# Download all library dependencies
COPY ./pom.xml /app/pom.xml
WORKDIR /app
RUN mvn dependency:copy-dependencies $MAVEN_CLI_OPTS

# ============= TESTING =================
# Run tests on container
# =======================================
FROM dependency_resolver AS tester
WORKDIR /app
CMD mvn clean test $MAVEN_CLI_OPTS

# ============= BUILDER =================
# Build the artifact on container
# =======================================
FROM dependency_resolver AS builder
# Build application
COPY . /app
RUN mvn package $MAVEN_CLI_OPTS -Dmaven.test.skip=true

# ============= DOCKER IMAGE ================
# Prepare container image with application artifacts
# ===========================================
FROM openjdk:11-jre-slim
COPY --from=builder /app/target/*.jar app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]

So now we have four different stages: dependency_resolver, tester, builder, and the application itself.

Whether we want to build the application or test it, we need the project dependencies, so there's a dependency_resolver stage. You can see in the second and third FROM statements that they both depend on dependency_resolver.

IMPORTANT: Here's something you need to know:

If you try to build the Docker image with docker build -t my-application ., only the first, the third, and the last stage (dependency_resolver, builder, and the application itself, respectively) will run. But why?

When you build the image, Docker works backwards from the final stage, which is the application itself. The last stage depends on builder (through its COPY --from statement), and builder depends on dependency_resolver (through its FROM statement). So the build runs in this order:

dependency_resolver -> builder -> application

Then… what's the tester stage doing there? Is there a way to reach it?

You can specify a target by using --target:

docker build --target tester -t my-application .

This is also compatible with Docker Compose.
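In Docker Compose terms, selecting a stage is done with the target key under build; a sketch (service name is hypothetical):

```yaml
services:
  tests:
    build:
      context: .
      target: tester   # stop the build at the tester stage
```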

Final words

We saw how to use only Docker to build and run applications and commands, removing the need for prior knowledge of a specific tool or programming language. It's also worth saying that, even though we used Docker for these examples, this would also work with other container runtimes, like Podman. I hope you find this post useful and that you spread the word about Docker.

In the next post, we will see how to actually develop using only Docker.

a.k.a. Ralph Avalon — https://github.com/ralphavalon — Developer at OLX — Passions: Best programming practices, Clean Code, Java, Python, DevOps, Docker, etc.