Web Development With Docker

Fernando Andrade
8 min read · Dec 6, 2021


In my last article I told my story with development environments. In this article I'll take a more practical approach.

Let's start from the beginning and build a simple "hello world" app. We are going to run it in Docker, which means building a Docker image, running a container from it, and mounting a volume so that we have a nice experience while coding.

A long time ago… (LAMP Star Wars intro animation)

Back in the day, for this kind of thing you would have to set up a full server environment on your work computer. There are simple ways to do that: if you just google "how to set up a LAMP stack" you will probably find a straightforward guide. I know that because that is exactly how I first started. LAMP is short for Linux, Apache, MySQL and PHP; if you are using another operating system you can search for XAMPP instead.

We are going to do this in Docker, but I recommend going through this process at least once in a virtual machine, preferably using Linux, since that will most probably be your production environment. It is not the 90's or the 2000's anymore, and it is way easier to set up and use Linux now. Try Ubuntu to begin with, and allow yourself to get familiar with it, make mistakes, and destroy it without consequences. Linux is powerful and won't take you by the hand. Make good use of a search engine like Google or DuckDuckGo; they are your best friends in this kind of adventure.

There are bundles that install all the components for you, but I would recommend installing everything manually: it is more time consuming, but it gives you more insight into what is happening. On top of that, it is a nice exercise to get used to the command line. In the end, shortcut or not, you end up with the Apache web server, PHP, and the MySQL database server.

After having the LAMP stack installed and configured, and with the server running, you can access localhost in your browser.

Then, in the folder /var/www/html, you could create a file named index.php with <?php echo "hello world"; as its content. If you now go back to the browser and refresh, it should greet you with the message we just wrote in the index file.
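If you prefer to create that file from the terminal, a quoted heredoc is a convenient way to do it without worrying about shell quoting (I write to the current folder here; on a real LAMP box the target would be /var/www/html, which usually needs sudo):

```shell
# Create the index file from the shell; the quoted 'EOF' delimiter
# stops the shell from expanding anything inside the PHP code.
cat > index.php <<'EOF'
<?php echo "hello world";
EOF
```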

The disadvantage of this approach is that on different computers you may end up with different versions of some packages, which can cause problems on one machine that do not exist on another. That is particularly true if we take into consideration that the development environment often differs from production, making problems arise in production that were not detected in development, complicating the debugging process and giving you unnecessary headaches.

How do we solve these discrepancies?

Virtualization

Enter virtualization: if you have locally, in your development environment, the same setup as on the server, you can detect possible problems in development and prevent them from reaching production.

Initially Vagrant was the solution. Taking advantage of a headless VirtualBox hypervisor and clever, almost magical, scripting, it delivered just that. You could have a full server on your development machine; it didn't matter what kind of machine you used, since in the end you had a virtualized computer with the same configuration as you run in production.

No more “It works on my machine.”

But Linux has a magical way of containerizing applications: by sharing the kernel of the operating system, it allows you to mount a complete new set of tools on top of it. You can sandbox applications, making them think they are running on a completely different Linux distribution while sharing the host kernel.

This way we do not need the additional layer of the hypervisor, tossing away that resource-expensive layer used to virtualize an environment for our app. This means less RAM and disk consumed, allowing us to run more containers with more apps at the same time.

A containerized application is sandboxed from your host machine. This is a neat feature if you like to keep your computer clean of libs and programs that are only useful for your web apps. For example, if you are writing an app in PHP 8 but still have to maintain some other project on PHP 5.4, you can have both projects, with all their dependencies, neatly contained in different containers.

Containers vs VM from docker documentation

Docker

Docker takes advantage of Linux containers and brings a framework to write and share container images in an easy way. On top of that, images can be built on top of each other, making it convenient to simply reuse existing images. You can find these images on Docker Hub; there are many images to play with and build on top of. For our project we will be using the official PHP image, more specifically one that comes with the Apache web server already installed.

Assuming you followed some guide to install and run Docker on your machine, let's pick up our simple hello world project and containerize it.

Create a project folder anywhere you desire. Inside it, create another folder called src and place your index.php inside that.

Next to your src folder, create a text file called Dockerfile; we are now going to define a Docker image to build our containers from. Think of it as a recipe that describes our server to Docker so it can create our containers. Your project folder should look like this:

Project
|-src
| |-index.php
|-Dockerfile
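The layout above can also be scaffolded in a few terminal commands (folder names as in the tree; the index.php content is our one-liner from before):

```shell
# Scaffold the project layout from the terminal
mkdir -p Project/src
printf '<?php echo "hello world";\n' > Project/src/index.php
touch Project/Dockerfile   # empty for now; we fill it in next
```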

Now open your Dockerfile on your favorite text editor and write the following:

FROM php:8.1-apache
COPY src/ /var/www/html
EXPOSE 80

The FROM keyword tells Docker that our image is based on the image from the PHP project with the tag 8.1-apache. We must be careful with tags, because we may get into unnecessary trouble if the PHP or Apache version is bumped up and our project is not ready for that. So avoid the tag latest for important things, and be quite specific if you must depend on very specific versions of your dependencies. Normally the Docker Hub page of the image you are using has a list of tags on it; the PHP one has a list so big that they link to the GitHub documentation page. Anyway, this will come with time, and you will build your experience as you tinker and explore Docker.
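As a sketch of that advice, here is the same FROM line at different levels of pinning (the patch-level tag below is only an example; check the image's tag list before relying on a specific one):

```dockerfile
# Too loose: "latest" can jump a major version under you
# FROM php:latest

# What we use: pins the PHP minor version and the Apache variant
FROM php:8.1-apache

# Even stricter: pin a patch release if your project demands it
# FROM php:8.1.1-apache
```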

For now let's just continue to our next step: building an image from the Dockerfile we just wrote. For that, open your terminal, navigate to your project folder, and run the following command:

docker build -t hello-docker .

With this command we invoke Docker and tell it to build the image located in the current folder (.) and tag it with the name hello-docker. If you are new to Linux, that last dot just indicates the current folder.

Docker will now download the parent image and add our specifications to make this image ours. In this process it will COPY the contents of the src/ folder into the folder /var/www/html/ inside the image. And in the last step it will EXPOSE port 80 to the outside world, so that our image now accepts traffic on that port.

Now that we have an image of our own, let's run a container based on this image. For that, run:

docker run -p 80:80 hello-docker

As you may have guessed, this command tells Docker to create a container from the image hello-docker; the -p <host port>:<container port> option tells Docker to forward all traffic from the host machine on port 80 to port 80 of our container. If you now open your browser and visit http://localhost you will be greeted by your PHP app.

the output of our hello world php script

🎉 You just created your first container.

But there is a problem: if you now try to modify your script and refresh the page, your changes do not take effect. For them to take effect you would have to re-run the first command to rebuild the image and then run a new container. That is not very useful and kinda makes it hard to code.

Introducing volumes. Docker volumes can be a complex subject, and we are not going to dive that deep today. But I will at least tell you that there are two types of Docker volumes: the first type allows us to share data between containers, and the second one allows us to share a folder from our host machine into the container, so that we can easily edit our files during development.

For that we are going to add a new option to the docker run command:

docker run -p 80:80 -v $(pwd)/src:/var/www/html/ hello-docker

The new parameter we pass is -v <local folder>:<folder in container>. The folder paths must be full paths; it will not accept relative paths. In the example command, the bit $(pwd) executes the command pwd, which returns the full path of the current folder, and substitutes in its text output. So if you have your project in the folder /home/Alice/hello-docker-project, the command being passed to Docker is:

docker run -p 80:80 -v /home/Alice/hello-docker-project/src:/var/www/html/ hello-docker
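Command substitution is a plain shell feature, so you can see exactly what Docker will receive by echoing the argument first; a minimal sketch, using /tmp as a stand-in project folder:

```shell
# The shell replaces $(pwd) with the output of pwd *before* the command
# runs, so docker only ever sees the absolute path.
cd /tmp
echo "-v $(pwd)/src:/var/www/html/"
# prints: -v /tmp/src:/var/www/html/
```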

The full path makes the command spill over onto a second line, which is why I used the shortcut with pwd. But now you can change the contents of your script and have the changes reflected in your browser; try it. I changed mine to say "hello docker".

Script output with the volume mounted

Before you push a container to production, it is important to remember some things. First and foremost, we are editing the file on our machine, in a folder that was mounted into the container; our image still has the old version of the file from build time. We never copied our new code into the image, so do not forget to rebuild before pushing the image to production. Second, the life of a container is limited to the time span of a single process. In our case that process is a server, so it keeps running until we use ctrl-c to kill it; but if we were using a container to run tests, as soon as the process terminates the container is also gone. Containers are lightweight, so we can run several at the same time, but always limit your container to one process at a time.

This article is already long. In it we reviewed how we would set up a development environment in the past, how to build and run our first containers, and how to mount volumes in our containers so that we can change our code and preview it in the browser. In the next article we will explore a more complex application that connects to a database and uses more containers. For that we will use a new tool called docker-compose.

Your next steps should be to explore more images and play with them; for instance, try the Nextcloud image and get a self-hosted app that works like Dropbox.

Thanks for reading.
