How we build our Magento 2 environment using Docker and continuous integration

ju5t
10 min read · Jan 22, 2018


In 2017 we began working on a self-service platform based on Magento 2.

We’re not developers, but we do happen to know a thing or two about hosting applications. We had seen lots of discrepancies between deployments in previous platforms and that fueled the decision to use Docker for development, acceptance and production.

This is a short story about our journey to run Magento 2 in Docker.

P.S. We had no idea what we were getting ourselves into.

Choosing a suitable Docker image

One thing was clear from the start. Our Docker image should only include the bare minimum to host Magento. We didn’t want to include other processes in the same container as our application. Not only is it bad practice, it also makes scaling more difficult. If you only have one application in your image that’s all you need to worry about. And that’s plenty already.

This requirement meant we had to find a solution for our database, storage, email and cron, because, well — you need them.

We already had a few database servers so that was an easy pick. Our static files are stored on S3-compatible storage from Fuga, our sessions are kept in Redis and e-mail is sent out with SMTP. That’s all pretty straight forward.

But cronjobs aren’t. They can often only run from a single location; duplicate crons are likely to cause havoc. We needed a solution for this, but it made sense to cross that bridge when we got there. We will, a little later on.

Unfortunately, none of the available Docker images met our requirements, as they would either deploy several services in a single container or had no intent to be used in production or at scale. Docker seemed to be used a lot for development and that didn’t tick all boxes for us.

We decided to make our own image.

How to dress in layers

Docker loves layers. The foundation of our application is an existing image, php:7.0-apache. We added all requirements to run Magento 2 on top of it and published a new image that can, in turn, be extended again. This doesn’t sound too difficult, but it proved to be far more challenging than we thought.

We wanted to use Docker for development, acceptance and production. This meant we needed fast, but above all working, initial deployments. Starting MySQL and Magento together should result in a reusable development or test environment. Once it’s running in production though, it shouldn’t run unintended (not to be confused with unattended) database upgrades.

All of this is done through our ENTRYPOINT. We won’t go into the specifics of Docker here but our entrypoint.sh bash script runs on each start. All of the required logic is pushed into ~100 lines of code.
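The real script is specific to our setup, but a simplified sketch of that logic, assuming environment variables such as MYSQL_HOST and UNATTENDED are passed to the container, could look like this:

#!/bin/bash
set -e

# Wait until the database port accepts connections (pure bash, no extra tools).
until (echo > "/dev/tcp/${MYSQL_HOST}/3306") 2>/dev/null; do
  echo "Waiting for database at ${MYSQL_HOST}..."
  sleep 2
done

# setup:db:status exits non-zero when modules still need to be installed or
# upgraded. We only act on that automatically when UNATTENDED is set, so a
# production container never runs an unintended database upgrade.
if ! bin/magento setup:db:status > /dev/null 2>&1; then
  if [ "$UNATTENDED" = "true" ]; then
    bin/magento setup:upgrade
  else
    echo "Database is out of date and UNATTENDED is not set; skipping upgrade."
  fi
fi

# Hand over to the main process: Apache for web containers, cron for the cron container.
exec "$@"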

It took a few dozen runs before all pieces fell in place.

Let’s go through a few of its features. entrypoint.sh is smart enough to wait for the database to become available. It checks the output of bin/magento setup:db:status for uninstalled plugins and installs them only if UNATTENDED is set. It knows what to do in both production and development environments and, last but not least, it includes cron.

Wait, what?

Yes. It includes cron.

Cronjobs in Docker

We didn’t plan on including cron in the same image when we started. For obvious reasons we didn’t want to have multiple services in the same container — it’s bad when you want to scale, remember?

Cron is no exception.

But Magento’s scheduled tasks need access to, preferably, the same source code that runs your front-facing shop. It made sense to add cron to the same base image to keep things DRY.

So we broke our own rule.

We added cronjobs with one restriction: a container may only run one service at a time. This made sure we could scale our web containers without having to worry about crons running simultaneously, and it made having two services in the same image acceptable. For us at least.
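In practice that means we start the same image twice with a different command: web containers run Apache, and a single other container runs cron. Something along these lines (the names and the exact cron command are illustrative):

# Web containers: the image's default command, Apache in the foreground.
# These can be scaled to as many replicas as we need.
docker run -d --name magento-frontend sensson/magento2

# Cron container: the exact same image and code, but only running the scheduler.
# We run exactly one of these, so scheduled tasks never overlap.
docker run -d --name magento-cron sensson/magento2 cron -f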

The source of it all: Magento

At this point we had a working base image without Magento’s source code.

We never intended to add the source code of Magento to our base image as extending a base image is much more powerful and flexible. A basic version of a Dockerfile running our application now looks like this.

FROM sensson/magento2
COPY src/cron /etc/cron.d/magento2
COPY src/ /var/www/html/
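
The src/cron file that ends up in /etc/cron.d is a plain crontab. A minimal version could look like this; the user, schedule and log path are up to you:

# Run Magento's scheduler every minute as the web server user.
* * * * * www-data php /var/www/html/bin/magento cron:run >> /var/log/cron.log 2>&1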

So how do we get Magento’s source in src?

We needed to be sure that we are always running the same version.

At first we forked the magento/magento2 git repository and copied its contents to /var/www/html. With the knowledge we had at the time, that seemed to make more sense than managing the zip download we often read about. Little did we know.

The Magento repository contained close to 60,000 commits and 1,500,000 objects. That had no impact on our deployment, but mixing our own commits with Magento’s was wrong. Even though we never touched the core, it didn’t fit our Docker ambition of building on top of existing solutions. Our changes were small but the repository was huge, and we sometimes struggled to find our own files. Even though git is great, we needed something else.

We had to go back to the drawing board.

Our initial Magento-git-based setup already used Composer to keep track of the plugins it required. Somewhere along the way we had missed that Composer can manage Magento itself as well. So that’s what we changed. We rewrote our environment to use a single, smaller composer.json and gave full control to Composer. composer install is all we need now to manage src.
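Bootstrapping src with Composer then comes down to something like the commands below. The community edition metapackage is Magento’s standard one; the custom module name is made up:

# Create a fresh src directory from Magento's Composer repository.
composer create-project --repository-url=https://repo.magento.com/ \
  magento/project-community-edition src

# Our own plugins become ordinary Composer dependencies, pinned in composer.lock.
cd src
composer require example-vendor/example-module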

Source control

The structure below is a snippet of what we keep in source control.

.
|-- Dockerfile
`-- src
    |-- app
    |   `-- etc
    |       |-- config.php
    |       `-- env.php
    |-- composer.json
    `-- cron

The two files in app/etc/ turned out to be special. Again, this is something we didn’t know when we began; in fact, our first deployments only generated these files when the container started. That doesn’t sound too bad, but we found out it has a few side effects.

The biggest of these was that the installer ran every time the container started. It didn’t reinstall an existing installation, but it did change the default admin password back to the one specified in your deployment. That’s definitely not good. On top of that, new containers would start considerably slower.

Adding env.php and config.php to your image solves this.

So, what do these files do?

Deployment configuration

env.php and config.php are referred to as a deployment configuration.

env.php holds your database credentials, among other environment-specific settings such as caching. entrypoint.sh, the script that starts our containers, sets those credentials from environment variables that are passed to the containers in our production, test and development environments.
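A simplified sketch of that part of entrypoint.sh, assuming variable names like MYSQL_HOST (the exact names are ours, not Magento’s):

# Write the database settings into app/etc/env.php when the container starts.
# Redis session and cache settings can be configured in the same way.
bin/magento setup:config:set \
  --db-host="$MYSQL_HOST" \
  --db-name="$MYSQL_DATABASE" \
  --db-user="$MYSQL_USER" \
  --db-password="$MYSQL_PASSWORD"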

Don’t store credentials in git.

Our env.php only includes the install and cache_types keys.

<?php
return array(
    'install' => array(
        'date' => 'Tue, 21 Nov 2017 15:06:38 +0000',
    ),
    'cache_types' => array(
        'config' => 1,
        'layout' => 1,
        ...
    ),
);

We found this to be the minimum requirement. We haven’t had any problems since, but there could be other useful settings that you want to keep in source control.

config.php is called the shared configuration. It contains the list of installed modules, themes and language packages, as well as shared configuration settings.
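Unlike env.php it contains no secrets, so the whole file lives in git. Magento maintains it for you, for example when you enable a module (the module name below is made up):

# Enabling a module updates app/etc/config.php; we commit the result.
bin/magento module:enable ExampleVendor_ExampleModule
git add src/app/etc/config.php
git commit -m "Enable ExampleVendor_ExampleModule"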

Continuous integration and how it fits into our development process from test to production

Docker was not new to us when we began working on our Magento implementation. Our API and a few other services were all Docker-based already. We knew how to build images from scratch and how to deploy them automatically with continuous integration.

We use Gitlab CI for all of our builds.

We only hit a few minor bumps down the road.

When we initially built our Docker image we ran composer install at the start of each container. Not the brightest idea. Your containers won’t boot as fast as you’d like, and that is not what we hoped for. But it is not even the most important reason to move it out of entrypoint.sh. Running composer install on each start means your image won’t be consistent each and every time. That’s bad. This had to be done in the preparation stage, before we build the container.

That seemed straightforward. We added PHP and Composer to our build container — yes, we build our images in a Docker container — to make sure we could install all plugins to src before building the image. That also meant our build image needed access to our custom plugins and Magento’s repository.

To access our custom plugins we needed to set up new deploy keys. This process is explained in Gitlab’s documentation. Deploy keys give build processes read-only access to repositories other than their own. As we had used them before, this was a straightforward process for us.

Composer

Composer was still relatively new to us. The first thing we ran into was authentication. We needed access to repo.magento.com, which meant a username and password had to be injected into our build and stored for Composer to use. Gitlab offers secret variables for this purpose.

We just had to add them to our build.

- composer config -a -g http-basic.repo.magento.com $COMPOSER_REPO_MAGENTO_COM_USERNAME $COMPOSER_REPO_MAGENTO_COM_PASSWORD

But it didn’t work. Our builds had PHP syntax errors. We used Docker. Why did Docker fail us?

As usual with computers, Docker did exactly what it was told. It built the container based on the instructions it was given. The problem was not Docker, it was us.

Our build container is Alpine-based. As I mentioned before we added PHP and Composer before we prepared our src directory. This installation is done with the following one-liner.

- apk add php7 php7-curl php7-openssl php7-json php7-phar php7-dom php7-iconv php7-mbstring php7-zlib php7-ctype php7-gd php7-simplexml php7-mcrypt php7-intl php7-xsl php7-zip php7-pdo_mysql php7-soap php7-xmlwriter php7-tokenizer php7-xml && curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/bin --filename=composer

This did exactly what we wanted. It installed all of the dependencies we needed. But it installed PHP 7.1. Yes, 7.1. And we were running 7.0 in the final image that we copied src to. We were using the wrong PHP version.

At first we tried to switch to our base image to prepare our build, but then we would also have had to install Docker in it. No, we had to find another solution. As it turned out, you can tell Composer which PHP version it should advertise to the repositories it uses. We added the snippet below to our composer.json and that solved the issue.

"config": {
"platform": {
"php": "7.0.23"
}
},

Our CI now builds perfect images that we can reuse in development, test and production, as many times as we like.

Development

Having a Docker image is great but above all it has to work. Google recently launched Container Structure Tests. This is a useful addition but we needed to be sure that our container runs — and returns a ‘200 OK’ when you try to access it.

Until now, having a Docker container hadn’t added much. We had done a lot of work to build a container, whereas you could also have set up a server for Magento in a few hours. It’s time for it to finally add value to our development process.

After the image is built it should be ready to rock and roll. That’s what the image is built for. So that’s what we do. With Docker Compose we can build and set up new environments by running just a few commands. Our docker-compose.yml includes a database, frontend and cron container. If you want to test a feature, all you need to run is docker-compose build && docker-compose up to get started.
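Our real compose file is larger, but a stripped-down sketch of the idea looks like this; the service names, port and variables are illustrative and match the examples earlier in this article:

version: '2'
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: magento
      MYSQL_USER: magento
      MYSQL_PASSWORD: magento
      MYSQL_ROOT_PASSWORD: root
  frontend:
    build: .
    container_name: magento-frontend
    ports:
      - "8080:80"          # matches the curl check in our integration test
    environment:
      MYSQL_HOST: db
      MYSQL_DATABASE: magento
      MYSQL_USER: magento
      MYSQL_PASSWORD: magento
      UNATTENDED: "true"   # allow automatic upgrades in dev and test
    depends_on:
      - db
  cron:
    build: .
    container_name: magento-cron
    command: cron -f       # same image, but only running the scheduler
    environment:
      MYSQL_HOST: db
      MYSQL_DATABASE: magento
      MYSQL_USER: magento
      MYSQL_PASSWORD: magento
    depends_on:
      - db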

Integration tests

Testing if it works shouldn’t require you to type — at all.

During our build we automate a process similar to our development workflow. Instead of opening a browser when the environment starts, we run a set of curl commands that mimic the tests we would otherwise run by hand.

Again — this was easier said than done.

What made it difficult is that we run Docker in Docker for our tests. Even though the container running CI has curl installed, it turned out it had no access to the actual deployment. The solution was easy in hindsight: we now run our curl commands within a container that has been given access to the host network with --net host. This is a near exact simulation of how a real visitor would reach our website.

Everything combined our simplified integration test looks like this:

docker-compose build
docker-compose up -d
echo "Waiting on the environment to come online"
sleep 240
docker logs magento-frontend || true
docker run --rm --net host appropriate/curl -I -s localhost:8080 | \
grep '200 OK'
docker-compose down

This is extendable of course. We can add as many curl commands as we need.
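For example, an extra check for another page, or a check that the cron container is still running; the path and container name below are hypothetical:

# Verify another page renders, not just the homepage.
docker run --rm --net host appropriate/curl -I -s localhost:8080/customer/account/login | \
  grep '200 OK'

# Verify the cron container did not crash on start.
docker inspect -f '{{.State.Running}}' magento-cron | grep true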

If the test passes we take our integration environment offline, tag the image with the branch it was built for and push it to our Docker registry.
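Tagging and pushing is plain Docker; Gitlab CI exposes the branch name as a built-in variable. The registry URL and image name below are illustrative:

# $CI_COMMIT_REF_SLUG is provided by Gitlab CI for the branch being built.
docker tag magento2:latest registry.example.com/magento2:$CI_COMMIT_REF_SLUG
docker push registry.example.com/magento2:$CI_COMMIT_REF_SLUG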

And finally, we have our image.

Closing words

Although we are still developing our portal, this setup has helped us a lot. We can now build, test and deploy new environments automatically, knowing that they will be consistent throughout the entire process.

It’s not perfect yet. We’re not testing how traffic will move through our load balancer, we should automate rollbacks and it would be great if we could scale Redis on our cluster automatically. But I think we’re close.

The next step? Scale up. But now we are using Docker — how hard can it be?*

*We know. It won’t be.

Open source

Our base Magento image is open source. It is available on Docker Hub as sensson/magento2.

We published a working example of what we have done on GitHub for those of you who prefer reading code instead.
