Setting up Symfony continuous deployment using Rancher

Filali Belhadj Zakariae
15 min read · Jun 6, 2017


Writing our application code is one thing; deploying it is another. Over the years, the deploy phase of web applications has been handled with different strategies. The two I've seen the most are using FTP to send the modified source code to the production server each time, and versioning the source code with Git directly on the production server. There is an endless debate about the pros and cons of both solutions, but let's face it: a production server should contain only production files (no .git folder), should be easy to maintain, and, most importantly, easy to scale.

Today, I will present Rancher, which is what we call in the Docker world an orchestration tool, and it will help us achieve our goals. With Rancher, we will build the architecture of our different environments (staging, production, …), easily deploy our code, and scale our containers when we need to. This article is part of a series about PHP and the containers ecosystem.

Let’s rock!

In the last article, we covered the first part (continuous integration) of our workflow. If the tests pass without a problem in the test stage, the next task is to build a Docker image of our code (the app container from the first article), push this image to our private Docker registry, and finally use Rancher to automatically upgrade the code image on our different servers.

Rancher

To install Rancher, first, we need to make sure that our server respects some requirements. Next, let's create a /srv/rancher folder, and in this folder we will create our docker-compose.yml file, which will describe our Rancher server environment.

version: "2"
services:
  rancher-server:
    image: rancher/server
    container_name: rancher-server
    restart: always
    volumes:
      - "./rancher-server/mysql:/var/lib/mysql"
  nginx:
    image: nginx:alpine
    container_name: rancher-nginx
    restart: always
    depends_on:
      - rancher-server
    ports:
      - "444:444"
    volumes:
      - "./nginx/conf.d/default.conf:/etc/nginx/conf.d/default.conf"
      - "./nginx/certs:/etc/nginx/certs"

The Rancher server is built from the official image, and its database is persisted on the host. The restart: always parameter will make sure Rancher starts each time our server is restarted.

The Rancher server is served behind an NGINX proxy; to use our SSL certificates we need to mount the certs into our container and set them in our NGINX configuration, so let's create the nginx/conf.d/default.conf file:

upstream rancher {
    server rancher-server:8080;
}

server {
    listen 444 ssl;
    ssl_certificate /etc/nginx/certs/cert.pem;
    ssl_certificate_key /etc/nginx/certs/private.key;
    server_name rancher.lekode.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://rancher;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

I use port 444 instead of the usual 443 since I deploy the Rancher server on the same machine as GitLab, so I needed to choose another port.

Last thing: let's create the nginx/certs folder where we will put our SSL certificate files (cert.pem and private.key).
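
If you don't have certificate files at hand yet and just want to test the setup, a self-signed pair can be generated with openssl (a quick sketch; replace rancher.lekode.com with your own domain, and prefer real certificates such as Let's Encrypt for anything public):

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -subj "/CN=rancher.lekode.com" \
    -keyout nginx/certs/private.key \
    -out nginx/certs/cert.pem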

The final folder structure will be like this:

rancher/
├── nginx/
│   ├── certs/
│   │   ├── cert.pem
│   │   └── private.key
│   └── conf.d/
│       └── default.conf
└── docker-compose.yml

Protip: In the previous article I shared a way of generating SSL certificates automatically for your NGINX proxy using Let's Encrypt; you can easily do the same thing for the Rancher server installation, or, if you already have your certificate files (.crt/.pem and .key files), you can use the configuration from this article.

Finally, we launch our Rancher stack: docker-compose up -d

Before doing any setup, we need to secure our Rancher server; we can enable access control from the ADMIN > Access Control menu.

Environments setup

In our case, let's suppose we need two environments: a staging environment which we will upgrade automatically (Continuous Deployment) each time we push new code to our master branch, and a production environment which will be upgraded manually (Continuous Delivery) each time we push a new version tag to our GitLab.

Protip: A lot of people are still confused about Continuous Deployment and Continuous Delivery. There are a lot of great articles on the internet explaining the major difference between the two approaches, and a picture that summarizes the difference was created by Yassal Sundman.

To add a new environment, first, we go to Manage Environments in the top-left of our Rancher UI, and click on Add Environment. A good approach to naming our environments is project-type if we manage many projects from the same Rancher server, or just type if we manage only one project (type can be test, staging, production).

In this case, we manage one project, so our environments are going to be Staging and Production.

Next, we need to choose the orchestration tool to use with Rancher. Each tool on the list has its own pros and cons, and probably the most used is Kubernetes (which I'll cover in a different article). Cattle is Rancher's core orchestration engine, and it's the one we will use in this article.

After creating our environment, we need to add a host for it from the Infrastructure > Hosts > Add Host menu. The host must have a supported Docker version installed on it. Follow the instructions on the Add Host page and we will be good to go.
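
The Add Host page generates a registration command to run on the host; it typically looks like the following (shown only as an illustration: the agent version, URL, and registration token are specific to your installation):

sudo docker run -d --privileged --restart=unless-stopped \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /var/lib/rancher:/var/lib/rancher \
    rancher/agent:v1.2.2 \
    https://rancher.lekode.com:444/v1/scripts/<REGISTRATION_TOKEN>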

If everything is OK, we can see our host in the Infrastructure > Hosts page.

In Cattle, a Stack is a group of services. In our case we will have an Application stack (you can name it as you want), where we will have our different services (NGINX, php-fpm, PostgreSQL, and load balancers), and we will also have a Let's Encrypt stack to generate SSL certificates for the application's domain name.

Docker registry

Before creating our stacks, we will need a place to push our different Docker images. There are many paid Docker registries that we can use (here is a good comparison article), but Docker also provides a nice open-source Registry, which we will use in this article.

To install the Docker Registry, let's create a /srv/registry/ folder on our server; in this folder we will create a docker-compose.yml file:

version: "2"
services:
  nginx:
    image: nginx:alpine
    container_name: registry-nginx
    ports:
      - "5000:5000"
    restart: always
    depends_on:
      - registry-server
    volumes:
      - "./nginx/conf.d:/etc/nginx/conf.d"
      - "./nginx/certs:/etc/nginx/certs"
  registry-server:
    image: registry:2
    container_name: registry-server
    restart: always
    environment:
      REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
    volumes:
      - ./data:/data

The registry-server service is based on the official Docker Registry image, and we persist its data on our host server. Note that the registry itself doesn't need to publish a host port: all external access goes through NGINX, which we will use as a proxy with the following configuration:

upstream docker-registry {
    server registry-server:5000;
}

server {
    listen 5000;
    server_name registry.lekode.com;

    # SSL
    ssl on;
    ssl_certificate /etc/nginx/certs/cert.pem;
    ssl_certificate_key /etc/nginx/certs/private.key;

    # disable any limits to avoid HTTP 413 for large image uploads
    client_max_body_size 0;
    chunked_transfer_encoding on;

    location /v2/ {
        if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
            return 404;
        }

        # Basic authentication
        auth_basic "registry.localhost";
        auth_basic_user_file /etc/nginx/conf.d/.password;
        add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always;

        proxy_pass http://docker-registry;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 900;
    }
}

The part of this configuration that needs some attention is the basic authentication: first, let's create the folder that will hold the password file, and then generate it:

mkdir -p nginx/conf.d
docker run --entrypoint htpasswd registry:2 -Bbn myusername mysecret > nginx/conf.d/.password

Let's run our registry:

docker-compose up -d

To test our repository, we can curl it:

curl https://myusername:mysecret@registry.lekode.com:5000/v2/

If everything works well, we will receive an empty JSON object: {}

Now, we can add our registry to Rancher from the Infrastructure > Registries > Add Registry menu. To use our custom registry, let's choose Custom and fill in the registry information.

Services

In general, the production environment differs a little bit from the development one: for example, in PHP there is no need for xDebug nor Composer, and for NGINX we will need to specify the server name, add SSL certificates, and so on. For this reason, we will create different images for the production services and push them to the Docker registry we created above.

To stay well-organized, let's create a new root folder where we will put all our Docker image sources; inside this folder we will have two sub-folders, one for the php-fpm image and the other one for NGINX.

mkdir -p registry-images/php-fpm registry-images/nginx

php-fpm

We will use almost the same Dockerfile as in the first article, with minor changes; in our local environment, let's create registry-images/php-fpm/Dockerfile:

FROM php:fpm-alpine
MAINTAINER Zakariae Filali <filali.zakariae@gmail.com>

ENV WORKDIR "/var/www/app"

RUN apk upgrade --update && apk --no-cache add \
    autoconf tzdata openntpd libcurl curl-dev coreutils \
    libmcrypt-dev freetype-dev libxpm-dev libjpeg-turbo-dev libvpx-dev \
    libpng-dev openssl-dev libxml2-dev postgresql-dev icu-dev

RUN docker-php-ext-configure intl \
    && docker-php-ext-configure opcache \
    && docker-php-ext-configure gd --with-freetype-dir=/usr/include/ \
        --with-jpeg-dir=/usr/include/ --with-png-dir=/usr/include/ \
        --with-xpm-dir=/usr/include/

RUN docker-php-ext-install -j$(nproc) gd iconv pdo pdo_pgsql curl bcmath \
    mcrypt mbstring json xml xmlrpc zip intl opcache

# Add timezone
RUN rm /etc/localtime && \
    ln -s /usr/share/zoneinfo/UTC /etc/localtime && \
    date

# Add php.ini and opcache configuration
ADD php.ini /usr/local/etc/php/
ADD opcache.ini /usr/local/etc/php/conf.d/

# Cleanup
RUN rm -rf /var/cache/apk/* \
    && find / -type f -iname \*.apk-new -delete \
    && rm -rf /var/cache/apk/*

RUN mkdir -p ${WORKDIR}

COPY entrypoint.sh /opt/entrypoint.sh
RUN chmod +x /opt/entrypoint.sh

WORKDIR ${WORKDIR}

EXPOSE 9000

ENTRYPOINT ["/opt/entrypoint.sh"]
CMD ["php-fpm"]

I added an entrypoint.sh script to set the right permissions on our app folder; its content is inspired by the official php-fpm entrypoint:

#!/bin/sh
set -euo pipefail

chown -R www-data:www-data $WORKDIR

# first arg is `-f` or `--some-option`
if [ "${1#-}" != "$1" ]; then
    set -- php "$@"
fi

exec "$@"

In the same directory let’s create a registry-images/php-fpm/php.ini configuration file:

short_open_tag = Off
magic_quotes_gpc = Off
register_globals = Off
session.auto_start = Off
upload_max_filesize = 100M
post_max_size = 100M
max_file_uploads = 20
max_execution_time = 30
max_input_time = 60
memory_limit = "512M"

And finally, let's configure the opcache extension using a registry-images/php-fpm/opcache.ini file:

opcache.memory_consumption=128
opcache.interned_strings_buffer=8
opcache.max_accelerated_files=4000
opcache.revalidate_freq=60
opcache.fast_shutdown=1
opcache.enable_cli=1
opcache.enable=1

Now, we need to build our php-fpm image with a tag, log in to our registry, and push our image to it. From the php-fpm folder:

docker build --tag="registry.lekode.com:5000/php-fpm:latest" .
docker login -u 'myusername' -p 'mysecret' registry.lekode.com:5000
docker push registry.lekode.com:5000/php-fpm:latest

NGINX

In the nginx folder, let’s create a Dockerfile:

FROM nginx:alpine

RUN rm /etc/nginx/conf.d/default.conf

ADD nginx.conf /etc/nginx/nginx.conf
ADD default.template /etc/nginx/conf.d/default.template

COPY entrypoint.sh /opt/entrypoint.sh
RUN chmod +x /opt/entrypoint.sh

ENTRYPOINT ["/opt/entrypoint.sh"]

Here, we use a specific entrypoint.sh file to set some parameters (the server name and the PHP host):

#!/bin/sh
set -euo pipefail

# Checking image parameters
MISSING=""
[ -z "${SERVER_NAME:-}" ] && MISSING="${MISSING} SERVER_NAME"
[ -z "${PHP_HOST:-}" ] && MISSING="${MISSING} PHP_HOST"

if [ "${MISSING}" != "" ]; then
    echo "Missing required environment variables:" >&2
    echo " ${MISSING}" >&2
    exit 1
fi

# Render the NGINX configuration from the template, then start NGINX
envsubst '$SERVER_NAME $PHP_HOST' < /etc/nginx/conf.d/default.template > /etc/nginx/conf.d/default.conf

nginx -g "daemon off;"

The entrypoint.sh script uses an NGINX configuration template, so let's create it in registry-images/nginx/default.template:

server {
    server_name ${SERVER_NAME};
    root /var/www/app/web;

    location / {
        try_files $uri @rewriteapp;
    }

    location @rewriteapp {
        rewrite ^(.*)$ /app.php/$1 last;
    }

    location ~ \.php(/|$) {
        fastcgi_pass ${PHP_HOST}:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS off;
    }

    error_log /var/log/nginx/lekode_error.log;
    access_log /var/log/nginx/lekode_access.log;
}

Now, let’s build our NGINX image and send it to our registry. From the NGINX folder:

docker build --tag="registry.lekode.com:5000/nginx:latest" .
docker push registry.lekode.com:5000/nginx:latest

Please note here that we didn’t log in to our Docker registry since we already did that in the php-fpm image part.

Application

Before building our initial application image, note that Docker, like Git, has a .dockerignore file listing the files and folders to exclude from the build context. For our application image we can ignore the Git files, the CI configuration, and the Dockerfile. In a .dockerignore file, let's put all this:

Dockerfile
.dockerignore
.gitlab-ci.yml
.git
.gitignore
README.md
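
The application Dockerfile itself is the one from the first article and isn't repeated here. If you need a starting point, a minimal sketch of a data-only image that simply ships the code under /var/www/app (where the php-fpm and NGINX images expect it) could look like this; the base image and the CMD are assumptions, not the exact image from the series:

# Minimal data-only image: it only carries the application code
FROM alpine:3.6
COPY . /var/www/app
VOLUME /var/www/app
# The container only needs to start once as a data container, so a no-op command is enough
CMD ["true"]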

Finally, we need to tag our application image with the two tags that we will use in our environments: latest for staging, and the version tag for production. Let's navigate to our app/ folder, then build and push the image:

docker build --tag="registry.lekode.com:5000/app:latest" --tag="registry.lekode.com:5000/app:v0.0.1" .
docker push registry.lekode.com:5000/app:latest
docker push registry.lekode.com:5000/app:v0.0.1

Application stack

Now, we can create our application stack using the Rancher UI. From the staging environment, let's click on Add Stack. We can use whatever name we want for our stack; what is awesome with Cattle is that we can use a docker-compose.yml file to create our stack automatically, so let's reuse the one from the first article about the Docker development environment. The changes only concern the images used by the different services.

PostgreSQL

The major change here is the label io.rancher.container.pull_image, which in this case tells Rancher to pull the Postgres image every time the container is started.

postgresql:
  image: postgres:alpine
  environment:
    POSTGRES_DB: lekode
    POSTGRES_USER: lekode
    POSTGRES_PASSWORD: secret
  stdin_open: true
  volumes:
    - /application/postgresql/data:/var/lib/postgresql/data
  tty: true
  labels:
    io.rancher.container.pull_image: always

NGINX

Unfortunately, Rancher doesn't support the volumes_from parameter to mount volumes from another service (the app service in our case) unless that service is a sidekick of the parent service. So we will drop the standalone app service and create two sidekicks: one (nginx-app) for the NGINX service, and another one (php-fpm-app) for the php-fpm service.

Here, we will use the images we built earlier and pushed to our private registry.

nginx:
  image: registry.lekode.com:5000/nginx:latest
  environment:
    SERVER_NAME: symfony.lekode.com
    PHP_HOST: php-fpm
  stdin_open: true
  tty: true
  volumes_from:
    - nginx-app
  labels:
    io.rancher.container.pull_image: always
    io.rancher.sidekicks: nginx-app
nginx-app:
  image: registry.lekode.com:5000/app:latest
  volumes:
    - /var/www/app
  labels:
    io.rancher.container.pull_image: always
    io.rancher.container.start_once: 'true'

php-fpm

Like the NGINX service, we will use a sidekick to mount our application files into the php-fpm service; we should not forget to link the postgresql service.

php-fpm:
  image: registry.lekode.com:5000/php-fpm:latest
  environment:
    DATABASE_HOST: postgresql
    DATABASE_PORT: '5432'
    POSTGRES_DB: lekode
    POSTGRES_USER: lekode
    POSTGRES_PASSWORD: secret
  stdin_open: true
  tty: true
  links:
    - postgresql:postgresql
  volumes_from:
    - php-fpm-app
  labels:
    io.rancher.container.pull_image: always
    io.rancher.sidekicks: php-fpm-app
php-fpm-app:
  image: registry.lekode.com:5000/app:latest
  volumes:
    - /var/www/app
  links:
    - postgresql:postgresql
  labels:
    io.rancher.container.pull_image: always
    io.rancher.container.start_once: 'true'

A few notes about sidekick services: when a sidekick is used as a data container, it must start once and without a tty, and it must be attached to its parent container using the io.rancher.sidekicks label (io.rancher.sidekicks: php-fpm-app in our case).

Protip: You can also create the different stack services using the Rancher UI; just click the Add Service button and fill in the service information.

Load balancers

Before setting up our HTTPS load balancer, we first need to manage the SSL certificates. If you have your own certificates, you can add them to Rancher from the Infrastructure > Certificates menu. If not, you can use Let's Encrypt from the Rancher catalog: create a new Let's Encrypt stack in the environment from Catalog > All > Let's Encrypt, fill in the information, and choose the Domain Validation Method, which will depend on your domain provider. Once the stack is launched, the certificates will be created and added to our Rancher certificates list.

Back in our Application stack, let's add a new load balancer from the Add Service menu > Add Load Balancer.

Our load balancer should target the NGINX service and use our SSL certificate.
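
For reference, the compose equivalent of that load balancer looks roughly like the sketch below (the HAProxy image tag depends on your Rancher version, and the certificate name must match the one listed under Infrastructure > Certificates; treat this as an illustration of the UI settings rather than a drop-in file):

# docker-compose.yml
lb:
  image: rancher/lb-service-haproxy
  ports:
    - 443:443

# rancher-compose.yml
lb:
  scale: 1
  lb_config:
    default_cert: symfony.lekode.com
    port_rules:
      - source_port: 443
        target_port: 80
        protocol: https
        service: nginx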

Once the staging environment is set up, let's create our production environment following the same steps, without forgetting to change our application service image from registry.lekode.com:5000/app:latest to registry.lekode.com:5000/app:v0.0.1.

Docker build/deploy runner

In the previous article about continuous integration, we registered our first GitLab CI/CD runner (the Symfony demo runner), which is used to execute the test tasks. The build and deployment tasks need a special Docker-based runner, since it will execute Docker commands to build and push our images. To register our new runner:

docker exec -it gitlab-runner gitlab-runner register -n \
--url https://gitlab.lekode.com/ci \
--registration-token Secret-token \
--tag-list "build,deploy" \
--executor docker \
--description "Docker Builds Runner" \
--docker-image "docker:latest" \
--docker-privileged

Note that to use Docker commands we need to set the --docker-privileged option.

Once our runner is registered, we need to set up our Docker private registry authentication in GitLab CI/CD, by adding a DOCKER_AUTH_CONFIG environment variable from the GitLab UI under Settings > CI/CD Pipelines > Environment variables. As mentioned in the docs, we can get the content of this variable by logging in to our Docker registry:

docker login registry.example.com --username my_username --password my_password
cat ~/.docker/config.json
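
The value to paste into DOCKER_AUTH_CONFIG is the auths section of that file; with the credentials used above it would look roughly like this (the auth field is simply the Base64 encoding of myusername:mysecret):

{
    "auths": {
        "registry.lekode.com:5000": {
            "auth": "bXl1c2VybmFtZTpteXNlY3JldQ=="
        }
    }
}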

GitLab CI/CD

If you're still reading this article, congrats champ, the hard part is done, but let's take a minute for all the brothers we lost during the configuration 😭

The final part of this article is about configuring GitLab CI/CD to build our application image when the tests pass, and then upgrade the right Rancher environment. So in our .gitlab-ci.yml, let's add some variables and set our different stages:

variables:
  REGISTRY_URL: registry.lekode.com:5000
  IMAGE_NAME: app
  BUILD_CODE_IMAGE: $REGISTRY_URL/$IMAGE_NAME:$CI_COMMIT_REF_NAME
  LATEST_CODE_IMAGE: $REGISTRY_URL/$IMAGE_NAME:latest

stages:
  - test
  - install
  - build
  - deploy

GitLab sets some helpful environment variables; we will use CI_COMMIT_REF_NAME, which is the branch or tag name for which the project is built (in our case it will be either master or the version tag).
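
For example, when we push a v0.0.1 tag, the two image variables expand as follows (a quick illustration using the registry URL defined above):

BUILD_CODE_IMAGE  = registry.lekode.com:5000/app:v0.0.1
LATEST_CODE_IMAGE = registry.lekode.com:5000/app:latest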

Staging workflow

The staging job will be triggered each time we push code to the master branch. From our test build files (versioned code + generated artifacts) we will build our application Docker image and push it to our private registry, so in our .gitlab-ci.yml let's add our staging job:

build-staging:
  stage: build
  dependencies:
    - test
  only:
    - master
  image: docker:latest
  tags:
    - build
  services:
    - docker:dind
  script:
    - docker login -u $REGISTRY_USERNAME -p $REGISTRY_PASSWORD $REGISTRY_URL
    - docker build --tag="$LATEST_CODE_IMAGE" .
    - docker push $LATEST_CODE_IMAGE

We need to tag this job with the build tag, so that the runner tagged with the same tag runs it. We also need to set our Docker private registry username and password (REGISTRY_USERNAME and REGISTRY_PASSWORD) as secret variables in GitLab CI/CD.

Finally, we need to upgrade our Rancher services. To do that, we can use either the Rancher CLI or this fancy Rancher GitLab Deployment Tool; both tools are based on the Rancher API. In our case, we will use the second tool, which has a Docker image. Our deploy job will look like this:

deploy-staging:
  stage: deploy
  dependencies: []
  tags:
    - deploy
  image: cdrx/rancher-gitlab-deploy
  services:
    - docker:dind
  only:
    - master
  script:
    - upgrade --stack Application --service php-fpm --finish-upgrade --sidekicks --new-sidekick-image php-fpm-app $LATEST_CODE_IMAGE
    - upgrade --stack Application --service nginx --finish-upgrade --sidekicks --new-sidekick-image nginx-app $LATEST_CODE_IMAGE

To use cdrx/rancher-gitlab-deploy, we need to add our Rancher authentication parameters as secret variables in GitLab CI/CD. Rancher offers two types of keys, account keys and environment keys; we can use either with this tool, but note that if we use account keys, we need to specify the environment in our commands. In this case, we will use environment keys, which we can create from the Rancher UI menu API > Keys > Advanced Options > Add Environment Keys. Now we can add our Access Key, Secret Key, and the Rancher host URL to our GitLab secret variables:

RANCHER_URL=https://rancher.lekode.com:444
RANCHER_ACCESS_KEY=<Rancher Access Key>
RANCHER_SECRET_KEY=<Rancher Secret Key>

The upgrade command needs the stack name and the service to update; in our case, we only update the sidekick images (our application image) and we force Rancher to finish the upgrade.

Production workflow

The production environment will not need development dependencies; instead, we will need to warm up Symfony’s cache for this environment. Let’s create a prepare-production job for this:

prepare-production:
  stage: install
  image: kariae/symfony-php
  dependencies: []
  tags:
    - test
  only:
    - tags
  services:
    - postgres
  variables:
    POSTGRES_USER: lekode-test
    POSTGRES_PASSWORD: lekode-test-pass
    POSTGRES_DB: lekode-db
    DATABASE_HOST: postgres
    DATABASE_PORT: "5432"
  artifacts:
    expire_in: 1 day
    paths:
      - vendor/
      - app/config/parameters.yml
      - var/bootstrap.php.cache
  before_script:
    - composer config cache-files-dir /cache/composer
  cache:
    paths:
      - /cache/composer
      - ./vendor
  script:
    - export SYMFONY_ENV=prod
    - composer install --no-dev --optimize-autoloader
    - php bin/console cache:clear --env=prod --no-debug
    - php bin/console cache:warmup --env=prod --no-debug

Finally, for our application image, we will tag it with $BUILD_CODE_IMAGE instead of $LATEST_CODE_IMAGE and update the Production environment; a sketch of what these production jobs could look like follows below.
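
The production build and deploy jobs are not detailed in this article; as a sketch, they could mirror the staging jobs, triggered on tags and deployed manually (Continuous Delivery). The RANCHER_* variables are assumed to hold the Production environment keys (or use account keys and specify the environment, as noted earlier):

build-production:
  stage: build
  dependencies:
    - prepare-production
  only:
    - tags
  image: docker:latest
  tags:
    - build
  services:
    - docker:dind
  script:
    - docker login -u $REGISTRY_USERNAME -p $REGISTRY_PASSWORD $REGISTRY_URL
    - docker build --tag="$BUILD_CODE_IMAGE" .
    - docker push $BUILD_CODE_IMAGE

deploy-production:
  stage: deploy
  dependencies: []
  tags:
    - deploy
  image: cdrx/rancher-gitlab-deploy
  only:
    - tags
  when: manual
  script:
    - upgrade --stack Application --service php-fpm --finish-upgrade --sidekicks --new-sidekick-image php-fpm-app $BUILD_CODE_IMAGE
    - upgrade --stack Application --service nginx --finish-upgrade --sidekicks --new-sidekick-image nginx-app $BUILD_CODE_IMAGE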

One last step: Database schema and migrations

The last step of our deployment is to create the database schema and run our different migrations; for that, we need to connect to our php-fpm container and run the required commands.
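
For example, assuming Doctrine and the migrations bundle are used, it could look like this from the host running the php-fpm container (the container name is a placeholder; check it with docker ps):

docker exec -it <php-fpm-container> php bin/console doctrine:database:create --if-not-exists --env=prod
docker exec -it <php-fpm-container> php bin/console doctrine:migrations:migrate --no-interaction --env=prod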

You can find the full code for the `.gitlab-ci.yml` [here](https://gist.github.com/kariae/09ab219d80c38206775d831df2684bbf).

Conclusion

I know, it may seem like a lot of work, but the gain you'll get in the long run is huge; this workflow is the one I actually use for the Symfony projects I lead. Sure, it can be improved, but for now, it does more than I need.

If you have any questions or notes about this setup, please feel free to write a comment here.


Originally published at lekode.com on June 6, 2017.
