Deploy your Laravel app with BitBucket Pipelines

Inspired by the GitLab post “Test and deploy Laravel applications with GitLab CI/CD and Envoy”, I wondered if I could do something similar with BitBucket Pipelines (I was already using BitBucket and I’m too lazy to move).

What I wanted was a way to push a given tag to my repo and have that trigger a pipeline that:

  • Deploys my code to the production server
  • Installs required dependencies (composer & node)
  • Generates / compiles assets (gulp)

All of this needs to happen before any traffic is served from the newly deployed version of the system (the final step switches the symlink to the web root).

I decided to stick with Laravel Envoy for running these tasks, but you could use Phing, Capistrano or something else if you’re that way inclined.

Setting up the server

I got started by logging into my production server (Ubuntu 16.04) and creating the bitbucket user that was going to run the deployment:

> adduser bitbucket

After setting a strong password, I added that user to the www-data group so that it had permission to write files into the web root.

> usermod -aG www-data bitbucket

Next on my local machine I generated an SSH key pair and then copied the contents of the public key part into the ~/.ssh/authorized_keys file for the bitbucket user on the production server.
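In case it’s useful, that step looks roughly like this (the key path and hostname are placeholders for your own values):

# on the local machine: generate a key pair for deployments
ssh-keygen -t rsa -b 4096 -f ~/.ssh/bitbucket_deploy

# append the public key to the bitbucket user's authorized_keys on the server
ssh-copy-id -i ~/.ssh/bitbucket_deploy.pub bitbucket@your-server.example.com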

Now that the keys were set up I checked that I could SSH into the server without a password. Success! This meant that I could run the deployment from my local machine to test, and iron out any issues, before running it in a pipeline and wasting any of my free 50 minutes of build time.

Editing the Envoy tasks

I made use of the Envoy.blade.php from the original article and added a task to install my JS dependencies with npm before running gulp:

@task('run_js')
echo "Running NPM"
cd {{ $new_release_dir }}
npm install
gulp
@endtask
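One thing to be aware of: this task assumes gulp is on the server’s PATH (e.g. installed globally). If gulp is only a dev dependency of the project, a small variation calls the project-local binary instead; a sketch:

@task('run_js')
echo "Running NPM"
cd {{ $new_release_dir }}
npm install
# use the project-local gulp binary so a global install isn't needed
./node_modules/.bin/gulp
@endtask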

I also updated the update_symlinks task to reflect the Laravel 4.2 directory structure and to take into account a shared uploads directory, so that user uploads don’t get lost between deployments:

@task('update_symlinks')
echo "Linking storage directory"
[ -d {{ $app_dir }}/storage ] || mkdir -p {{ $app_dir }}/storage
echo "Deleting {{ $new_release_dir }}/app/storage"
rm -rf {{ $new_release_dir }}/app/storage
echo "Linking {{ $app_dir }}/storage to {{ $new_release_dir }}/app/storage"
ln -nfs {{ $app_dir }}/storage {{ $new_release_dir }}/app/storage
echo "Linking uploads folder"
rm -rf {{ $new_release_dir }}/public/uploads
ln -nfs {{ $app_dir }}/public/uploads {{ $new_release_dir }}/public/uploads
echo 'Linking current release'
ln -nfs {{ $new_release_dir }} {{ $app_dir }}/current
@endtask

Once I’d made these additions I could deploy from my local machine to the production server. This setup didn’t account for the fact that I wanted to deploy a specific tag, so I added a $tag variable that’s used in the clone_repository task:

@task('clone_repository')
[ -d {{ $releases_dir }} ] || mkdir -p {{ $releases_dir }}
@if ($tag)
echo 'Cloning tag {{$tag}}'
git clone --branch {{ $tag }} --depth 1 {{ $repository }} {{ $new_release_dir }} -q
@else
echo 'Cloning repository'
git clone --depth 1 {{ $repository }} {{ $new_release_dir }} -q
@endif
@endtask
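For context, the tasks above reference variables ($repository, $releases_dir, $new_release_dir and friends) that are defined at the top of Envoy.blade.php, following the original article. Roughly, mine looked like this, where the host, repository URL and paths are placeholders for your own values (run_composer also comes from the original article):

@servers(['web' => 'bitbucket@your-server.example.com'])

@setup
    $repository = 'git@bitbucket.org:your-user/your-repo.git';
    $app_dir = '/var/www/app';
    $releases_dir = $app_dir . '/releases';
    $release = date('YmdHis');
    $new_release_dir = $releases_dir . '/' . $release;
    // options passed on the command line (e.g. --tag=...) become variables here
    $tag = isset($tag) ? $tag : null;
@endsetup

@story('deploy')
    clone_repository
    run_composer
    run_js
    update_symlinks
@endstory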

Now I could run:

> envoy run deploy --tag=deploy-0.0.1

In order to get that specific tag deployed onto the server! Next, on to creating the bitbucket-pipelines.yml!

Creating the pipeline

The first thing you have to do to use BitBucket Pipelines is enable it on your BitBucket repo. Once it’s enabled, the wizard in the BitBucket interface will prompt you to select a template (I think you can skip this step if you want; I just selected the hello world example).

Background: the bitbucket-pipelines.yml file is a definition that includes some commands to run, along with the name of a Docker image to run them in. The commands are grouped by branch, tag, etc., so you can run different commands depending on what has been pushed to the repo. To clarify, here is a simple example:

image: php:7.0.22
pipelines:
  tags:
    deploy-*:
      - step:
          script:
            - echo "Deploy"

In this example the script would print out the word “Deploy” whenever a new tag starting with “deploy-” is pushed to the repo.

Before I got onto writing out my pipeline, I needed to give BitBucket access to my server. I generated a new SSH key in the pipelines settings of my repo (the private key is automatically injected into the Docker container that runs your scripts, which is helpful!) and copied the contents of the public key part into the ~/.ssh/authorized_keys file for the bitbucket user on the production server. Finally, I fetched the fingerprint of my server so that it could be added to the known hosts (Step 2 here).
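If you want to double-check that the fingerprint BitBucket fetched really belongs to your server, you can compute it yourself from a machine you trust (the hostname is a placeholder):

# grab the server's public host keys, then print their fingerprints
ssh-keyscan your-server.example.com > host_keys.pub
ssh-keygen -lf host_keys.pub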

Back in the bitbucket-pipelines.yml file, I changed the hello world example that had been generated for me so that it used the php:7.0.22 image. You can use whatever image you want here, but this one came pre-installed with most of the dependencies that I needed. Next I added some scripts to:

  • Install any missing dependencies
  • Install composer
  • Use composer to install Laravel Envoy
  • Run the deploy command

The final version of the file is here (it took me about 10 attempts to get the magic combination of OS and PHP dependencies):

image: php:7.0.22
pipelines:
  tags:
    deploy-*:
      - step:
          script:
            - apt-get update && apt-get install -y unzip libmcrypt-dev openssh-client
            - docker-php-ext-configure mcrypt
            - docker-php-ext-install mcrypt
            - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
            - composer install
            - composer global require laravel/envoy
            - echo "Deploying Application"
            - ~/.composer/vendor/bin/envoy run deploy --tag=$BITBUCKET_TAG

The only thing worth noting is that BitBucket helpfully sets the $BITBUCKET_TAG environment variable containing the tag that was pushed; you can find out more about that here.

What next?

This is a pretty simple example of how you can use BitBucket Pipelines to deploy your Laravel app, though it’s probably not the most efficient. If you want to make the most of those 50 free build minutes then it might be worth considering enabling caching, or building a custom Docker image that contains all of your dependencies so that they don’t have to be installed on every run.
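As a rough sketch (I haven’t measured the savings on this particular pipeline), BitBucket Pipelines ships with predefined composer and node caches that you can enable per step:

pipelines:
  tags:
    deploy-*:
      - step:
          caches:
            - composer
            - node
          script:
            - composer install
            # ...the rest of the deploy script from above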

I’ve also not covered testing because there are already plenty of other tutorials for that: