Cloud Engineering on GCP as a Symfony Developer

Hello there, today we are going to talk about Symfony development in the cloud using Google Cloud Platform. This post will describe how to:

  • Start a project on Google Cloud Platform
  • Set up a basic Continuous Integration and Deployment system
  • Configure Logging
  • Hopefully, have some fun

Start the project

Let’s start with the inevitable: go to the GCP console and create a new project.

Then go to the App Engine section and enable the API. You will see a very quick tutorial on the right that you can follow (or ignore it, as we will cover the gcloud app command in a moment).
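If you prefer the CLI to the console, the same bootstrap can be done with gcloud (the project ID and region below are placeholders to adapt):

```shell
# Hypothetical project ID and region: adapt to your own setup
gcloud projects create my-symfony-blog --set-as-default
gcloud app create --region=europe-west
gcloud services enable appengine.googleapis.com
```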

It’s now time to bootstrap our Symfony application:

$ composer create-project symfony/skeleton blog && cd blog
$ composer require dotenv annotations template log google/cloud
$ composer require web-server-bundle maker tests --dev

You should now be able to see your website running locally by launching the built-in web server. That means it’s time to deploy our application to the cloud (how epic):
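For the local check, the web-server bundle we required above provides a dev server (assuming the standard Flex recipe):

```shell
$ bin/console server:run
# serves the app on http://127.0.0.1:8000 by default
```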

# app.yaml
runtime: php
env: flex

env_variables:
  APP_ENV: 'prod'
  DOCUMENT_ROOT: '/app/public'

$ gcloud app deploy

And that’s it (yes, for real).

Version control

Having a website is cool and all, but you probably want to collaborate with other people on this project and also have your sources under version control, right?

# Initialize your source control
$ git init && git add .
$ git commit -am "Initial commit"

Option 1: Cloud Source Repositories only

Using only Cloud Source Repositories might be a good option for closed source, or if your workflow doesn’t require pull requests. Here is how you can use it:

$ gcloud source repos create blog
$ git config --global credential.helper gcloud.sh
$ git remote add google https://source.developers.google.com/p/[PROJECT_ID]/r/blog
$ git push google master

Option 2: Github

We all love GitHub: the pull requests, the integrations with other services... so why not use it? Create a GitHub repository and, as described above for Google, add the remote and push your code to it.
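For example (the repository URL is hypothetical):

```shell
$ git remote add origin git@github.com:your-user/blog.git
$ git push -u origin master
```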

Continuous Integration

Now that our VCS is configured we can start defining our continuous integration workflow. I’ll use a super simple one in this blog post: I want every push to build a dedicated image of the application tagged with the commit number, and launch basic tests.

Let’s start with a super simple Dockerfile (and its .dockerignore); we will then create a cloudbuild.yaml file that describes the workflow:

# Dockerfile
# .dockerignore
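The exact contents of these files are not reproduced here, but a minimal sketch, assuming the official PHP image and a plain Composer install, could look like this:

```dockerfile
# Dockerfile (sketch, not the original)
FROM php:7.2-apache
ARG COMPOSER_FLAGS=''
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
WORKDIR /app
COPY . /app
RUN composer install --no-interaction $COMPOSER_FLAGS
```

The .dockerignore would exclude at least .git, var/ and vendor/ so local artifacts don’t leak into the build context.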

You should be able to build the image locally now using docker build. Next we want to automate this on Google Cloud, so that when we push some code we don’t have to manually build this container image to test it. Let’s do that by creating a cloudbuild.yaml file and calling the Google Cloud Build service:

# cloudbuild.yaml
steps:
# Build
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '--build-arg', 'COMPOSER_FLAGS=--dev', '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA', '.' ]
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'push', 'gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA' ]

After that, you want to enable the Google Cloud Build GitHub integration and push your modifications to a branch to see it in action.

Add some tests

Well, launching your tests is as easy as adding the following step in your cloudbuild.yaml:

# Test
- name: 'gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA'
  args: [ 'bin/phpunit' ]
  dir: /app
  env:
  - 'APP_ENV=test'
  - 'WHITELIST_FUNCTIONS=shell_exec,passthru,proc_open,proc_close'

Generate some tests to try it out and see them executed on Cloud Build:

$ bin/console make:functional-test IndexControllerTest
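The generated test is roughly equivalent to this sketch (based on the maker’s template; the route and assertion are ours to adapt):

```php
<?php
// tests/IndexControllerTest.php
namespace App\Tests;

use Symfony\Bundle\FrameworkBundle\Test\WebTestCase;

class IndexControllerTest extends WebTestCase
{
    public function testIndex()
    {
        // Boots the kernel and issues a request against the homepage
        $client = static::createClient();
        $client->request('GET', '/');

        $this->assertSame(200, $client->getResponse()->getStatusCode());
    }
}
```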

Continuous Deployment

The good news with Google App Engine is that it’s really easy to integrate with Cloud Build. You will have to adapt the build steps a little bit and trigger the deployment from there:

# cloudbuild.yaml
# Deploy
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['beta', 'app', 'deploy', '--quiet', '--image-url', 'gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA', '--version', '$BRANCH_NAME', '--no-promote']

Then you will have to grant permissions to the Cloud Build service account so that it can act on the App Engine service. This is done by giving it the App Engine Deployer, Service Usage Viewer and App Engine Service Admin roles, and by enabling the App Engine Admin API.
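In CLI form, the grants might look like this (the project number in the service-account name is a placeholder):

```shell
PROJECT_ID=my-symfony-blog
CB_SA='123456789@cloudbuild.gserviceaccount.com'  # hypothetical project number

for role in roles/appengine.deployer roles/appengine.serviceAdmin roles/serviceusage.serviceUsageViewer; do
  gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member="serviceAccount:$CB_SA" --role="$role"
done

gcloud services enable appengine.googleapis.com
```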

Next step: profit

Push your modifications on a new branch, wait a few minutes and you should see your branch deployed as a new version.

Logging and Error reporting

Sometimes it can get hard to debug what is actually going on in production. Usually you are given plain-text Apache logs and you have to guess what’s wrong (hoping there is a stack trace in there). But this is the part I love about modern cloud platforms: they come with plenty of great tools built in.

Nonetheless, you will have to configure Monolog so that the default logger sends logs directly to Stackdriver instead of the error output or a file. You might also want to add an ExceptionSubscriber that catches all your fatal & uncaught exceptions ;)

# config/packages/prod/monolog.yaml
monolog:
    handlers:
        main:
            type: fingers_crossed
            action_level: info
            handler: nested
            excluded_404s:
                # regex: exclude all 404 errors from the logs
                - ^/
        nested:
            type: service
            id: Monolog\Handler\PsrHandler

# config/services.yaml
services:
    Monolog\Handler\PsrHandler:
        arguments: ['@Google\Cloud\Logging\PsrLogger']

    Google\Cloud\Logging\PsrLogger:
        factory: ['Google\Cloud\Logging\LoggingClient', 'psrBatchLogger']
        arguments:
            - 'app'
// src/EventSubscriber/ExceptionSubscriber.php
namespace App\EventSubscriber;

use Google\Cloud\ErrorReporting\Bootstrap;
use Google\Cloud\Logging\PsrLogger;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\Event\GetResponseForExceptionEvent;

class ExceptionSubscriber implements EventSubscriberInterface
{
    private $logger;

    public function __construct(PsrLogger $logger)
    {
        $this->logger = $logger;
    }

    public function onKernelException(GetResponseForExceptionEvent $event)
    {
        $exception = $event->getException();
        // Forward the exception to Stackdriver Error Reporting
        Bootstrap::init($this->logger);
        Bootstrap::exceptionHandler($exception);
    }

    public static function getSubscribedEvents()
    {
        return ['kernel.exception' => 'onKernelException'];
    }
}

Now that the “hard” work is done, you can enjoy powerful logging directly in the Google Console ❤

Monitoring & Availability

At this point, without any optimization, as a Site Reliability Engineer I’m very tempted to stress that server out to see how it handles the load. So let’s try to break it using ApacheBench by simulating 15 concurrent users making a total of 1,000 requests:

$ ab -k -c 15 -n 1000 https://[PROJECT_ID].appspot.com/

Wait, what is this? Black magic?

Ohhh right, auto-scaling: depending on the traffic, CPU usage, memory usage and more, the system automatically adds instances to serve the requests.
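If you want to constrain or tune that behaviour, the Flex environment lets you do it from app.yaml (the values below are illustrative, not recommendations):

```yaml
# app.yaml
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 10
  cpu_utilization:
    target_utilization: 0.6
```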

I must say I’m very impressed by the performance out of the box. I did no optimization whatsoever, and I struggled to produce even a single failed request.

Also note that EVERYTHING can be graphed in Stackdriver: you can even turn any log into a metric, combine metrics, and add alerting on top of that.
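For example, a log-based metric counting application errors can be created in one command (the metric name and filter are examples):

```shell
gcloud logging metrics create php_errors \
  --description="Count of error-level application logs" \
  --log-filter='severity>=ERROR'
```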

This is observability heaven

A/B Testing (because why not?)

A/B testing is another feature you benefit from, built into App Engine. With our current setup, create a new branch with the modifications you want to test; it will get deployed automatically. Then split the traffic according to the A/B testing rule you want (usually you will want IP-based splitting if it’s a visual feature, but for a demo “random” is just fine).
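The split can also be driven from the CLI (the version names are examples; with our pipeline they match branch names):

```shell
gcloud app services set-traffic default \
  --splits=master=0.9,my-feature=0.1 \
  --split-by=ip
```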

This is the easiest tool for A/B testing I have ever seen… You basically have nothing to do: set up some metrics to see which version performs better and you are good to go.


There is so much more to cover that a book wouldn’t be enough. Embracing DevOps and becoming a Cloud engineer in a No-Ops context is tough and takes time to master, but I can assure you that it’s worth it.

I mean, look at that, in a matter of minutes (ok, maybe a few hours) you are almost “production ready”, have a working CI/CD pipeline, a fully monitored system and are even able to apply A/B testing!

I hope you enjoyed this post and that it helps newcomers in the wonderful world of Cloud Engineering. Feel free to reuse and share as much as you want and/or leave a comment!