A Sprint in the Life of DevOps at Yuk Recycle: Part 1

Azka Ali
PPL A-4 YUK RECYCLE
9 min read · Feb 28, 2019

Hi! My name is R. Azka Ali Fazagani (or just call me Azka) and the project class I’m taking requires me to write a blog about my project, so here it is :D.

I know there are tons of other informative and relevant pictures I could put here. But for now, here's a picture of a cat.

Chapter I — Prologue: An Awesome Background Story

The project team consists of six students as developers, a scrum master, and a product owner (in this case, Gojek), and our product will be called Yuk Recycle.

Yuk Recycle will have one Golang backend with a PostgreSQL database, along with two Flutter mobile apps. Golang is awesome for its great performance and we also wanted to try something new; PostgreSQL is, as far as we know, the best object-relational database; and Flutter has multi-platform support and lets us deliver code faster thanks to its ready-to-use Material Design widgets (which we may or may not use in our development. Who knows?)

All of these interconnected components will live happily together inside a monorepo, because we were given just one privately hosted Gitlab service by our beloved campus. This monorepo strategy has its own good and bad sides.

The good: we only have to maintain one single repository instead of multiple, so we can clone one and get everything. The bad: when you put all your stuff in one single box, it often gets messy. That can be fixed, of course, by being careful and not just putting anything anywhere, but I would still prefer a multirepo when you have multiple mobile apps and possibly a frontend in the future. There's also one small problem with this monorepo strategy.

When you put all your stuff in one single git repo, everything gets tracked. If this project ever gets big, there'll be more people pulling this repo to their own machines. Pulling the whole repository might then take some time and a big chunk of hard disk space, because it holds the full history of every single project it contains. In some cases it gets so big that a company's engineers don't pull the git repository at all; instead, they ssh into a private VPS and work on their code there using vim.

But of course those bad things only happen when the project gets big, and since this is just the first three months of the project, I don't think it'll be a problem here :D.

Chapter II — The One That Starts It All: Project Structure Planning

What do we do when we want to start a project? That's right! We get a good night's sleep on our comfy bed on a peaceful night… except it was not peaceful… and there was no comfy bed… and it was 4 pm. That's what I did on the first day after our team's first Sprint Planning. Then after that, I did what I was supposed to do.

Because of the monorepo strategy we're forced to use, I want to make sure that the files and folders are structured well so we won't have any problems in the six upcoming sprints. With that in mind, here's what we came up with.

root_project
├── backend
│ ├── app
│ ├── controllers
│ ├── models
│ ├── repositories
│ ├── services
│ └── utils
├── mobile
│ ├── assets
│ ├── common
│ ├── customer_app
│ └── mitra_app
└── scripts

There'll be three main folders — backend, mobile, and scripts. The backend folder will contain our Golang backend, with Postgres already integrated into its migration system, while the mobile folder will have four folders:

  • assets
    This folder contains all the necessary assets for our mitra and customer apps.
  • common
    This folder is supposed to act as a shared library between our apps.
  • customer_app
    As its name suggests, this is the project folder for our customer app.
  • mitra_app
    Similar to customer_app, but for our mitra app.

Coming up with this structure didn't take as little time as we wanted, so hopefully it will satisfy our needs for all the time it cost us.

Chapter III — A Never Ending Battle: Setting Up the Gitlab CI/CD Pipeline

A picture of me frustrated at the Gitlab CI/CD setup

Gitlab has this neat feature where you can set up a CI pipeline for your project, given it has a runner set up. Fortunately, the Gitlab service we're using already has this. So I thought I wouldn't have any trouble configuring the .gitlab-ci.yml file required to use this feature, right? Well boy, was I wrong.

Well yes, I was right at first. It all went smoothly. Their documentation was very clear to me and easy to follow. Not to mention there are also tons of references I could use (and so I did), and you can even set up a local Gitlab runner for testing purposes.

The setup would have been clear and straightforward if it weren't for my buggy Docker installation from the past. After all the local runner setup, hours of reading references and documentation, and also screaming in pain, here's what I came up with.

General structure

Gitlab CI uses a config file in yaml format. It consists of an arbitrary number of jobs, with constraints stating when they should run. A job usually looks something like this:

job_name:
  image: alpine
  script:
    - echo hello

The job_name part is the name of the job that's going to run our CI or CD stuff. It can be anything except some reserved words like script and image. image is the tag of an image from the Docker registry that will be used as the base image when running the job, and it's optional. The only required clause is script, which contains an array of shell commands executed by the runner. The rest can be found at https://docs.gitlab.com/ee/ci/yaml/

Build stages

Our pipeline will consist of three stages:

stages:
  - test
  - lint
  - deploy

The test stage will test the source code of all three services (the backend and the two mobile apps) and report coverage. The lint stage will lint our code (only partially done so far, because it's a lower priority task). The deploy stage is supposed to build Docker images from this project and push them to our privately hosted Docker registry.

The test stage

Our stack, Golang and Flutter, already comes with its own testing systems, and running the tests is simple: go test for Golang and flutter test for Flutter. Flutter wasn't a problem, because the init project already comes with an example test, so I could fiddle around, find what works best, and create a CI job for flutter test. The only problem was that I didn't have anything to test with Golang yet, so I had to wait for my teammates to push their work first before I could set up the test job. There went another 2 days until I got it in my hands and started working on it. Here's roughly what the jobs looked like at the time.
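For Golang, the job boils down to running go test. A minimal sketch of the backend:test job that the later jobs depend on (assuming the backend is a Go module living in backend/; the exact flags are illustrative):

backend:test:
  image: golang
  stage: test
  script:
    - cd backend
    # run every package's tests and print per-package coverage
    - go test -cover ./...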

For the mobile side, I created one test job template (at line 35 of the full config) and "extended" it into two other jobs: one for our customer app and the other for the mitra app. To do this, I used one of yaml's advanced features, which helped me a lot because I could design and write just one job and use it for two.

Another thing that can be added to these test jobs is coverage. Thankfully, Gitlab CI also supports capturing coverage from the command line output of its runner. All I needed to do was add a key called coverage to my Gitlab CI config, which captures the coverage from output matching its regex value. Mine looks like this for Flutter coverage.

coverage: '/total:\s+(statements)\s+\d+\.\d+\%/'

And when it's added to the rest of the job, it looks like this:

.mobile:unit-test: &unit-test-mobile
  image: rust20/flutter-build:latest
  coverage: '/total:\s+(statements)\s+\d+\.\d+\%/'
  stage: test
  variables:
    APP: ""
  script:
    - cd mobile/$APP
    - flutter doctor -v
    - flutter test --coverage
    - lcov --summary coverage/lcov.info
    - genhtml coverage/lcov.info --output=coverage

We can also analyze the details of our app's coverage by using an artifacts clause.

mitra:unit-test:
  <<: *unit-test-mobile
  variables:
    APP: "mitra_app"
  artifacts:
    paths:
      - mobile/mitra_app/coverage/

Artifacts work as temporary persistent storage: you can pass files between jobs, and they can also be downloaded. Here, they're used to store the coverage report, so you can download the HTML report produced by the genhtml command and look at the details. We did the same for the Golang test job.

It may not seem useful to download the coverage from our Gitlab jobs when we can generate it locally, but it helped us realize that Flutter v1.2.1 has this weird bug where it's virtually impossible to reach 100% coverage, which was kinda helpful to know.

Linting Stage

The linting stage is simple. Just lint the code! At the moment we only lint the Go code, but the Flutter implementation shouldn't be much different; see the sketch after the job below.

backend:lint:
  image: golang
  stage: lint
  dependencies:
    - backend:test
  cache:
    key: ${CI_COMMIT_REF_SLUG}
  script:
    - cd backend
    - go get golang.org/x/lint/golint
    - golint ./...
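As for the Flutter side, my guess is it would be little more than running flutter analyze, using the same template trick as the test jobs. A rough sketch (not implemented yet, so the job names are made up):

.mobile:lint: &lint-mobile
  image: rust20/flutter-build:latest
  stage: lint
  variables:
    APP: ""
  script:
    - cd mobile/$APP
    # flutter's built-in static analyzer
    - flutter analyze

mitra:lint:
  <<: *lint-mobile
  variables:
    APP: "mitra_app"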

Deployment Stage

The deployment stage for our Go backend consists of building the image and pushing it to our privately hosted Docker registry, so the implementation is pretty straightforward. The backend deployment job looks like this:

.deploy:backend: &deploy_backend_template
  image: gitlab/dind:latest
  stage: deploy
  tags:
    - docker
  dependencies:
    - backend:test
  cache:
    key: ${CI_COMMIT_REF_SLUG}
  before_script:
    - mkdir -p /go/src
    - ln -s /go/src/ backend/src
    - cd backend
    - docker-compose build

backend:deploy-staging:
  <<: *deploy_backend_template
  variables:
    VER: "staging"
  script:
    - docker tag $DOCKER_REGISTRY/$API_IMAGE:latest $DOCKER_REGISTRY/$API_IMAGE:staging
    - docker push $DOCKER_REGISTRY/$API_IMAGE:staging
  only:
    changes:
      - backend/**/*
      - .gitlab-ci.yml
    refs:
      - staging

I use docker-compose so it puts me at ease, letting me just manage the Dockerfile, which will be discussed at a later time.
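To give a rough idea without spoiling that later discussion: the deploy job above only assumes that docker-compose build produces the $DOCKER_REGISTRY/$API_IMAGE:latest image it then re-tags. A minimal sketch of such a docker-compose.yml (the service name is made up):

# docker-compose.yml (illustrative sketch, not our real file)
version: "3"
services:
  api:
    build: .
    # tag the built image so the CI job can re-tag and push it
    image: ${DOCKER_REGISTRY}/${API_IMAGE}:latest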

There's actually another job for dev deployment, which deploys our code from our dev branch. But all I needed to do was substitute the word staging with dev and we're done.
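In other words, something like this (the job name is my guess, based on the staging one):

backend:deploy-dev:
  <<: *deploy_backend_template
  variables:
    VER: "dev"
  script:
    - docker tag $DOCKER_REGISTRY/$API_IMAGE:latest $DOCKER_REGISTRY/$API_IMAGE:dev
    - docker push $DOCKER_REGISTRY/$API_IMAGE:dev
  only:
    changes:
      - backend/**/*
      - .gitlab-ci.yml
    refs:
      - dev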

As for our Flutter deployment, I only build the app and save the release APK as an artifact, so we can download it from the Gitlab page. Here's the implementation:

.mobile:build: &mobile-build
  image: rust20/flutter-build:latest
  stage: deploy
  script:
    - cd mobile/$APP
    - flutter build apk
  artifacts:
    paths:
      - build/app/outputs/apk/release/app-release.apk

mitra:build:
  <<: *mobile-build
  dependencies:
    - mitra:unit-test
  variables:
    APP: "mitra_app"
  artifacts:
    paths:
      - mobile/mitra_app/build/app/outputs/apk/release/app-release.apk
  only:
    refs:
      - staging
      - development
    changes:
      - mobile/mitra_app/**/*
      - .gitlab-ci.yml

Similar to its backend counterpart, there's also another deployment job for the customer app, but again, all I needed to do was substitute mitra with customer and it worked like a charm.

Optimization

Every run of a job fetches the images and dependencies anew, so there's a ton of redundant work, not to mention repeated statements everywhere. There must be a solution to this problem of ours, right?

The repeated statements can be solved using the extends clause. But implementing it gave me a whole lot of unnecessary trouble: apparently this feature is not yet supported when testing with the gitlab-runner exec command, even though it works wonderfully when running on a fully operating Gitlab service.

But there's another approach, which I already used: yaml's anchors and references. It works the same as the extends clause (which is actually Gitlab's buggy reimplementation of yaml's anchors and references), but without the bug.
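To make the difference concrete, here's a toy comparison (job names made up):

# yaml anchors and references (what this config uses):
.job-template: &job-template
  image: alpine
  script:
    - echo hello

my-job:
  <<: *job-template

# the same thing written with Gitlab's extends clause:
# my-job:
#   extends: .job-template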

As for the redundant image fetching, I got some info from a friend that images can be cached, but it's not working yet because of the outdated Docker version we're using. This might be fixed in the future. For the repeated dependency fetching, Gitlab CI also has a feature to cache some folders, but in our experience this caching only worked within one single pipeline, from one job to another; it didn't work when we wanted to carry the cache from one pipeline to the next.
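One gotcha worth noting: a cache clause only stores what's listed under its paths, and those paths have to live inside the project directory. So a working dependency cache for the Go jobs might look something like this (the GOPATH override and the path are assumptions, not our tested config):

backend:test:
  variables:
    # keep Go's package directory inside the project so Gitlab can cache it
    GOPATH: ${CI_PROJECT_DIR}/.go
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - .go/pkg/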

So anyway, here's the complete version: https://gist.github.com/rust20/853d86dbfa818c08f842bac9c964d336

There's a lot of improvement that can be made to this Gitlab CI setup. But there's still time. So I can only hope that I don't mess anything else up and can work faster to catch up with everyone else.
