CI/CD Tooling with Bash: Setting Up Key Components

Avery Roswell · Published in Tumiya · Aug 17, 2019 · 7 min read

Bash is a shell from the GNU project, originally created for the GNU operating system. Bash stands for ‘Bourne-Again Shell’, a reference to the Bourne shell (sh) that Stephen Bourne created as a new Unix shell while he was at Bell Labs.

There are several different shells, and as per the man pages:

Bash also incorporates useful features from the Korn and C shells (ksh and csh).

But what is a shell? A shell is a command language interpreter that interprets commands from the system’s standard input or from a file. The commands I’m referring to are human-readable instructions that get the computer to perform tasks. If we can find tasks normally executed with a shell that are considered toil, we can possibly automate them with a shell script. The shell script would simply be a sequence of shell commands in a text file.
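For instance, a toil task like clearing out stale build artifacts could be captured in a small script (the directory path here is hypothetical):

#!/bin/bash
# clean_artifacts.sh: remove build artifacts older than seven days
ARTIFACT_DIR="./build/artifacts"  # hypothetical artifact directory
find "$ARTIFACT_DIR" -type f -mtime +7 -delete
echo "Removed week-old artifacts from $ARTIFACT_DIR"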

Shells are accessed, and shell scripts are executed, through a terminal emulator (term) or a teletype machine (tty); both are software, as opposed to the old-school physical hardware terminals and ttys from back in the day.

Terms are often, if not always, available in continuous integration and delivery (CI/CD) pipelines. Compared to running other scripts in languages like Python or Ruby, a shell script’s dependency requirements are practically nonexistent. I must admit this is very appealing! Having no concerns about language or framework updates and dependencies translates into monetary savings. But as you probably know, you have to choose the right tool for the job. I could write and run my microservice with bash (e.g. bashttpd), but I find Golang to be a better tool in that case.

Bash Script Development

Please note I use the words script and application interchangeably. Consider a bash script that connects to a build platform service. The main purpose of the application is to start a build on the build platform and monitor that build, delivering the logs back to the user in real time or when the build is complete. An understandable use case would be connecting to a third-party build service for running iOS builds. Bitrise, MacStadium, and Buddybuild are good examples of companies offering this type of service.

In the case of Bitrise, the initial steps in developing a bash script to trigger builds would be:

  1. Examine the Bitrise API documentation. See https://api-docs.bitrise.io/.
  2. Verify the end-point’s parameters and responses via a curl command or Postman.
  3. Create a mock JSON response file to be read by the bash script during development (i.e. testing).
  4. Parse the JSON response.
  5. Handle application secrets.

Bitrise API documentation

A quick review of the Bitrise swagger documentation shows the end-point to trigger a new build.

End-point to trigger a new build on Bitrise

According to the documentation, the app slug is required as well as the build parameters:

{
  "build_params": {
    "branch": "string",
    "branch_dest": "string",
    "branch_dest_repo_owner": "string",
    "branch_repo_owner": "string",
    "build_request_slug": "string",
    "commit_hash": "string",
    "commit_message": "string",
    "commit_paths": [
      {
        "added": [
          "string"
        ],
        "modified": [
          "string"
        ],
        "removed": [
          "string"
        ]
      }
    ],
    "diff_url": "string",
    "environments": [
      {
        "is_expand": true,
        "mapped_to": "string",
        "value": "string"
      }
    ],
    "pull_request_author": "string",
    "pull_request_head_branch": "string",
    "pull_request_id": {},
    "pull_request_merge_branch": "string",
    "pull_request_repository_url": "string",
    "skip_git_status_report": true,
    "tag": "string",
    "workflow_id": "string"
  },
  "hook_info": {
    "type": "bitrise"
  }
}

The app slug is provided by the build platform service. It is the part of the URL that is used to find the particular resource you are requesting or trying to access.
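For instance, with the trigger end-point above, the app slug sits directly in the request path (the slug value here is a placeholder):

POST https://api.bitrise.io/v0.1/apps/YOUR_APP_SLUG/builds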

End-point Verification

From the Bitrise swagger documentation, all that’s required for the trigger build end-point is the application slug (app-slug) and the “build_params”. However, in my verification step, as shown below, the call fails when the “hook_info” is absent from the request body.
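A curl sketch of the same check (the token and app slug are placeholders, and the request body is trimmed to the essentials):

curl -X POST \
  -H "Authorization: YOUR_ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"build_params": {"branch": "master", "workflow_id": "unit_testing"}, "hook_info": {"type": "bitrise"}}' \
  "https://api.bitrise.io/v0.1/apps/YOUR_APP_SLUG/builds"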

Postman

I used Postman for my verification. It’s a well-proven tool for building and testing APIs. With Postman I’m able to capture the API response and compare it to the API documentation.

Without the “hook_info” in the request body, I receive a 400 response.
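Sketching the shape of that 400 body (the field values here are illustrative, and the real message text is omitted):

{
  "status": "error",
  "message": "…"
}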

Immediately I noticed a discrepancy between the keys given in the JSON response in the documentation and the keys in the actual response captured in Postman. The documentation gives “message” as the key, and there’s no mention of a “status” field. It always helps to go straight to the horse’s mouth: in this case, hitting the live API as opposed to just settling for the documentation.

Mock JSON Response

It would have been an awful experience creating a mock JSON response based solely on the Bitrise API documentation. The use of mocks in general is key to a smoother development cycle, one that doesn’t include hitting the live API service every time we run our script under development or test scenarios. Without the mock you would have several zombie builds that would have to be aborted. There’s also a cost implication to hitting live services.

Here’s my mock response from triggering a build:

{
  "status": "ok",
  "message": "webhook processed",
  "slug": "fake_app_slug",
  "service": "bitrise",
  "build_slug": "fake_build_slug_200",
  "build_number": 200,
  "build_url": "https://app.bitrise.io/build/fake-build-slug",
  "triggered_workflow": "unit_testing"
}

I’ve replaced real values with dummy ones in the key-value pairs. By following this approach of reading the API documentation, verifying the end-point with a tool like Postman, and then creating the mock JSON response, you can easily build mock responses for specific scenarios.

With my bash application I can read this mock response:

# file paths to mocks
TRIGGER_200_RESPONSE="./mocks/trigger_response_200.json"
BUILD_STATUS_ABORTED="./mocks/build_status_aborted.json"

# in testing mode, read the mock; otherwise hit the live service
if [ -n "$TESTING_MODE" ]; then
  result=$(<"$TRIGGER_200_RESPONSE")
else
  result=$(eval "$trigger_command")
fi

As shown above, in testing mode (the same as development mode in this context) the mock JSON response is read into the variable named result. Now the focus shifts to parsing the JSON response.
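A hypothetical invocation that flips the script into testing mode (the script name and flag match the ones used later in this article):

TESTING_MODE=1 ./ci-integration -c ./path-to/config.json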

Note how easy it is to read in file content, i.e.

result=$(<"$TRIGGER_200_RESPONSE")
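This redirection form is bash’s built-in equivalent of command substitution with cat, minus the extra process:

# same result, but spawns a cat process to do the job
result=$(cat "$TRIGGER_200_RESPONSE")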

Parsing the JSON Response

The usefulness of the script depends on our ability to parse the response received from the build service. jq is a fantastic application for working with JSON data. It truly is lightweight: “written in portable C with zero runtime dependencies”.

For example, consider the mock response for the trigger end-point, shown above. To extract the “build_url”:

build_url=$(echo "$result" | jq '.build_url' | sed 's/"//g')

The end-point response is stored in the result variable, which is printed and piped (|) to the jq executable. Using ‘.build_url’ returns the URL wrapped in quotation marks; to remove them I use the sed command.
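As an aside, jq can emit raw strings directly with its -r flag, which removes the need for the sed step entirely:

build_url=$(echo "$result" | jq -r '.build_url')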

What I’ve been discussing thus far can represent my application’s network layer. It’s the core of the application and without it the application becomes meaningless. Many web applications rely on such layers and are often designed to have this layer as a module that can be swapped out in favour of another way of implementing the same functionality, if desired.

Handling Secrets

What wasn’t mentioned earlier is the header required to trigger the build service. The HTTP request needs to carry an access token supplied to the Bitrise API. This raises the question of how I should store secrets in my bash script. I shouldn’t!

Normally, data is fed into shell scripts via command line arguments. However, an app slug and an access token are no regular arguments. Using this approach in a CI/CD pipeline would mean storing the sensitive data with the CI build server’s secrets. CI applications like Bamboo and GitLab give users the ability to enter secrets (usually via their web GUI). When a test automation build pipeline runs, the CI application can provide access to the secrets by setting environment variables at runtime in a specific job’s shell session.

Instead of having a secret variable for every secret that my application needs to consume, I prefer to have one secret variable in the form of a JSON or YAML file. For example, in JSON form:

{
  "theAccessToken": "secret-access-token",
  "token": "secret-app-token",
  "slug": "app-slug"
}

would be my application secret. This secret could be read into the script from an environment variable that is set at runtime on the CI build machine. I have a tendency to first write the secret environment variable into a temporary file (config.json) and then have my script read and parse the JSON from the file. So the actual command line argument passed would be the path to the file. But why this way?
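A sketch of that hand-off, assuming the CI server exposes the secret JSON in an environment variable named APP_CONFIG_JSON (the variable name is hypothetical):

# write the secret JSON from the CI environment to a temporary file
echo "$APP_CONFIG_JSON" > ./config.json

# the script itself only ever sees the file path
./ci-integration -c ./config.json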

This matches my development process. In my project folder is a config.json file with development-level secrets (as opposed to production-level). Also, my config.json is never committed to any code repository; it is git-ignored. The path to the file is what is given to my script via a command line argument:

./ci-integration -c ./path-to/config.json

For example:

# reading and parsing config file
CONFIG_FILE_CONTENTS=$(<"$CONFIG_PATH")
ACCESS_TOKEN=$(echo "$CONFIG_FILE_CONTENTS" | jq '.theAccessToken' | tr -d '"')
SLUG=$(echo "$CONFIG_FILE_CONTENTS" | jq '.slug' | tr -d '"')

where $CONFIG_PATH is the file path to config.json. The file contents are echoed to our parser, jq, and the desired field is extracted. Using the translate-characters command, tr, the quotation marks are stripped from the returned value. You can type “man tr” in your terminal for more information on the tr command; in fact, you can use man for details on most commands.
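As for how the -c flag itself lands in $CONFIG_PATH, bash’s built-in getopts can handle it; a minimal sketch:

# parse the -c <path> command line option
while getopts "c:" opt; do
  case "$opt" in
    c) CONFIG_PATH="$OPTARG" ;;
    *) echo "Usage: $0 -c <config-path>" >&2; exit 1 ;;
  esac
done

# refuse to run without a config file path
if [ -z "$CONFIG_PATH" ]; then
  echo "Usage: $0 -c <config-path>" >&2
  exit 1
fi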

Conclusion

Although I haven’t discussed bash commands and bash syntax at length, what I have emphasized up to this point is identifying the key components that allow for smooth development iterations. When designing an application, bear in mind that code is read and maintained more than it is written. In other words, think of how other developers will add to and maintain the code base. Having a straightforward way of reading in critical application data, such as configuration and mock JSON responses, makes a world of difference in improving the developer experience. Now we can dive deeper into the development of the bash script. For now, I’ve tagged the overall project setup with a few more bits.
