Tekton on IBM Cloud — Level 2 Parameters

Daniel Butler
4 min read · Jan 24, 2020


Now that we have our base Hello World pipeline from Level 1, let's build on top of it and look at parameters. First, we will look at static parameters. Then we will look at getting them from a GitHub webhook and setting up the pipeline configuration in IBM Cloud. As always, the code is on GitHub: https://github.com/danielbu-ibm/Tekton-Tutorial

Parameters From An Event

When we want parameters from an event, we need to look at the TriggerBinding that is attached to the EventListener. In Level 1 we already saw some parameters set in hello-trigger-binding.yaml, so let's start there.

Hardcoded Parameters
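Here is a sketch of what hello-trigger-binding.yaml looks like with hardcoded values (the resource name and values follow the Level 1 setup; the apiVersion may differ depending on your Tekton Triggers release):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: hello-trigger-binding
spec:
  params:
    # Static values baked into the binding file
    - name: repository
      value: "https://github.com/danielbu-ibm/Tekton-Tutorial"
    - name: branch
      value: "master"
```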

This is pretty clear: we have two parameters defined, repository and branch, and their values are simply hardcoded into the binding file. This is a good way of storing parameters that change rarely, or never. Keeping all our parameters in the binding files also makes configuration management easier, as we do not have to search through all our YAMLs to update values — just the binding YAMLs.

If we modify our pipeline definition we can pass these parameters down to the task and output them to the log. Here is what we need to add starting from the trigger template.
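A sketch of the updated trigger template (resource names such as hello-trigger-template and hello-pipeline are assumptions carried over from Level 1):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: hello-trigger-template
spec:
  params:
    # Names here must match the params in the TriggerBinding
    - name: repository
      description: The Git repository URL
    - name: branch
      description: The branch to build from
  resourcetemplates:
    - apiVersion: tekton.dev/v1alpha1
      kind: PipelineRun
      metadata:
        name: hello-pipeline-run
      spec:
        pipelineRef:
          name: hello-pipeline
        params:
          # $() expands the template params bound from the TriggerBinding
          - name: repository
            value: $(params.repository)
          - name: branch
            value: $(params.branch)
```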

You can see here we have added the params block, which defines a list of parameters and a description of each. These parameters are available to all the resourcetemplates defined below. It is important to note these names map directly to the TriggerBinding: the value in the binding file will be inserted into the template. If we take another look at the EventListener, we can see where we told Tekton to map the binding to the template.
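The EventListener from Level 1 does that mapping; a minimal sketch (the singular `binding:` field matches early Tekton Triggers releases, newer versions use a `bindings:` list):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: hello-listener
spec:
  triggers:
    - binding:
        name: hello-trigger-binding   # params are read from here...
      template:
        name: hello-trigger-template  # ...and substituted into here
```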

Within the PipelineRun block, we also added parameters to be passed into the pipeline. This time we give each one a name and a value. Here we are using the variable expansion syntax $() to evaluate the value inside the brackets. For $(params.repository) we tell it to look in the available params and return the value for repository; the same goes for branch.

Now we move down to the Pipeline.
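A sketch of the updated Pipeline, passing the parameters through to the task (names are assumptions from Level 1):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: hello-pipeline
spec:
  params:
    # Received from the PipelineRun created by the trigger template
    - name: repository
    - name: branch
  tasks:
    - name: hello-task
      taskRef:
        name: hello-task
      params:
        # Pass the pipeline's params down into the task
        - name: repository
          value: $(params.repository)
        - name: branch
          value: $(params.branch)
```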

This is the same as before, only this time we pass the parameters down into the task. Let's look at the Task now.
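A sketch of the Task using the v1alpha1 inputs block (the step image and echo command are illustrative):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: hello-task
spec:
  inputs:
    params:
      - name: repository
        type: string   # string is the default if type is omitted
        description: The Git repository URL
      - name: branch
        type: string
        description: The branch name
  steps:
    - name: echo-params
      image: ubuntu
      command: ["/bin/bash"]
      args:
        - -c
        # Task params are referenced with the full inputs.params path
        - echo "Repository: $(inputs.params.repository) Branch: $(inputs.params.branch)"
```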

Here we can see a Task can have a list of inputs. For now we are only going to look at the param input type; in later tutorials we will look at some more. Here we define the parameter name and its type — string is the default if not provided, and the only other type currently supported is array. This time, when we use the parameters, we need to use the full path inputs.params.&lt;field&gt;.

Once your pipeline definition is updated in GitHub you can go back to your pipeline in IBM Cloud and manually trigger it again. The latest version of your pipeline should run and you will see the parameters used in the output log.

Parameters From Payload

We can do better than this though. Let's think of a pipeline flow, a common flow would be every time a pull request(PR) in Github is created or updated execute the pipeline. We do this with GitHub webhooks, and this can be set up in the pipeline configuration on IBM Cloud. The webhook has a payload with information about the PR, so we can grab the repository URL and the branch name from the payload. We will start by modifying our pipeline definition to grab the details from the event payload. Then pass those parameters down to our task, where we will echo them to the log output.

Now we only need to update the binding file, swapping the hardcoded values for values taken from the event payload.
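The updated binding might look like this ($(event....) matches the Tekton Triggers syntax current when this was written; newer releases use $(body....) instead):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: hello-trigger-binding
spec:
  params:
    # JSON paths into the GitHub pull request webhook payload
    - name: repository
      value: $(event.repository.html_url)
    - name: branch
      value: $(event.pull_request.head.ref)
```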

When we trigger a pipeline from a GitHub webhook, we have access to its JSON payload. This is accessed through the event object. From the event object, we use the JSON path to the value we want. Here we take the repository's html_url value, while pull_request.head.ref refers to the branch the pull request came from. That is all we need to change in the pipeline definition.

Next, we need to edit the pipeline configuration in IBM Cloud the same as we did in level 1. From the pipeline configuration page go to the Triggers tab and add a new trigger. Select Git Repository, give it a name and configure it to point to your repository. Select When a pull request is opened or updated then save.

Now go back to GitHub and create a pull request against your repository. When you look at the pipeline runs on the Tekton dashboard, you should see a new run triggered by the creation of the pull request. In its output log you should see the repository URL and the branch name the pull request came from.

Conclusion

We now have parameters and have seen an additional method of triggering pipelines. We are getting to a place where we can do some genuinely useful things.


Daniel Butler

Software Engineer at IBM. Over 10 years' experience building automation solutions. "Automate All The Things!"