Postman and CI

Asheesh Misra
19 min read · Feb 24, 2019


Starting…

So the Postman collection is ready and newman has executed it perfectly. However, you need to run the collection manually (via newman) every time changes are introduced in the API under test. Also, as the API under test evolves, your collection (i.e. the requests grouped under it) will evolve too. As you update your Postman collection, you may want to keep older versions as well, so that you can roll back to them if needed. Doing this manually would be not only tedious but error prone as well. And what if more than one member of your team is working on the same collection (or, worse, on the same request)? Here comes the requirement for version control. Another thing: you would want your updated Postman collection to run against the API under test whenever you (or your team) make changes to it, which would again be a manual effort. Want to save yourself from these continuous and recurring troubles? Let’s git some version control and CI in action!

Why version control first?

We would want version control for multiple reasons; however, preventing loss of work, enabling collaboration and review, and supporting parallel work streams are the most important ones.

Consider a team where more than one member is writing Postman tests for an API. The simplest way to keep their work in one place is to use Postman ‘workspaces’. Multiple users can work on different requests under the same collection, adding tests for the API being tested. With all tests in one place, i.e. a single repository, all of them can be executed via a CI tool as and when needed (say, periodically at a scheduled time in an automated manner, or manually, at the click of a button). We will see how later in this tutorial.

Where do CI tools come into the picture?

The primary reason why software companies want to do Continuous Integration and/or Deployment is to ‘save time and money’. CI tools help to increase speed and consistency of deployments, by automating some or all steps of building, deploying, and testing software. From the ‘testing’ aspect, we are considering unit testing, functional testing, acceptance testing and load testing.

Jenkins

It is a very popular, free CI tool. We may understand it as an orchestration system capable of automating all the aspects of software delivery mentioned above.

Jenkins itself is a Java-based web application which can run on your desktop or on a server. We can download Jenkins from its site.

Jenkins Download Screen

Installation of Jenkins is pretty simple and for our purpose, selecting the default options should suffice.

Jenkins Installation — Setup wizard screen
Jenkins Installation — Destination Folder Selection Screen
Jenkins Installation — Customize Jenkins Screen
Jenkins Installation — Plugin Installation Screen
Jenkins Installation — Admin User Creation Screen

Once the installation is complete, Jenkins starts running on port 8080 on our system. We can load the URL http://localhost:8080 to see Jenkins’ dashboard (we might be required to log in first, using the credentials we specified while installing it).

Jenkins Dashboard

Now that we are on the Jenkins dashboard, we can create and configure jobs which perform build and deployment automation. At a high level of abstraction, the jobs created in Jenkins have three major phases:

  • Setup Phase: Jenkins reaches out to a VCS (Version Control System) like Git to fetch the codebase (the application’s code (and tests)).
  • Build Phase: We can configure multiple ‘build’ steps which are then executed in a sequence. These ‘build’ steps may include creating the build of application, doing static analysis, running unit tests, deploying the build, and then performing other tests (functional, acceptance and load), if any.
  • Post Build Phase: Jenkins collects results from tests and sends out notifications (via email, SMS, or any other medium) to the stakeholders. A minimal pipeline sketch of these three phases follows below.
Typical Job on Jenkins
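As a minimal sketch, and purely as an illustration (the repository URL and commands below are placeholders, not part of our demo project yet), these three phases map onto a declarative Jenkins pipeline roughly like this:

pipeline {
    agent any
    stages {
        stage('Setup') {
            steps {
                // Setup Phase: fetch the application code (and tests) from version control
                git 'https://github.com/example-org/example-app.git'
            }
        }
        stage('Build') {
            steps {
                // Build Phase: build, run static analysis, unit tests, deploy, further tests...
                sh 'npm install'
                sh 'npm test'
            }
        }
    }
    post {
        always {
            // Post Build Phase: collect results and notify stakeholders
            echo 'Send notifications (email, SMS, chat) from here'
        }
    }
}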

What Jenkins is capable of in a full blown CI/CD orchestration can be visualized as below:

Source: http://www.slideshare.net/asotobu/jenkins-20-65705621

Duh! My project does not have CI/CD

Now, not every project has a CI/CD pipeline in place; setting one up takes considerable time and technical skill. So, if our project (or company) is not quite ready for something like this, we can set up our own Jenkins server and use it to run our API tests.

This would basically happen in following steps:

  1. Create a Collection of requests (in Postman).
  2. Create a node project which stores Postman collection.
  3. Add newman dependency to the project.
  4. Create a git repository and push the node project’s folder to git.
  5. Use Jenkins pipeline’s DSL to run the Postman collection.
  6. Configure reporters on newman, to show results.

Ok, let’s get going.

Create Collection in Postman

We used the Imgur API to create a collection and run it in newman in an earlier two-part tutorial, and we will use the same collection here. Honestly, the tests themselves do not matter in the context of this tutorial; the purpose here is to introduce version control for our tests and run them in an automated manner.

Create a node project

We can verify whether node is installed on our system by running the following command at the command prompt (on Windows) or terminal (on Linux); if it is installed, we will get a response similar to the one below:

C:\>node -v

v10.15.0

Similarly, we can check for ‘npm’ as well:

C:\>npm -v

6.4.1

Now that we have confirmed that both node and npm are installed on our system, let’s navigate, via the command prompt, to the folder where the Postman collection is present. On my system, the collection is at the following location:

C:\Users\asheeshmisra\Documents\postman-CI-Demo

There is no other file in the folder except the collection file (.json). Let’s initialize a new node project in this folder:

C:\Users\asheeshmisra\Documents\postman-CI-Demo>npm init

We will be presented with a series of prompts so that our package.json file can be created. For now, we will keep the default options; we will update package.json to match our requirements later.

Add newman dependency to the project.

Ok, now that the package.json file has been created, let’s add newman as a dependency to it by running:

>npm install newman --save

We can confirm that the newman dependency got added to our package.json by opening it in any text editor (I am using Visual Studio Code and would strongly recommend it).

Since the ‘package.json’ file is open in VS Code, let’s edit it to suit our purpose.

We won’t need the “main” key-value pair, so let’s remove it. We may want to remove “license” as well. Also, we can edit the “test” entry under “scripts” to specify the newman command that runs our collection; to make it more informative, we will rename “test” to “api-tests”. After these changes, our package.json should look something like this:
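The original screenshot is not reproduced here; as a rough sketch, and assuming an illustrative collection file name and newman version, the edited package.json would look like:

{
  "name": "postman-ci-demo",
  "version": "1.0.0",
  "description": "Postman API tests executed via newman",
  "scripts": {
    "api-tests": "newman run postman-CI-demo.postman_collection.json"
  },
  "dependencies": {
    "newman": "^4.4.0"
  }
}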

Next, we may remove the ‘node_modules’ folder from the directory where we have created our node project. Now, if we look at the contents of our folder, we have:

  • package.json: contains information about how to run our test(s) and any related dependencies for their execution.
  • Postman Collection: contains the API tests we wrote in Postman and exported as Collection.

Create a git repository and push the node project’s folder to git

Next, let’s initialize a git repository for our project (git must be installed)

>git init

If we now check the status of our repository:

>git status

Let’s add all the files to the repository and then check the status once again.

>git add --all

>git status

The files are now a part of the repository, therefore, let’s commit our changes in git.

>git commit -m "Initial Commit"

Now, let’s take things one step further: create a repository on GitHub and ‘push’ our git repo’s contents there. So, we navigate to GitHub, log in and create a new private repository with the same name as our project.

NOTE: Microsoft acquired GitHub a few months ago, and users can now create private repositories for free. This was a paid feature before the acquisition.

Github — New Repository Creation Screen

When we click the ‘Create Repository’ button, we land on the screen above, which tells us how to move forward with syncing local and remote data. As we have already created a git repository locally, we will use the git remote command to point it at GitHub and push our repo’s contents there, so let’s enter the following command at the command prompt:

> git remote add origin https://github.com/asheeshmisra/postman-CI-demo.git

And then

> git push -u origin master

If we now refresh our GitHub repository, we will see the contents that we pushed.

Now that we have the collection in place, let’s move to Jenkins, our CI tool.

Use Jenkins pipeline’s DSL to run the Postman collection

Let’s quickly navigate to our Jenkins instance and create a new job by clicking the ‘create new jobs’ hyperlink. Jenkins offers a few ways to set a job up; for our purpose we will specify our job’s name, select the ‘Pipeline’ type and click the OK button.

On the following screen, we need to configure our job. We will leave the ‘General’ section as is and scroll down to the ‘Pipeline’ section.

We will select the ‘Pipeline script’ option from the ‘Definition’ dropdown and then write a simple script in the ‘Script’ section to run our collection. To get a starting point, we can select ‘Hello World’ from the sample dropdown in the ‘Script’ section.

Now we have a template which we will modify. We can add ‘stages’ to the script to segregate the steps we want to run, so we will do that. The first thing we want the script to do is fetch the code from the GitHub repository. Second, we want Jenkins to install all the dependencies required for the code to run. Third, we specify the command to run our collection; note that this is NOT the newman command itself but rather the script key we defined for it in our package.json. After all these modifications, our script should look something like this:
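A sketch of the pipeline script at this point (the stage name is illustrative; the repository URL is the one we created above):

pipeline {
    agent any
    stages {
        stage('API Tests') {
            steps {
                // Fetch the collection and package.json from the GitHub repository
                git 'https://github.com/asheeshmisra/postman-CI-demo.git'
                // Install the dependencies declared in package.json (newman)
                sh 'npm install'
                // Run the collection via the script key defined in package.json
                sh 'npm run api-tests'
            }
        }
    }
}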

Let’s ‘Save’ our changes. After we do so, we are navigated back to the ‘Dashboard’. Since we did not specify any triggers while creating the job, we will have to start builds manually.

Ok, let’s click on ‘Build Now’ to start our job.

Ohh No! The build failed. :(

Let’s try to find out why. If we hover over the red box which marks the failed stage, we should see a popup displaying the reason for the failure and a button to see the logs. Let’s click on the button to see the error that was logged.

So… the problem is pretty obvious: we made our GitHub repository ‘private’ and didn’t give Jenkins the username and password to access it.

This problem can be sorted out in two ways: either we make the GitHub repository ‘public’, or we edit our Jenkins pipeline so that Jenkins can authenticate against our GitHub repo. Let’s first try making the repository ‘public’, and later we will do the pipeline update as well.

To make the GitHub repository ‘public’, click on the ‘Settings’ tab on the repository’s page on GitHub, scroll down to the ‘Danger Zone’ section and then click on the ‘Make Public’ button; we might have to specify our GitHub account password again to authenticate the change we are making.

Now that our repository is ‘public’, let’s quickly move to Jenkins and click the ‘Build Now’ link for our pipeline job. The build errored out again. Phew… now what?

The Jenkins error popup doesn’t help much (actually it does, but we will work that out later in the tutorial). However, it does show that the shell script errored out. Let’s navigate to the path shown in the error and investigate.

At the path shown in the error message, on our local machine, we find our git repository’s contents. So one thing is positive here: Jenkins was able to fetch our git repository now that we made it ‘public’. Upon opening package.json, we realize we made a typo by specifying ‘newman run’ twice. Let’s quickly edit package.json in our local repository to rectify this mistake. After the change, our package.json should look similar to this:
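For illustration (the collection file name is a placeholder), the fix boils down to removing the duplicated command in the “scripts” entry:

Before:  "api-tests": "newman run newman run postman-CI-demo.postman_collection.json"
After:   "api-tests": "newman run postman-CI-demo.postman_collection.json"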

Now, we need to commit our changes so that our github repository is also updated. Let’s do that

>git status

This command lists the files which have changed and need to be committed and pushed to the GitHub repository.

The next step is to add, and then commit, the changes from our project folder into our local git repository:

>git add .

>git commit -m "Package.json typo corrected"

Now that we have committed our changes to our local git repository, we can ‘push’ them to the GitHub repository. However, suppose someone else was also working on the same file and had already pushed their changes; our local copy and the remote copy would then have diverged, and our ‘push’ would be rejected. To prevent this, it is always good practice to first take a ‘pull’ from the remote GitHub repository into our local git repository, merge, resolve conflicts (if any), and then ‘push’ to the remote GitHub repository. Although in this case I am the only user, to cement the process in our minds let’s first take a ‘pull’ and then we will ‘push’.
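In terms of commands, that pull-then-push sequence is simply:

>git pull origin master

>git push origin master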

We can confirm that our changes indeed made it to the GitHub repository by opening package.json there and verifying the command.

One more change that we would like to make is to update our Jenkins script to be more modular; we will keep each step of the script in its own stage. Also, please note that since I am using Windows 10, to execute shell commands from the Jenkins script I need to use ‘bat’ instead of ‘sh’. If it is absolutely necessary to use ‘sh’, a couple of solutions may be read here, here and here.

So, we have updated the Jenkins script to look like this:
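As a sketch (the stage names are illustrative), the updated script would look roughly like this:

pipeline {
    agent any
    stages {
        stage('Fetch code') {
            steps {
                git 'https://github.com/asheeshmisra/postman-CI-demo.git'
            }
        }
        stage('Install dependencies') {
            steps {
                // 'bat' because this Jenkins instance runs on Windows; use 'sh' on Linux or macOS
                bat 'npm install'
            }
        }
        stage('Run API tests') {
            steps {
                bat 'npm run api-tests'
            }
        }
    }
}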

Now, let’s re-run the Jenkins job, by clicking the ‘Build Now’ hyperlink, on Jenkins.

And… we encounter an error again. The jinx of ‘build failure’ is not breaking…

Upon carefully reading the shell script error, we realize that the file the Postman collection is supposed to upload is on our local computer and not in the remote GitHub repository, and that is why Jenkins cannot find the specified file.

To solve this problem, let’s copy the image file that we used in our Postman collection into the project folder (we may use any image file, as long as we give it the name used in the upload request of the Postman collection). After doing so, let’s quickly add, commit and push the file:
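For example (the image file name here is purely illustrative):

>git add imgur-upload.jpg

>git commit -m "Add image file used by the upload request"

>git push origin master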

Let’s quickly confirm that the image file has made it to the remote GitHub repository.

Two more changes that we need to make are to update the image file path in the Postman collection and to remove all white spaces from the collection’s file name (changing the collection’s name means we will also need to update package.json). Let’s do all of this directly in our GitHub repository. We will navigate to our ‘postman-CI-demo’ repository on GitHub and click on the Postman collection JSON file. The file opens in ‘view only’ mode; to open it in ‘edit’ mode we need to click the ‘pencil’ button at the top right corner of the file. First we will update the collection’s name and copy it. Then we will keep only the image’s name (filename.extension) as the ‘src’ value, scroll down to the ‘Commit’ section, specify an appropriate commit message (although it is optional) and click the ‘Commit’ button.
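Inside the collection JSON, the edit amounts to replacing the absolute local path in the upload request’s form data with just the file name, roughly like this (the key and file names are illustrative):

Before:  { "key": "image", "type": "file", "src": "/C:/Users/asheeshmisra/Desktop/imgur-upload.jpg" }
After:   { "key": "image", "type": "file", "src": "imgur-upload.jpg" }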

Similarly, we will edit the package.json file in the GitHub repository and paste the updated name of the Postman collection in place of the existing one.

Now that the image file is present in the remote GitHub repository and its path has been updated in the Postman collection, let’s re-run the Jenkins job once again. And… the build is successful. We have broken the jinx of ‘build failure’ at last, yay!

Let us now look at how to do the same thing when the GitHub repository is ‘private’. As mentioned above, we will have to update our Jenkins pipeline to accomplish this. We will start by creating a new file named ‘Jenkinsfile’ in our GitHub repository and pasting our Jenkins script into it.

So, on GitHub, we will navigate to our repository, click on the ‘Create new file’ button and specify ‘Jenkinsfile’ as its name. Please note that the filename is case sensitive.

Now, let’s move to our pipeline job on Jenkins and click on the ‘Configure’ hyperlink. On the ‘Configure’ screen, in the ‘Pipeline’ section, open the ‘Definition’ dropdown and select ‘Pipeline script from SCM’ instead of ‘Pipeline script’, and then from the ‘SCM’ dropdown select ‘Git’.

We see that as soon as we select ‘Git’, a couple of new fields show up, like ‘Repository URL’, ‘Credentials’, etc. Let’s copy the repository URL from our GitHub account. To do so, on our repository’s page, click the ‘Clone or download’ button and then copy the URL displayed in the popup.

Let’s navigate back to Jenkins and paste the URL in the ‘Repository URL’ textbox under ‘Pipeline’ section.

To specify the credentials, we will click the ‘Add’ button and then ‘Jenkins’ to load the ‘Jenkins Credentials Provider’ modal popup. After specifying our GitHub credentials, we will click the ‘Add’ button.

Jenkins closes the credentials provider modal popup, and back on the ‘Configure’ screen for our pipeline job we see that the ‘Credentials’ dropdown now contains the credentials we specified. Let’s select them and ‘Save’ the changes.

Before moving any further, recall that our GitHub repository is still ‘public’, so we will make it ‘private’ now. The process is exactly the same as the one we followed above while making it ‘public’.

Now, let’s re-run our build by clicking the ‘Build Now’ hyperlink and… Yay! We got this right on the first attempt, awesome!

We can see the console output for our build by clicking on the permalink for the build, which is visible just below the ‘Stage View’, and then clicking on the ‘Console Output’ hyperlink. We will see the tests we created in Postman and their output here.

So, we are done with the creation of a Jenkins pipeline which runs our Postman collection and can be executed whenever we want. Let’s tweak things further by adding reporters to newman.

Configure reporters on newman, to show results

The official documentation for newman offers comprehensive information on the reporters that can be used with it. We will use the ‘cli’ reporter for our purpose (and will build on other reporters in a separate tutorial). We will need to update our package.json file to add the reporter.
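As a sketch (the collection file name is illustrative), the script entry in package.json with the reporter flag added would look like:

"api-tests": "newman run postman-CI-demo.postman_collection.json --reporters cli"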

Let’s do this locally and push the changes to the GitHub repository. We will follow the same process as earlier, i.e. make changes locally and commit them to the local git repository, pull from the remote GitHub repository, merge and resolve conflicts (if any), and then push back to the remote GitHub repository. So, let’s open package.json in VS Code and update the newman run command. Using the integrated terminal of VS Code, let’s commit our changes to the local git repository and then take a pull from the remote GitHub repository. Upon doing so, we see there is a conflict in the package.json file; remember that we updated package.json directly in the GitHub repository (we changed the name of the collection).

VS Code shows us the change we made locally as the ‘current change’ and the one made in the remote GitHub repository as the ‘incoming change’. It also shows the actions we can take, like ‘Accept Current Change’ and ‘Accept Incoming Change’. In this particular instance we want to keep parts of both changes (the updated name of the collection and the addition of the reporter), but clicking ‘Accept Both Changes’ makes VS Code keep both lines, which is not quite what we want. So instead we will copy the updated name of the Postman collection into the line with the reporter, delete the line without the reporter and save the changes. For this change to be reflected on GitHub, all we need to do now is push.

Before we do that, let’s consider one more thing. Back on Jenkins, the job we created has to be initiated manually (remember clicking on the ‘Build Now’ hyperlink?). This is because we did not add any trigger to the job. We can update our pipeline to trigger a build whenever there is a change to our repository on GitHub. To do so, navigate to the Jenkins dashboard and then click on the ‘Manage Jenkins’ hyperlink.

On the ‘Manage Jenkins’ UI, click on ‘Configure System’. On the ‘Configure’ screen, scroll down to ‘GitHub’, click on ‘Advanced’, and then tick the ‘Specify another hook URL for GitHub configuration’ checkbox. Copy the URL that shows up and untick the checkbox again.

Now we will move to our GitHub repository and click on the ‘Settings’ tab. Then, under ‘Options’, we will click on ‘Webhooks’. As the Webhooks UI itself reads, webhooks allow external services to be notified when certain events happen: when the specified events happen, GitHub sends a POST request to each of the URLs we provide.

Let’s click on the ‘Add webhook’ button to add a new webhook. GitHub will ask for our credentials; upon specifying the correct password, we should be allowed to progress.

In the ‘Payload URL’ textbox, paste the URL we copied from Jenkins. Leave the other settings as they are. Notice that, by default, the event that triggers this webhook is the ‘push’ event. So, whenever we push to our remote GitHub repository, this webhook will send a POST request to the payload URL, which is actually our Jenkins hook URL.

Now, to make Jenkins trigger a new build, we need to make one change to our job. Navigate to the job on Jenkins and click on the ‘Configure’ hyperlink. Under the ‘General’ tab, check the ‘GitHub project’ option and specify the URL of our GitHub repository. Also check the ‘GitHub hook trigger for GITScm polling’ checkbox under the Build Triggers section.

Now, if everything we have set up (the GitHub webhook and the job settings) is correct, then on a ‘push’ to our GitHub repository the Jenkins job should run. Let’s try that… we wait patiently, but nothing happens.

After googling for over an hour, we find that the payload URL (the one we specified in our webhook on GitHub) points to ‘localhost’, which is not reachable from the outside world. A simple solution to this problem is to expose ‘localhost:8080’ through a tunnel, and one way to do this is ngrok. Let’s download ngrok for Windows from here and extract its .exe.

Now, let’s double-click the ngrok executable and then, at its command prompt, type the following command and press Enter:

>ngrok.exe http localhost:8080

In the above command we have provided the tunnel protocol (http) and the local address to expose (host and port, i.e. localhost:8080).

ngrok provides a randomly generated public URL that forwards to that local address. Copy that URL, use it to replace the ‘http://localhost:8080’ part of the payload URL we specified in the webhook on GitHub, and click the ‘Update’ button.
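For example, if ngrok reports a forwarding URL such as https://abc123.ngrok.io (the subdomain is random; this one is just an illustration), the payload URL changes roughly like this (assuming the hook URL copied from Jenkins ends in the GitHub plugin’s default /github-webhook/ path):

Before:  http://localhost:8080/github-webhook/
After:   https://abc123.ngrok.io/github-webhook/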

Now, let’s make some changes to our code locally and ‘push’ again. And… this time Jenkins ran our build, finally. High five!

Concluding….

Firstly, I would like to apologize for making this such a long, long, long tutorial. However, given the time and energy spent in writing it, I hope it serves its purpose.

We have come a long way in this article: from a Postman collection to git, to GitHub, to Jenkins, and then to the integration between GitHub and Jenkins. We also saw how continuous integration helps us run our tests on the latest build as soon as new code is pushed to the repository. We can configure other triggers in Jenkins as well. Automatic build execution saves a lot of time.

Food for thought…

We did see a working example of CI with Postman, GitHub and Jenkins. However, there is a big limitation here. ngrok helped expose our local Jenkins instance, running on localhost, to the outside world (our remote GitHub repository) by creating a tunnel for ‘localhost:8080’ via a randomly generated URL. If for any reason the ngrok executable (which is still running) is closed or restarted, our local Jenkins again becomes unreachable from the outside world, because that random URL is lost; we would need to generate a new one and save it in our GitHub webhook. This is a problem that needs to be addressed, and we will do so in a future tutorial. Until then, keep learning, keep testing.

If you have any suggestions on this tutorial, or for me in general, please do leave them in the comments. I will surely try to improve. Cheers!
