Expedia Group Technology — Software

End-to-End Testing Without a UI

Finding appropriate tools for end-to-end API testing

Computer screen showing the VSCode developer tool
Photo by Mohammad Rahmani on Unsplash

Coming from a customer-facing team responsible for over a dozen different applications, our team relied heavily on end-to-end testing to ensure that everything hung together correctly before automatically deploying our changes to production. We used a combination of Java (JUnit), Selenium and BrowserStack to simulate a real user navigating through our applications in the test environment.

Recently the team moved to a domain that didn’t have any user interfaces (at least not yet) and our application is essentially an orchestration layer to a multitude of downstream systems. To make matters more complicated, the downstreams are in a constant state of flux so assumptions are quickly out of date. The question we asked ourselves was: How do we get a similar level of confidence, without spending a lot of time and energy on solving this problem?

Terminology definitions

Before I go any further: there are many interpretations of the terminology used here. I don’t want to derail the topic by debating which is “right” or “wrong”, but for a common understanding, these are the definitions I use.

Unit Test: The lowest level of testing, usually of a single class. Collaborating classes can be mocked or real depending on your personal preference, except when the class communicates with a resource (such as a filesystem, HTTP endpoint or database), in which case you would typically use a mock to abstract some of the complexity away.

Integration Test: Tests multiple classes together; however, resources are usually stubbed, e.g. WireMock for HTTP calls, in-memory databases, etc. A good integration test exercises the app running in a production-like way, but with all the dependencies under your control. Of particular relevance to this post: if you are calling a downstream system, you stub out the response you expect to receive. This makes your tests much more resilient, but it also relies on you implementing the stubs accurately and keeping them up to date.
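To make this concrete, a stubbed downstream response in WireMock’s standalone JSON mapping format might look like the following (the endpoint and payload here are invented for illustration, not taken from our actual system):

```json
{
  "request": {
    "method": "GET",
    "urlPath": "/v1beta/groups/some-group"
  },
  "response": {
    "status": 200,
    "headers": { "Content-Type": "application/json" },
    "jsonBody": { "id": "some-group", "name": "Some Group" }
  }
}
```

The integration test then asserts against this canned response, which is fast and deterministic, but only as correct as the stub itself.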

End-to-End (E2E) Test: Looks more like manual testing, but in an automated fashion. It tries to simulate user behaviour with your application deployed, talking to other real deployed applications (ideally in a test region; otherwise you are entering the realm of synthetic monitoring). It also helps validate all the different subsystems and layers of your application as accurately as possible. There are generally two approaches, horizontal and vertical; in our case we typically mean horizontal, where you follow a user’s journey from start to finish, rather than vertical, where you focus on one system and assert the data in its different layers.

The shape of a team’s testing preferences is really interesting. I won’t get sidetracked by it here, but if you are interested in views beyond the typical testing pyramid, I recommend reading this blog post.

A general guide for the return on investment of the different forms of testing with regards to testing JavaScript applications.

The thought process

By this point we had been working on the project for about a week and had built a solid set of integration tests to ensure what we were building was working. We had also expanded our understanding of the system as a whole. It was tempting to try to run the exact same tests, but instead of pointing at a local WireMock instance, point them at the real downstream services deployed in the test environment. We decided not to go down this route because the integration tests relied on complete control of the environment.

This isn’t an insurmountable problem; we could have built services to manage the data in these downstream systems and recreate the pristine state the tests required, or alternatively relaxed the assertions to be less strict. It would have been much harder, though, to simulate response timeouts or server exceptions. Either way, it seemed like a lot of work to get the tests working in both configurations, and to maintain them into the future.

Our next reflex was to refer to what we knew, and that was to start another Java JUnit project, but instead of using Selenium to drive the behaviour, use a REST client. This is definitely what we knew best, but when we started thinking about the overhead in maintaining this framework, including testing the framework itself, it didn’t really seem worth it.

One other goal of this endeavour that I hadn’t mentioned yet was that we wanted the output of the E2E tests to be easily consumable by everyone, not necessarily technical. A JUnit report is not very nice to look at and gives little insight into what the test cases are doing under the covers.

This led us to look at Cucumber and other BDD frameworks, where we could easily describe the situation and expected behaviour. While these reports were more aligned with what we were thinking, the same problems as the JUnit approach would remain. We could keep the step definitions very generic to reduce the overhead, but some overhead remains, and the less specific the steps, the harder they are for somebody non-technical to understand.
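As a sketch of what this could have looked like, here is a hypothetical Cucumber feature for one of our scenarios (the wording is illustrative, not from an actual suite):

```gherkin
Feature: Group management

  Scenario: Creating a new group
    Given the downstream systems are available in the test environment
    When I create a group named "E2E Test Random Group"
    Then the response status is 201
    And the response body contains the id and name of the new group
```

Each of these steps still needs a step definition written and maintained in code, which is where the overhead comes from.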

Following this rabbit hole further, we discovered Contract Testing. I didn’t cover it earlier as it isn’t as common as the others, but it sits somewhere between E2E and integration tests. It allows systems to develop their APIs independently while publishing a “contract”, a definition of their API, to a central repository, where consuming applications can download it and validate their assumptions. Ultimately I think an approach like this would be fit for purpose for our needs, but it would require the downstream teams to publish their contracts and to learn and configure a framework like Pact, so it wasn’t a fit for our immediate needs.

Finally, we discovered that for previous demonstrations in this domain, Postman had been used to showcase different capabilities that teams had been working on. I hadn’t used the tool in many years, not since before Swagger pages became mainstream and replaced it as the primary tool for exploring and understanding an API. However, we wanted to leverage something that people were already interested in, and we found that Postman had evolved significantly over the years and now includes a sophisticated test framework.

Implementing the framework

The first thing we did was to create a “Collection” as it is known in Postman, essentially a simple JSON file that describes your tests. We added in our current functionality that we wanted to test in an E2E fashion, as well as future functionality that wasn’t yet implemented. We implemented a variety of different scenarios, such as updating a permanent record in the database (to make sure we haven’t broken serialization of the object with any changes) as well as creating new records and asserting that the result is as expected from downstream systems.

An image showing an example Postman collection
Example Postman collection

Once we were happy with the tests as they were, we exported it to our code-base under src/test/postman/collection.json. This way, when people make changes to the code, they can update the corresponding test in the same pull request.

An example test would look like the following:

{
  "name": "Create new",
  "request": {
    "description": "Create a new group.",
    "method": "POST",
    "body": {
      "raw": "{\"id\": \"{{randomGroupId}}\", \"name\": \"E2E Test Random Group\"}",
      "options": {"raw": {"language": "json"}}
    },
    "url": {
      "raw": "https://some-url/v1beta/groups/{{randomGroupId}}"
    }
  },
  "event": [
    {
      "listen": "prerequest",
      "script": {
        "exec": [
          "var uuid = require('uuid');",
          "pm.variables.set('randomGroupId', uuid.v4());"
        ]
      }
    },
    {
      "listen": "test",
      "script": {
        "exec": [
          "pm.test(\"Status code is 201 and has JSON response body\", function() {",
          "    pm.response.to.have.status(201);",
          "    pm.response.to.be.withBody;",
          "    pm.response.to.be.json;",
          "});",
          "pm.test(\"Response body is correct\", function() {",
          "    const responseJson = pm.response.json();",
          "    pm.expect(responseJson.id).to.eql(pm.variables.get(\"randomGroupId\"));",
          "    pm.expect(responseJson.name).to.eql(\"E2E Test Random Group\");",
          "});"
        ]
      }
    }
  ]
}

As we wanted to automate the process, the first step was to identify a command line runner to invoke the Postman collection. We found a tool called Newman, which does exactly this. We created a new build on Jenkins and configured it to pull the project from GitHub, install Newman and execute the tests. This ended up being just two lines:

npm install newman
$(npm bin)/newman run src/test/postman/collection.json -k

This created console output of the different scenarios and their outcomes, which was OK, but wasn’t exactly what we were looking for.

❏ Groups
  ↳ Update existing
    PUT https://some-url/v1beta/groups/some-group [200 OK, 378B, 33ms]
    ✓ Status code is 200 and has JSON response body
    ✓ Response body is correct

Further investigation showed that Newman supports the concept of “reporters”, one of which creates HTML output using Handlebars as the templating engine. Our team had used Handlebars extensively, so we had no difficulty transforming the boring report you can see here into something a bit nicer and more functional.
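For a flavour of what a custom template can look like, here is a heavily simplified sketch. The variable names (aggregations, parent.name and so on) are my recollection of the reporter’s data model, so treat them as assumptions; start from the default template shipped with newman-reporter-html and adapt it rather than trusting these field names:

```handlebars
<html>
  <body>
    <h1>E2E Test Report</h1>
    {{!-- One section per folder in the collection (field names illustrative) --}}
    {{#each aggregations}}
      <section>
        <h2>{{parent.name}}</h2>
        {{#each executions}}
          <div>{{item.name}}</div>
        {{/each}}
      </section>
    {{/each}}
  </body>
</html>
```

The real template grows from here with the styling, icons and collapsible panels described below.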

To make the report both familiar and awesome, we leveraged an internal Expedia Group™ design toolkit to handle the styling and functionality of the page. This included the interactive buttons, icons, dialogs and collapsible panels. I think the outcome is a clean but helpful picture of the tests that can be understood at multiple levels.

We checked our custom Handlebars template, CSS and JavaScript files into our project and modified the Jenkins job two-liner to:

npm install newman newman-reporter-html
$(npm bin)/newman run src/test/postman/collection.json -k -r cli,html \
    --reporter-html-export src/test/postman/index.html \
    --reporter-html-template src/test/postman/template.hbs
A screenshot of the report created
Screenshot of the report we created

We also added a Post-build Action named “Publish HTML reports”. Setting the “HTML directory to archive” to src/test/postman meant that after every build, the latest results would be uploaded to a bookmarkable static URL.

Continuous integration

Now that our job was working as expected, we returned to the project to configure our Spinnaker pipeline to execute it after every successful deployment to the test environment. This was easily done by editing .cicd/spinnaker/pipeline.json and adding the following to the stages array:

{
  "name": "runE2ETests",
  "type": "jenkins",
  "master": "jenkins-instance",
  "job": "your-e2e-job",
  "parameters": {
    "TRIGGER_REF": "${trigger.parameters.version}"
  }
}
Postman defines the requests, newman executes it on the command line, jenkins executes the command, spinnaker orchestrates the process, handlebars formats the results to html which is viewed in the browser.
High-level flow

Finally, we updated the README.md to contain instructions about how to run and maintain the test suite, and a link to the E2E build that shows a badge with the current status.

[![Build Status](https://jenkins-instance/buildStatus/icon?job=your-e2e-job)](https://jenkins-instance/job/your-e2e-job/)

Lessons learnt

It took about half a day to implement the core tests and integrate them with the build. Another half day was spent making the report look nice and adding features (such as viewing the request and response payloads). In the end I think this was the right choice for us at this time; for another team in a different situation it might not be, but I wanted to share our experience in case it helps others.

  • Where possible, use familiar tools and frameworks rather than introducing novel ones.
  • As tempting as it is, don’t write your own code when tools already exist.
  • Start with the E2E framework so you can write tests as you go.
    We had to spend some time backporting original functionality to the tests because we deferred creating the E2E tests until a week after we started.
  • Using a design toolkit makes building things a breeze.
  • Write a blog post so that others in your organisation can learn from your mistakes and take advantage of your wins.


