Test Automation as a Tool for Exploration & Discovery

Emir Abdülkadir İnanç
adessoTurkey
Sep 8, 2023

Introduction

Lately, I’ve been following Michael Bolton and James Bach on LinkedIn, reading their comments on some of the content that QA professionals post there.

Although I cannot find the exact comment to quote Michael Bolton, I recall that he wrote in a comment that implementing test automation can itself serve as an agent of discovery. In the attempt to automate tests, there is also the natural movement to gain a deeper understanding of the system under test.*

As a tester on a software development team, I’ve been implementing load tests with Gatling to measure the performance of a web application that one of our customers uses. Going through this process, I was able to connect my experience with the above idea.

To introduce you to the process, I will first guide you through the team’s approach to the load test design and the functionality of the application under test.

Load test design

The application under test is an ALM (Application Lifecycle Management) tool that facilitates the organization and management of all project-related activities such as requirements engineering, testing, formal reviews, time tracking, etc.

The first step towards implementing a load test was to conceive prototypical user identities called archetypes. Each archetype consists of a set of actions to be performed on the ALM tool that summarize and thus represent an actual person following a given role in the organization. For instance, a project administrator would create a project, add new users, add new user groups, define their roles and rights; a lead engineer would create new project items, perform reviews, and so on.

The second step would then be to translate the sets of actions summarizing these archetypes into the technical, domain-specific language of the load test tool, so that the tool knows what to do. The tester can then ideate on the experimental design of the load test to determine the patterns in which the virtual users based on these archetypes are injected into the system during the test.
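
To make these two steps concrete, here is a minimal sketch in the Gatling Scala DSL. The base URL, endpoints, and user counts are invented for illustration; only the overall shape (archetypes as scenarios, injection profiles as the experimental design) reflects the approach described above.

import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class ArchetypeSimulation extends Simulation {

  // Hypothetical base URL; the real host is environment-specific.
  val httpProtocol = http.baseUrl("https://alm.example.com")

  // Each archetype becomes a scenario: a named chain of the actions
  // that summarize the corresponding role in the organization.
  val projectAdmin = scenario("projectAdmin")
    .exec(http("createProject").post("/project/create") // hypothetical endpoint
      .formParam("name", "demo-project"))
    .pause(2)

  val leadEngineer = scenario("leadEngineer")
    .exec(http("openReviewHub").get("/review/hub")) // hypothetical endpoint
    .pause(2)

  // The injection profiles encode the experimental design: how many
  // virtual users of each archetype enter the system, and in what pattern.
  setUp(
    projectAdmin.inject(atOnceUsers(2)),
    leadEngineer.inject(rampUsers(20).during(5.minutes))
  ).protocols(httpProtocol)
}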

Functionality

One of the behaviors to implement for the lead engineer archetype is creating a review of items within the project. Review creation follows simple steps:

1. Open the Review Hub page, where the user can create reviews. The user clicks the “+” button to initiate review creation.

2. Select the project and the item type that the user wants to be reviewed.

3. Preview selection.

4. Set up reviewers, viewers, moderators, and some other rules and parameters regarding the review.

5. Preview the review configuration one last time and create the review.

6. View the freshly created review.

Implementation Details

The ALM functionality is now clear. Since the process centers on implementing a load test, it’s important to take a look at the HTTP traffic, which can be done via the browser’s dev tools.

On first analysis, the actions that the user takes to move the review creation process forward correspond to these HTTP requests. It can be observed that:

  • Once the Review Hub page is opened, there is a GET request to retrieve the resource.
  • Once the user clicks the “+” button, the corresponding iframe opens to capture the user’s preferences for review creation. There are then some redirects, but copying the URL of the createReview.spr?_iframe= request and pasting it into the browser will display the screen in the second step above.
  • As the user fills out the forms and clicks the action buttons on the bottom right, the corresponding POST requests can also be viewed.
  • A special feature of these POST requests and their subsequent redirections is a query parameter named execution that takes a UUID as input.
  • Besides this query parameter, the POST requests transmit data in x-www-form-urlencoded format, which looks like this when sent to the server:

_csrf=e47f4d5a-7647-4d4b-aadc-bb60b4753303&reviewName=test-1-review&reviewType=trackerReview&projectIds=4&_projectIds=1&trackerIds=4391&_trackerIds=1&_releaseIds=1&cbqlId=&_eventId_next=Next+%28Preview+Selection%29

Based on this analysis, it seemed to make sense to use a random UUID for the execution request parameter when implementing the review creation flow in the Gatling Scala DSL:

   
exec(
  http("createReviewTrigger")
    .get("/review/create/createReview.spr?_iframe=")
    .headers(headers_1)
    .header("X-CSRF-Token", "#{csrfToken}")
)
.exec(
  http("createReviewItemDetails")
    // executionUUID is a randomly generated UUID set earlier in the session
    .post("/review/create/createReview.spr?execution=#{executionUUID}")
    .headers(headers_8)
    .formParam("_csrf", "#{csrfToken}")
    .formParam("reviewName", "#{name}")
    .formParam("reviewType", "trackerReview")
    .formParam("projectIds", "#{createdProjectId}")
    .formParam("_projectIds", "1")
    .formParam("trackerIds", "#{crqId}")
    .formParam("_trackerIds", "1")
    .formParam("_releaseIds", "1")
    .formParam("cbqlId", "")
    .formParam("_eventId_next", "Next (Preview Selection)")
)
.exec(
  http("createReviewPreview")
    .post("/review/create/createReview.spr?execution=#{executionUUID}")
    .headers(headers_8)
    .formParam("_csrf", "#{csrfToken}")
    .formParam("_eventId_next", "Next (Set Up Review)")
)
.exec(
  http("createReviewSetUp")
    .post("/review/create/createReview.spr?execution=#{executionUUID}")
    .headers(headers_8)
    .formParam("_csrf", "#{csrfToken}")
    .formParam("reviewers", "1-1")
    .formParam("moderators", "1-1")
    .formParam("viewers", "13-1,13-2,13-3,13-4,13-6,13-8")
    .formParam("deadline", "")
    .formParam("_notifyReviewers", "on")
    .formParam("_notifyModerators", "on")
    .formParam("_notifyOnItemUpdate", "on")
    .formParam("_requiresSignature", "on")
    .formParam("_requiresSignatureFromReviewers", "on")
    .formParam("description", "")
    .formParam("editorMode_description-editor", "wysiwyg")
    .formParam("_eventId_next", "Next (Done)")
)
.exec(
  http("createReviewFinish")
    .post("/review/create/createReview.spr?execution=#{executionUUID}")
    .headers(headers_8)
    .formParam("_csrf", "#{csrfToken}")
    .formParam("_eventId_next", "Create Review")
)

This implementation didn’t work. As I thought it through, it became clear why: the UUID corresponding to the execution query parameter was not just some random UUID. On inspection, it can be seen that this parameter is generated on the server and resides in the HTML response to the first GET request to the endpoint /cb/review/create/createReview.spr?_iframe=

<form id="createReviewForm"
      action="/cb/review/create/createReview.spr?execution=e228147b4-31bf-4b16-8894-46c73b93279as1"
      method="POST" autocomplete="off">
...

It is then possible to capture this value with a regex check and save it as a session variable in Gatling, so that it can be used as the query parameter in subsequent requests:

exec(
  http("createReviewTrigger")
    .get("/review/create/createReview.spr?_iframe=")
    .headers(headers_1)
    .header("X-CSRF-Token", "#{csrfToken}")
    // capture the server-generated execution key from the form action
    .check(regex("""action="/cb/review/create/createReview\.spr\?execution=([^"]+)"""").saveAs("executionId"))
)
.exec(
  http("createReviewItemDetails")
    .post("/review/create/createReview.spr?execution=#{executionId}")
    .headers(headers_8)
    .formParam("_csrf", "#{csrfToken}")
    .formParam("reviewName", "#{name}")
    .formParam("reviewType", "trackerReview")
    .formParam("projectIds", "#{createdProjectId}")
    .formParam("_projectIds", "1")
    .formParam("trackerIds", "#{crqId}")
    .formParam("_trackerIds", "1")
    .formParam("_releaseIds", "1")
    .formParam("cbqlId", "")
    .formParam("_eventId_next", "Next (Preview Selection)")
)
.exec(
  http("createReviewPreview")
    .post("/review/create/createReview.spr?execution=#{executionId}")
    .headers(headers_8)
    .formParam("_csrf", "#{csrfToken}")
    .formParam("_eventId_next", "Next (Set Up Review)")
)
.exec(
  http("createReviewSetUp")
    .post("/review/create/createReview.spr?execution=#{executionId}")
    .headers(headers_8)
    .formParam("_csrf", "#{csrfToken}")
    .formParam("reviewers", "1-1")
    .formParam("moderators", "1-1")
    .formParam("viewers", "13-1,13-2,13-3,13-4,13-6,13-8")
    .formParam("deadline", "")
    .formParam("_notifyReviewers", "on")
    .formParam("_notifyModerators", "on")
    .formParam("_notifyOnItemUpdate", "on")
    .formParam("_requiresSignature", "on")
    .formParam("_requiresSignatureFromReviewers", "on")
    .formParam("description", "")
    .formParam("editorMode_description-editor", "wysiwyg")
    .formParam("_eventId_next", "Next (Done)")
)
.exec(
  http("createReviewFinish")
    .post("/review/create/createReview.spr?execution=#{executionId}")
    .headers(headers_8)
    .formParam("_csrf", "#{csrfToken}")
    .formParam("_eventId_next", "Create Review")
)

Surprise

This implementation didn’t work either. The whole sequence of actions finished without any HTTP errors. Yet after providing all the right form parameters to all the subsequent requests, it became obvious that they kept returning the same HTML response as in step 3.

I didn’t know what to expect. Something wasn’t working properly, and what it was eluded me. Since I had captured the right UUID and used it as a parameter, everything should have worked smoothly, but it didn’t. I was about to give up when I decided to talk one final time with my colleague Yordanmm from adesso Bulgaria, who has professional experience implementing Gatling load tests. He kept asking me simple questions to understand the situation, and I suddenly noticed a crucial detail that I had missed before:

Although the UUIDs appeared exactly the same, the number at the end was being incremented by one after each step. This was something I would never have expected, since UUIDs don’t “increase” consecutively the way these values did. I also noticed that the entire execution URI to be used in the next request, incremented suffix included, was already provided in the Location header of each response, so I didn’t need to capture the UUIDs separately.

I then adjusted the code to reflect this pattern:

exec(
  http("createReviewTrigger")
    .get("/review/create/createReview.spr?_iframe=")
    .headers(headers_1)
    .header("X-CSRF-Token", "#{csrfToken}")
    // keep the 302 response so its Location header can be captured
    .check(header("Location").saveAs("executionURI"), status.is(302))
    .disableFollowRedirect
)
.exec(
  http("createReviewItemDetails")
    .post("#{executionURI}")
    .headers(headers_8)
    .formParam("_csrf", "#{csrfToken}")
    .formParam("reviewName", "#{name}")
    .formParam("reviewType", "trackerReview")
    .formParam("projectIds", "#{createdProjectId}")
    .formParam("_projectIds", "1")
    .formParam("trackerIds", "#{crqId}")
    .formParam("_trackerIds", "1")
    .formParam("_releaseIds", "1")
    .formParam("cbqlId", "")
    .formParam("_eventId_next", "Next (Preview Selection)")
    .check(header("Location").saveAs("executionURI2"), status.is(302))
    .disableFollowRedirect
)
.exec(
  http("createReviewPreview")
    .post("#{executionURI2}")
    .headers(headers_8)
    .formParam("_csrf", "#{csrfToken}")
    .formParam("_eventId_next", "Next (Set Up Review)")
    .check(header("Location").saveAs("executionURI3"), status.is(302))
    .disableFollowRedirect
)
.exec(
  http("createReviewSetUp")
    .post("#{executionURI3}")
    .headers(headers_8)
    .formParam("_csrf", "#{csrfToken}")
    .formParam("reviewers", "1-1")
    .formParam("moderators", "1-1")
    .formParam("viewers", "13-1,13-2,13-3,13-4,13-6,13-8")
    .formParam("deadline", "")
    .formParam("_notifyReviewers", "on")
    .formParam("_notifyModerators", "on")
    .formParam("_notifyOnItemUpdate", "on")
    .formParam("_requiresSignature", "on")
    .formParam("_requiresSignatureFromReviewers", "on")
    .formParam("description", "")
    .formParam("editorMode_description-editor", "wysiwyg")
    .formParam("_eventId_next", "Next (Done)")
    .check(header("Location").saveAs("executionURI4"), status.is(302))
    .disableFollowRedirect
)
.exec(
  http("createReviewFinish")
    .post("#{executionURI4}")
    .headers(headers_8)
    .formParam("_csrf", "#{csrfToken}")
    .formParam("_eventId_next", "Create Review")
    // capture the new review id from the response body
    .check(regex("""\/regex\/review\/(\d+)""").saveAs("reviewId"))
    .check(status.is(200))
)

As a result, I was able to successfully create a review using Gatling.
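
In hindsight, one reason the earlier failures stayed hidden is that the flow was judged only by HTTP status codes. In the versions where redirects were still followed and each POST returned a full HTML page, a response-body check would have failed the step fast instead of letting the run pass silently. A minimal sketch, assuming a hypothetical marker string that appears only on the expected next screen:

exec(
  http("createReviewItemDetails")
    .post("/review/create/createReview.spr?execution=#{executionId}")
    .headers(headers_8)
    .formParam("_csrf", "#{csrfToken}")
    .formParam("_eventId_next", "Next (Preview Selection)")
    // "Preview Selection" is an assumed marker; any text unique to the
    // target screen would serve the same purpose.
    .check(substring("Preview Selection").exists)
)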

Conclusion & Remarks

One of the most important contributors to this journey was the method of dialogical, or Socratic, questioning. This method can be applied alone as well as in pairs. As a solo activity, it is important to clarify the meaning of concepts as much as possible and to validate those clarifications systematically. When working in pairs, as my colleague Yordanmm did so well, asking simple but smart questions to sort out assumptions, misconceptions, and inculcated beliefs is crucial. Why inculcated beliefs? Because implementing a test tool can be blinding: the tester’s focus can easily shift from genuinely inquiring into the product to merely operating the tool. Moreover, keeping a keen eye on factual elements like the HTTP traffic recorded in the browser’s dev tools, logs, and so on is essential, as these constitute the sources of truth against which testers can check their understanding.

Notes

*Since I cannot find the piece that Bolton wrote, I cannot directly claim that this is what he meant. This is the version of the idea that I understood at the time I read it.
