A good principle to follow in automated tests, particularly UI tests, is to detect a problem and fail the test as early as possible, while providing an error message that makes the root cause easy to identify.
Let’s assume we have the following test case:
Let’s imagine this test failing at step 3 (Click on the first search result) with an error message like:

Cannot find element “searchResult”

Without debugging the test execution, can we quickly tell what the root cause is?
Not really. It could be a problem on the search results page, for example, the search results not being rendered (or being empty). However, the problem could also be that the search results page never loaded in the first place and we are still on the homepage. In other words, our step 3 is looking for a UI element on the wrong screen. …
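One way to fail earlier with a clearer message is to assert which screen the test is on before interacting with any element on it. A minimal sketch of the idea in plain Java — all names here (Screen, ScreenAssert, assertOnScreen) are hypothetical illustrations, not from the article; in a real Appium or Selenium test, isDisplayed would check for a unique element of that screen:

```java
public class ScreenAssert {

    // Stand-in for whatever signal tells us which screen is currently shown,
    // e.g. a unique element id per screen in a real Appium/Selenium test.
    public interface Screen {
        String name();
        boolean isDisplayed();
    }

    // Fails immediately with a message naming the expected screen,
    // instead of a generic "cannot find element" failure later on.
    public static void assertOnScreen(Screen expected) {
        if (!expected.isDisplayed()) {
            throw new AssertionError(
                "Expected to be on screen '" + expected.name()
                + "' but it is not displayed. "
                + "The previous step may not have navigated correctly.");
        }
    }

    public static void main(String[] args) {
        Screen searchResults = new Screen() {
            public String name() { return "SearchResults"; }
            public boolean isDisplayed() { return false; } // simulate wrong screen
        };
        try {
            assertOnScreen(searchResults);
        } catch (AssertionError e) {
            System.out.println(e.getMessage());
        }
    }
}
```

With a check like this at the start of step 3, the failure message points straight at the navigation problem rather than at a missing element.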
The most common use case for visual testing is regression testing against baseline images. However, there are other aspects of visual testing worth discussing as well. We will cover template matching (using OpenCV), layout testing (using Galen), and OCR (using Tesseract) and show how to seamlessly integrate these tools into existing Appium and Selenium tests. We use Java (and the Java wrappers for OpenCV and Tesseract) but similar solutions can be achieved with other tech stacks.
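The core idea behind template matching can be shown without OpenCV: slide a small template over a larger image and score every position. In a real test, OpenCV (e.g. its matchTemplate function) does this efficiently, with proper normalization, over actual screenshots; the naive pure-Java sketch below only illustrates the concept on tiny grayscale arrays, using the sum of squared differences as the score:

```java
public class TemplateMatchDemo {

    // Naive template matching: slide the template over the image and
    // score each position with the sum of squared differences (SSD).
    // Returns {row, col} of the best (lowest-SSD) match.
    public static int[] bestMatch(int[][] image, int[][] template) {
        int bestR = 0, bestC = 0;
        long bestScore = Long.MAX_VALUE;
        int th = template.length, tw = template[0].length;
        for (int r = 0; r + th <= image.length; r++) {
            for (int c = 0; c + tw <= image[0].length; c++) {
                long score = 0;
                for (int i = 0; i < th; i++) {
                    for (int j = 0; j < tw; j++) {
                        long d = image[r + i][c + j] - template[i][j];
                        score += d * d;
                    }
                }
                if (score < bestScore) {
                    bestScore = score;
                    bestR = r;
                    bestC = c;
                }
            }
        }
        return new int[] { bestR, bestC };
    }

    public static void main(String[] args) {
        int[][] image = {
            { 0, 0, 0, 0 },
            { 0, 9, 8, 0 },
            { 0, 7, 6, 0 },
            { 0, 0, 0, 0 }
        };
        int[][] template = { { 9, 8 }, { 7, 6 } };
        int[] pos = bestMatch(image, template);
        System.out.println(pos[0] + "," + pos[1]); // prints 1,1
    }
}
```

In a UI test, the same idea lets us locate an icon or button inside a screenshot even when it has no accessible locator.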
This is a companion article for a lightning talk given at Taqelah in Singapore in September 2020 and again (in shorter form) during the Selenium Conference 2020. …
Ideally, automated UI tests never fail. Green builds, 100% success rate, happy engineers. In reality, of course, this is not always the case. We would even question how useful tests are that never fail.
In case of a failure, the question becomes how fast we can figure out why a particular test failed. Which step of the test failed? Which user action is affected, which UI element isn’t where it’s supposed to be, etc.? Did the test reveal a bug (hopefully) or is there a problem in the test script? And, probably most importantly: which team or engineer needs to take action to fix it as soon as possible? …
During a recent code review, a discussion arose about whether we should sanitise arguments of JQL queries. I was quite surprised to learn about the JIRA community’s stance on this. I understand the points raised there and generally agree with most of them. However, I wonder whether statements like
JQL is not SQL, and you can’t inject anything dangerous into it.
might create a false sense of security. So, let’s have a closer look.
JQL is a query language to search for tickets on a JIRA instance. Unlike SQL, it doesn’t allow modifying any data. …
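To see why sanitising user input still matters even in a read-only query language, consider what happens when a value is interpolated into a quoted JQL string. A minimal sketch of escaping quotes and backslashes before interpolation — the method names here are ours for illustration, not a JIRA API, and the escaping rules are an assumption based on JQL's quoted-string syntax:

```java
public class JqlSanitizer {

    // Escapes a user-supplied value for use inside a quoted JQL string.
    // Backslashes and double quotes are escaped so the value cannot
    // terminate the string early and append extra JQL clauses.
    public static String escape(String value) {
        return value.replace("\\", "\\\\").replace("\"", "\\\"");
    }

    // Hypothetical helper building a JQL clause from user input.
    public static String summaryQuery(String userInput) {
        return "summary ~ \"" + escape(userInput) + "\"";
    }

    public static void main(String[] args) {
        // Without escaping, this input would break out of the quotes
        // and widen the query with an injected OR clause.
        String malicious = "foo\" OR reporter = admin";
        System.out.println(summaryQuery(malicious));
    }
}
```

An attacker cannot modify data this way, but an injected clause could still widen a query to return tickets the surrounding code never intended to expose.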
Spring is a widely used and established Java application framework. The Page Object Pattern is the de-facto standard for implementing UI tests in an object-oriented manner. In this article, we will learn how we can combine the two to simplify writing these tests. We will use an Appium test (executed with JUnit) as an example. For Selenium, the code remains mostly the same. For other test automation tools, the concept applies too.
First, let’s have a look at a test that doesn’t use the Page object pattern. We use a simple login test for the Carousell app. …
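For context, a page object wraps the locators and interactions of one screen behind a class, so the test itself reads as user intent. A minimal sketch of the pattern — the element ids and the small UiDriver interface below are hypothetical stand-ins for Appium's driver, not the article's actual code:

```java
public class LoginPageDemo {

    // Stand-in for the Appium/Selenium driver; names are illustrative only.
    public interface UiDriver {
        void type(String elementId, String text);
        void tap(String elementId);
    }

    // Page object: all locators and interactions for the login screen
    // live here, so tests never reference element ids directly.
    public static class LoginPage {
        private final UiDriver driver;

        public LoginPage(UiDriver driver) {
            this.driver = driver;
        }

        public void login(String username, String password) {
            driver.type("usernameField", username);
            driver.type("passwordField", password);
            driver.tap("loginButton");
        }
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        UiDriver fake = new UiDriver() {
            public void type(String id, String text) { log.append("type:" + id + ";"); }
            public void tap(String id) { log.append("tap:" + id + ";"); }
        };
        new LoginPage(fake).login("alice", "secret");
        System.out.println(log); // the sequence of interactions the page object performed
    }
}
```

If a locator changes, only the page object needs updating; every test that logs in stays untouched. Spring then takes over constructing and wiring these page objects, which is where the two patterns meet.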
Somewhere along our mobile test automation journey, most of us reach a point where we consider moving to a cloud platform. The advantages sound promising: a large variety of devices, no maintenance effort, and flexible scaling. The reality is slightly more complicated: we must always understand our exact requirements before purchasing a plan from any of these vendors. While in certain cases building a device lab on-premise can be an effective solution, cloud providers are an exciting and useful tool to take any mobile test automation to the next level.
In this article, we describe how to utilize AWS Device Farm’s public cloud in a way that we can easily integrate into a CI pipeline. For this example, we use Java, Cucumber, and Appium but the concepts apply to other technologies as well. …
We move fast at Carousell. We update and release brand-new versions of our Android and iOS app every week. This adds up to a whopping 50 versions each year, each targeting multiple marketplaces with different feature sets and country-specific customisations!
On top of that, there are also nightly releases and the occasional hotfix. Not to forget, we continuously deploy new features on our Web platform.
At Carousell, we take updates and feature roll-outs seriously. But with so many frequent updates, this raises the question: how do we test all this to ensure our users get the best possible experience?
Let’s rewind to early last year, and take a look at how we traditionally tested our releases. Friday, our release day, has always been a dreaded day for our test engineers. We would come to work and install last night’s release candidate, before verifying its stability by manually executing our “sanity tests”. …
Often, especially when running automated tests against mobile apps, it is important to verify that the correct version of an app is used. This article describes easy ways to query the version information from both Android and iOS app packages using either command line tools or Java libraries. Both approaches can be easily integrated into a build pipeline.
For Android, one approach to parsing meta information from an APK file is to use aapt like this:
aapt dump badging /path/test.apk
This prints a lot of information so we need to trim the output to what we are interested in:
aapt dump badging test.apk …