How testers win at browser games

Andreas Faes · delaware

Aug 29, 2022

A few years ago I won free tickets to a kids’ show by finishing in the top 50 of an online browser game. Small confession: I did not win because I am particularly good at playing games, but because — as a tester — I am good at investigating applications, finding bugs and exploiting them. This article is about the train of thought that goes into these things, but this time without winning any prizes (or rather, without preventing honest players from winning them). So consider this blog post a guide to cheating.

I want to use the memory game as our example to explore. You remember it: cards organised in rows and columns (often 4 x 4) face down between you and your opponent (often one of your parents), and you racking your brains trying to remember where the other matching card was placed. In line with tradition I play this with my youngest daughter, and while letting her win — that’s the kind of loving parent I am — I appreciated how ideal this case was for an experiment in test automation. So I presented it to the other testers in my team as a challenge.

Now why is it an ideal challenge? The problem domain can be explained in a few words — find matching pairs to solve the game — and the actions are simple: open a browser, go to a specific URL and click cards until the game is solved. But unlike the business processes we typically automate (where you fill in different values, press a few buttons and the same thing happens over and over again), the solution to this problem is non-deterministic: the order of the cards changes every time you load the game, so the solution changes with it, and we need a different approach to solve the problem.

So what did we do exactly? We divided the team into four groups and assigned each group one of the following technologies: Qualibrate, Selenium, Playwright and Cypress. The memory game we selected to automate was this one (which provided some hints, because the source was available as inspiration).

Below is a summary of all the solutions and strategies we came up with, ranked from coolest/funniest to simplest to implement.

1. The cheating method

Testers like cheating, it seems, since this was the method that every group (without fail!) implemented. The strategies came in two distinct flavours:

- The first tactic was reading the solution and then opening the cards in the correct order. We discovered that the solution to the game was hidden in the HTML (the CSS class names of the tiles revealed which icon was hidden beneath them) or in JavaScript (the solution is stored in a JS array which we could access). Once we had the order, we could simply open the pairs one by one until the game was solved perfectly, in 16 clicks. A perfect game. (A sketch of this tactic follows the list.)

- The second tactic was changing the solution through JavaScript injection (which is truly genius in its evilness) and then always opening the cards in the same order. Although it worked, it proved to be more work (and coordination, since we had to change both the HTML and the JavaScript code to make the end result look sensible). But still, a perfect game could be played this way (see the second sketch below).
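To illustrate the first tactic, here is a minimal Playwright sketch in TypeScript. The URL, the `.card` selector and the `icon-` class prefix are assumptions standing in for the real game, not its actual markup:

```typescript
import { chromium } from 'playwright';

(async () => {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com/memory'); // placeholder URL

  // Read the hidden solution: each tile's class list leaks the icon name,
  // e.g. "card icon-banana" (the exact class names are an assumption).
  const icons = await page.$$eval('.card', tiles =>
    tiles.map(t => Array.from(t.classList).find(c => c.startsWith('icon-')) ?? '')
  );

  // Group tile indices by icon, so we know which two tiles form each pair.
  const pairs = new Map<string, number[]>();
  icons.forEach((icon, i) => pairs.set(icon, [...(pairs.get(icon) ?? []), i]));

  // Open the pairs one by one: a perfect game in exactly 2 clicks per pair.
  for (const [first, second] of pairs.values()) {
    await page.locator('.card').nth(first).click();
    await page.locator('.card').nth(second).click();
  }

  await browser.close();
})();
```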
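The second tactic can be sketched in the same setting. Everything here is hypothetical: we assume the game keeps its layout in a global array (called `window.cards` below) that can be overwritten before the first click.

```typescript
import { Page } from 'playwright';

// Hypothetical sketch: overwrite the game's internal layout so that tiles
// 0-1, 2-3, ... always match, then play the same fixed click order.
async function injectFixedLayout(page: Page) {
  await page.evaluate(() => {
    const fixed = ['a', 'a', 'b', 'b', 'c', 'c', 'd', 'd'];
    (window as any).cards = fixed; // 'cards' is an assumed variable name
    // In practice the rendered tiles (HTML/CSS classes) must be rewritten
    // to the same layout, or the revealed images will not match the array.
  });
}
```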

This worked well in all of the technologies we used, and once we figured out how to do it — as a group — the end result was very elegant in all of them. Special credit goes to Qualibrate, where we were able to distil the solution into a three-step flow (go to the URL, then click tile 1 and tile 2), using built-in loops.

This is what it looks like (and it looks awesome):

Solution running in Qualibrate

2. The clever humans method

While the previous method only illustrated the advantages machines have over humans, we also thought about how real humans would solve this. We had two iterations of strategies, where one was slightly better than the other:

- The simplest strategy turned out to be turning over every card one by one (so for n cards, that means n clicks), remembering the pictures and then matching them. This means that — unless you accidentally match a pair because the two cards happen to be next to each other — you solve the game in 2n clicks (the worst case scenario, but also the one that occurs most often).

- An even better strategy is the following: click the first two cards and remember their images. From then on, click a new card, and if you know where the matching card is because you opened it previously, click that one. If not, open the next card in line. Rinse and repeat. This way you solve the game in fewer than 2n clicks (even in the worst case), and on average far fewer clicks than the previous algorithm. (A sketch of this strategy follows the list.)
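Here is a minimal sketch of that second strategy. It assumes each tile is a `.card` element that exposes its icon in a `data-icon` attribute while face up; the selector, the attribute and the flip-back delay are all assumptions about the game, not its real DOM.

```typescript
import { Page } from 'playwright';

export async function solveLikeAHuman(page: Page) {
  const cards = page.locator('.card');
  const total = await cards.count();

  const memory = new Map<string, number[]>(); // icon -> remembered tile indices
  const matched = new Set<number>();
  let nextNew = 0;                            // next tile we have never flipped

  // Flip a tile and read which icon it shows.
  const flip = async (i: number) => {
    await cards.nth(i).click();
    return (await cards.nth(i).getAttribute('data-icon')) ?? '';
  };

  const settle = (icon: string, ...tiles: number[]) => {
    tiles.forEach(t => matched.add(t));
    memory.delete(icon);
  };

  while (matched.size < total) {
    // 1) If memory holds both tiles of some pair, open them: a sure match.
    const known = [...memory.entries()].find(([, idx]) => idx.length === 2);
    if (known) {
      const [icon, [a, b]] = known;
      await flip(a);
      await flip(b);
      settle(icon, a, b);
      continue;
    }

    // 2) Otherwise flip the next never-seen tile; if we remember where its
    //    twin is, open that one right away.
    const a = nextNew++;
    const iconA = await flip(a);
    const partner = memory.get(iconA)?.[0];
    if (partner !== undefined) {
      await flip(partner);
      settle(iconA, a, partner);
      continue;
    }

    // 3) No luck: flip one more new tile and remember everything we saw.
    const b = nextNew++;
    const iconB = await flip(b);
    if (iconB === iconA) {
      settle(iconA, a, b);          // accidental match
    } else {
      memory.set(iconA, [a]);
      memory.set(iconB, [...(memory.get(iconB) ?? []), b]);
      await page.waitForTimeout(600); // assumed delay while tiles flip back
    }
  }
}
```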

This is what the first strategy looked like:

Solution running in Playwright

This strategy was easy to implement in all of the “coded” solutions, but integrating the logic into Qualibrate proved harder: we had to bend the system well beyond the way it was designed to be used.

3. The brute force method

This was the easiest to create and it worked in all of the technologies. In essence we opened every possible combination of cards, without actively checking whether we had actually found a match. Imagine we have 3 pairs of cards laid out: we always open all 15 combinations (1–2, 1–3, 1–4, … up to 5–6). Obviously this method is the most time consuming one, since it does not scale well with more cards: with 8 pairs (16 tiles) there are already 120 combinations to check (and thus 240 clicks).

But we found an improvement: once a pair is matched, don’t check combinations involving those cards again. On average this meant we had to open about 50% fewer cards (not based on an actual mathematical model, but on what we think we saw after a few runs… so hold yer horses, mathletes). A sketch of the improved brute force follows.
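A minimal sketch of the brute force with that improvement, again in TypeScript with Playwright. The `.card` selector, the `matched` CSS class marking a found pair, and the flip-back delay are all assumptions about the game’s markup:

```typescript
import { Page } from 'playwright';

export async function bruteForce(page: Page) {
  const cards = page.locator('.card');
  const total = await cards.count();

  const isMatched = async (i: number) =>
    ((await cards.nth(i).getAttribute('class')) ?? '').includes('matched');

  // Walk every combination 1-2, 1-3, ..., (n-1)-n, but never re-open a
  // tile that is already part of a found pair.
  for (let a = 0; a < total - 1; a++) {
    for (let b = a + 1; b < total; b++) {
      if (await isMatched(a)) break;     // tile a is done: next outer tile
      if (await isMatched(b)) continue;  // tile b is done: try another partner
      await cards.nth(a).click();
      await cards.nth(b).click();
      await page.waitForTimeout(600);    // assumed delay while tiles flip back
    }
  }
}
```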

Obviously no video, since it would be incredibly boring.

Now what were our conclusions?

The first conclusion was that we had a lot of fun. The fact that, as a group, we immediately started figuring out what made the game work and how we could exploit it made me proud to be a tester. Second, it demonstrated what we already knew: the difference between a tool (Qualibrate) and a framework (the others). Where one offers a complete, integrated solution in which you can do marvellous things really efficiently, the other offers endless flexibility. It shows that there is no one-size-fits-all solution for test automation, and that remains our guiding principle when consulting: look for the best fit for the customer’s problems.

And if a customer needs help solving a memory game, we now have enough ways to do it. 😉
