Quality at Mercadona Tech: QA ≠ QA ≠ Testing

Sergio Garcia
Mercadona Tech
5 min read · Dec 5, 2019

How can you improve quality in an agile team when your perception of quality is already high? This question came back to haunt me, like a monster, many times during my first months at Mercadona Tech. In this post, we will explain what we understand by quality assistance (The Ongoing Revolution in Software Testing, Cem Kaner, J.D., Ph.D., 2004) and describe the kind of actions we are implementing in the initial iterations.

The first thing that surprised me when I started in this role was the lack of definition around it. The simple goal of helping teams improve quality seemed a very high-level topic, difficult to break down. However, management was aware of this, so one of our first objectives was to find the place in the organisation where we could deliver the most value.

At that time, there was no evidence of recent major issues affecting the business. This fact added even more uncertainty to our activity and triggered some investigation into QA in agile contexts, which was useful but certainly not enough to define our new role.

So, back to the monster question: if quality seemed fine and we had successfully lived without a quality assurance team, what was expected from a new quality assistance team?

As a first approach, it was reasonable to start by exploring what our tests looked like but, to be honest, we did not discover anything unknown to developers. What we confirmed was that there was absolutely no room for dedicated testers. Not only was it against the teams’ philosophy, but it was also not going to add any value. The classic quality assurance approach is avoided here in favour of test suites owned and run by developers, essentially in automated pipelines. This was natively and successfully adopted by our organisation, and it is a key element underpinning our culture, fully in line with our engineering principles.

“It’s interesting to note that having automated tests primarily created and maintained either by QA or an outsourced party is not correlated with IT performance” — Accelerate (Nicole Forsgren, PhD, Jez Humble, and Gene Kim, 2018)
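To make this concrete, here is a minimal sketch of what a developer-owned test can look like. The function, its rules and the test names are invented for illustration, not taken from Mercadona Tech’s codebase; the point is that tests like these live next to the production code and run automatically in the pipeline on every change.

```python
# test_discount.py - hypothetical developer-owned test, run in CI on every push.
import pytest


def apply_discount(price: float, percentage: float) -> float:
    """Return the price after applying a percentage discount (illustrative code)."""
    if not 0 <= percentage <= 100:
        raise ValueError("percentage must be between 0 and 100")
    return round(price * (1 - percentage / 100), 2)


def test_apply_discount_happy_path():
    assert apply_discount(100.0, 25) == 75.0


def test_apply_discount_rejects_invalid_percentage():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```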

At Mercadona Tech, engineering excellence is not negotiable, and there is a constant quest for improvement to achieve it. However, when the scope for improvement is as vast as everything you see around you, the chances of failing to choose the right topic are not negligible.

How can we determine that a part of a system or process needs to be improved? And then, what added value do we expect afterwards? Is that change a priority? Whose priority? In an ideal world, a risk assessment session with the team (product, engineering, process, business) would produce a list of prioritised risks and, based on them, an awesome QA plan could be designed and implemented. Fortunately, none of this happened, and we had to start our own exciting quality assistance journey of observation and monitoring.

Observation

A fundamental part of our workload is following what teams are doing: identifying successful (or unsuccessful) practices, understanding why things happen the way they do, or simply detecting potential risks. All of these duties imply continuous involvement in the daily activity of the teams. This is time-consuming and must be strategically planned depending on goals; in our case, there are more teams than QA members, so priorities must be established regularly.

Acting as a simple observer in, for instance, a sprint retrospective means that everyone else in the room needs to fully trust you and understand what your role is. Any perception of QA engineers as judges or validators is a threat that we have to prevent, as it might reduce transparency and therefore restrain observation. On the other hand, interpersonal skills such as empathy, active listening, patience and flexibility facilitate healthy continuous observation. Finally, as we have seen at Mercadona Tech, a deep-rooted culture of continuous improvement and blamelessness is the best catalyst for successful observation.

Monitoring

When you work in a data-driven organisation, the first thing you need to support your arguments is, obviously, data. A full trace of anything going wrong is essential, whether it is a bug in production or a team struggling to complete their sprints. Debates must start from shared data, reducing the risk of making assertions influenced by biases or misconceptions.

Past data is relatively easy to collect, but standardising it might require considerable effort. For instance, much of the initial work was iteratively analysing and curating past incident records in order to understand where the most obvious pain came from and the reasons behind it. This was our baseline for defining a first monitoring model that we continuously challenge and improve as we gather new incident data.
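As a rough sketch of what this kind of curation can look like, assuming a simplified record format (the field names, categories and example data below are illustrative, not our real incident schema):

```python
# Hypothetical incident curation: normalise records and surface the biggest pains.
from collections import Counter
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Incident:
    date: str      # e.g. "2019-10-01"
    team: str      # owning team
    category: str  # root-cause bucket, e.g. "deploy", "data", "third-party"
    severity: str  # e.g. "minor", "major"


def top_pain_points(incidents: List[Incident], n: int = 3) -> List[Tuple[str, int]]:
    """Count incidents per category to show where the most obvious pain comes from."""
    return Counter(i.category for i in incidents).most_common(n)


incidents = [
    Incident("2019-09-02", "logistics", "deploy", "major"),
    Incident("2019-09-15", "shop", "third-party", "minor"),
    Incident("2019-10-01", "logistics", "deploy", "minor"),
]
print(top_pain_points(incidents))  # -> [('deploy', 2), ('third-party', 1)]
```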

A similar approach will have to be applied to other, more qualitative metrics. One example is the teams’ health status, which we have started to measure through an adaptation of Spotify’s Squad Health Check model. In short, once a quarter we will obtain a self-perceived health map per team based on a green/yellow/red choice, as well as an improvement trend (up/equal/down) relative to their previous status. This map provides us and management with a cross-team vision that helps to identify or confirm common pains. Furthermore, as our teams are self-managed, this is a powerful framework for supporting their own continuous improvement process.
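A minimal sketch of how such a health map could be represented and queried follows; the health dimensions, team names and statuses are invented for illustration, and the real model is richer than a single lookup.

```python
# Hypothetical representation of a quarterly squad health check.
from enum import Enum
from typing import Dict, List, Tuple


class Status(Enum):
    GREEN = "green"
    YELLOW = "yellow"
    RED = "red"


class Trend(Enum):
    UP = "up"
    EQUAL = "equal"
    DOWN = "down"


# One (status, trend) pair per health dimension, per team.
HealthMap = Dict[str, Dict[str, Tuple[Status, Trend]]]

health_map: HealthMap = {
    "team-a": {"delivering value": (Status.GREEN, Trend.UP),
               "codebase health": (Status.YELLOW, Trend.EQUAL)},
    "team-b": {"delivering value": (Status.RED, Trend.DOWN),
               "codebase health": (Status.GREEN, Trend.UP)},
}


def teams_in_pain(health_map: HealthMap, dimension: str) -> List[str]:
    """Cross-team view: teams reporting a non-green status on a dimension."""
    return [team for team, dims in health_map.items()
            if dims[dimension][0] is not Status.GREEN]


print(teams_in_pain(health_map, "delivering value"))  # -> ['team-b']
```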

As quality is not a standard measure, we will have to find metrics that help us understand how we are performing. Our QA team will have to assist in achieving this goal by providing a scalable and trustworthy quality monitoring system.

Conclusion

From our perspective, there is no single way of implementing quality assistance, as it is heavily dependent on context (people’s skills and backgrounds, pre-established culture and methodology, management support, organisation). Our approach is simply the one we think will work for us, and it will be continuously challenged and iterated on.

(Almost) without noticing, we are aligning our practices with the Seven Modern Testing Principles (Alan Page & Brent Jensen), which serve as a valuable checklist for periodically reviewing our work.

As in a constant feedback loop, observation and monitoring complement each other: on the one hand, we decide what to monitor by observing our teams; on the other, metrics help us decide how to prioritise our observation efforts. Both constitute the core of our QA framework and will guide our decisions to help teams achieve the highest levels of quality.
