Can You Measure The Quality?

Anas Fitiani
4 min read · May 11


Photo by Diana Polekhina on Unsplash

This was, literally, a question (in the context of setting objectives) that came out of one of our stakeholder meetings.

That day was a nightmare for me. I’ve been in this field for some time, and I KNOW that most of the time in the software life cycle, life is real: title-only stories, acceptance criteria based on hypotheses, open tickets that shipped back in the summer of 1998, and even pushing the feature (a well-dressed bug) and then creating the ticket afterwards for tracking purposes…

and so many other real-life examples.

So, I started with the most basic thing as a trial (to see how deep the problem ran).

After several meetings within the QA department, we agreed on a basic tagging process to filter tasks by squad and to count the actual testing phases and re-openings for a single task.
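
As a rough illustration (not our exact implementation), the counting boiled down to something like the sketch below; the StatusChange event shape is a hypothetical stand-in for whatever your tracker’s changelog exposes.

```typescript
// Hedged sketch: counting testing phases and re-openings per task.
// The StatusChange shape and status names are hypothetical; adapt them
// to your tracker's event log.
interface StatusChange {
  ticketId: string;
  squad: string;
  from: string;
  to: string; // e.g. "In Testing", "Reopened", "Done"
}

interface TaskQualityStats {
  testingPhases: number;
  reopenings: number;
}

function countPerTask(events: StatusChange[]): Map<string, TaskQualityStats> {
  const stats = new Map<string, TaskQualityStats>();
  for (const e of events) {
    const entry = stats.get(e.ticketId) ?? { testingPhases: 0, reopenings: 0 };
    // Each transition into testing counts as one testing phase.
    if (e.to === "In Testing") entry.testingPhases += 1;
    if (e.to === "Reopened") entry.reopenings += 1;
    stats.set(e.ticketId, entry);
  }
  return stats;
}
```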

We ran it across squads for a sprint (two weeks) so we would have some data to look into. Here, I faced the fact that the team supposed to back me up wasn’t sure what was happening! Don’t get me wrong, I don’t blame them at all; the process we discussed and approved was driven by me.

The first report was a disaster in every sense, so we could not share it due to inaccuracy and a lack of objectives: it measured nothing; it was just data that could not be processed or used to build a decision on.

We paused a little and realized that measuring quality cannot be built solely on top of a ticket-tracking tool or the personal judgment of the assigned QA engineer.

The Revolution (In our mindset)

We revisited our previous approach… Kidding, we threw it in the bin and started all over again.

We started with:

1- What does it mean? What is it needed for? What are they looking for?

We learned that when a manager, stakeholder, or business owner asks you to measure quality, they:

a- Look for the ROI of the quality engineering team. (I paid extra for the other apple, so I have to know what makes it unique… and it should taste excellent.)

b- Need a measurable account of the software, in a tangible business form, for exposure, investment, and/or potential partners.

c- Need to pinpoint the weaknesses and sell the strengths.

d- Need data to build technology decisions on top of: hire or not? Utilization? Tech stack?

2- What to measure?

a- Squads
b- Business vs. Tech
c- Releases and deployments
d- Infrastructure
e- Delivery and growth
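
To make these dimensions concrete, here is a minimal sketch of how a single measurement record could be typed; every field name here is illustrative, not our actual schema.

```typescript
// Hedged sketch: one way to type a measurement record across the
// dimensions above. All names are illustrative assumptions.
type Division = "Business" | "Tech";

interface MeasurementRecord {
  squad: string;            // a- squads
  division: Division;       // b- business vs. tech
  releases: number;         // c- releases ...
  deployments: number;      // c- ... and deployments
  infraIncidents: number;   // d- infrastructure
  deliveredStories: number; // e- delivery and growth
  periodStart: Date;
  periodEnd: Date;
}
```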

3- How?

We came back to where we started. Haha, joking aside, this time we were ready to measure the process flow and the work the whole development department does, and map it to the “Meaning” and “What to measure” points above.

Put simply, basic requests from management looked like this:

Scenario A:
Management: We are not sure how the Flights product and its development are doing.
Anas: Can you please be more vague? …Just kidding. Clearly, management doesn’t know whether this product is on track: are they hot-fixing a lot? Are they producing many issues?

Scenario B:
Management: We hired five extra resources this quarter.
Anas: They need release and performance visualizations from before and after the hiring.

Scenario C:
Management: Are we stable?
Anas: They need production environment statistics and open issues along with their severity.

…and so on and so forth.

So, as a result of our time and effort, we came up with a Next.js app (we are part of the technology team; a PDF report is not the best representation) that’s served through one of our domains.
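
For a flavor of how such an app can serve live numbers, here is a minimal sketch of a Next.js (App Router) route handler; fetchHotfixCounts and its endpoint are hypothetical stand-ins for our internal data sources.

```typescript
// app/api/report/hotfixes/route.ts
// Hedged sketch of a Next.js API route serving live report data.
// The tracker URL and response shape are illustrative assumptions.
import { NextResponse } from "next/server";

async function fetchHotfixCounts(from: string, to: string): Promise<number> {
  // In reality this would query the tracking tool / deployment database.
  const res = await fetch(
    `https://tracker.example.com/api/hotfixes?from=${from}&to=${to}`,
  );
  const data = await res.json();
  return data.count;
}

export async function GET(request: Request) {
  const { searchParams } = new URL(request.url);
  const from = searchParams.get("from") ?? "2023-01-01";
  const to = searchParams.get("to") ?? "2023-03-31";
  const count = await fetchHotfixCounts(from, to);
  // Data is fetched on every request, so the report stays real-time.
  return NextResponse.json({ from, to, hotfixes: count });
}
```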

We categorize our report topics as follows:
1- Releases
2- Hot fixes
3- Deployments
4- Production status
5- Q vs. Q (quarter-over-quarter) comparisons
6- Analysis docs for the occurred hot fixes
7- Quality improvement recommendation letter.

We publish the report on a monthly cadence, and it’s worth mentioning that it serves real-time data fetched from the related data sources.

The report is shared and can be accessed anytime by any concerned party.

Let’s take an end-to-end example of one of the report topics and connect the dots.

Hot fixes

We built an API connection to our tracking tools and deployment database to fetch the hot fixes created within specified time boundaries, then ran the checks below to assess quality (a code sketch of this flow follows the list):

1- Compare with similar time boundaries based on business recommendations.

[Chart: Q vs. Q hot fix count]

2- We understand why each one happened through an analysis provided by the team and validated by the QA engineer assigned to that squad.

3- We marked the cause. This is a very beneficial step for understanding how dependable our partners are, or where we need to tighten up.

[Chart: Hot fixes by cause]

4- We marked the product squad (e.g. Flights -> Connectivity).

5- We marked the division (e.g. Flights -> Web technology).

6- We imposed a corrective action and monitoring for that particular web service or UI feature.
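
Putting steps 1 and 3 together, here is a hedged sketch of the fetch-and-compare flow; the Hotfix shape, cause labels, and fetchHotfixes stub are illustrative assumptions, not our actual integration.

```typescript
// Hedged sketch of the fetch-and-compare flow (steps 1 and 3 above).
interface Hotfix {
  id: string;
  createdAt: Date;
  cause: string;    // e.g. "third-party", "regression", "config"
  squad: string;    // e.g. "Flights -> Connectivity"
  division: string; // e.g. "Flights -> Web technology"
}

async function fetchHotfixes(from: Date, to: Date): Promise<Hotfix[]> {
  // Placeholder: in our setup this called the tracking tool's API with
  // the time boundaries; here it just returns an empty list.
  return [];
}

// Step 1: compare counts across two similar time boundaries (e.g. Q vs. Q).
async function compareQuarters(
  currentStart: Date, currentEnd: Date,
  previousStart: Date, previousEnd: Date,
) {
  const [current, previous] = await Promise.all([
    fetchHotfixes(currentStart, currentEnd),
    fetchHotfixes(previousStart, previousEnd),
  ]);
  return { current: current.length, previous: previous.length };
}

// Step 3: group hot fixes by marked cause to spot where to tighten up.
function countByCause(hotfixes: Hotfix[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const hf of hotfixes) {
    counts[hf.cause] = (counts[hf.cause] ?? 0) + 1;
  }
  return counts;
}
```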

This is one of the quality practices we follow to track any issue or bad spike.

This is a glance at the quality-measurement reporting we publish to management. We keep receiving further requirements to compare and visualize data, which speaks to management’s engagement. One more objective achieved!

The base of this report is the people and teams; the report will not improve the quality we deliver without proper action by those teams.

Keep on Learning…
