Importance of Visibility and Reporting in Software Quality

Gerard · Published in julotech · Feb 16, 2024

Prologue

During a software’s life cycle we will undoubtedly run into challenges and issues along the way; this is one of the three certainties in life, along with death and taxes. No software product is bug-free. No matter how talented the developers are or how well planned the project is, there will still be issues, whether during development itself or after the product has already been released to customers.

So what do we as Quality Assurance (QA) Engineers do when we face these issues? Of course, once we find an issue we need to notify our stakeholders and developers to get it fixed, right? The problem is when an issue is not an urgent one: after we report it to our team, it gets put on the backlog, or as some would call it, the “purgatory”. Oftentimes, once a non-urgent issue lands in the backlog it is no longer visible unless someone stumbles upon it while scrolling through Jira tasks. It is stuck in limbo: the issue is still present, but nobody notices, because it is not in the current sprint and the bug may be too obscure or require such specific steps to reproduce that very few complaints come in from customers.

This is why at JULO we value visibility and readable reporting when it comes to our issues and bugs, especially the ones in Production. What we mean is that we provide scheduled reporting on our currently active Production bugs and issues, no matter their priority. The purpose is to constantly remind everyone working on the product that there are still issues present that need fixing. This way we provide a clear view of how our product is doing in terms of Production bugs, what actions we need to take, which bugs we need to prioritize, and so on.

Needle In a Haystack

Take a look at the image below, showing one of our squads’ Jira backlogs.

Example of a Sprint Backlog

Looking at this, it would be hard for anyone to find their high-priority cards, let alone the lower-priority ones. This is why it is imperative that we have the proper filters to find the issues we need.

Jira provides the Jira Query Language (JQL for short) to filter out the specific issues we need. It works similarly to SQL: you supply conditions and parameters, and Jira returns only the cards that match them. JQL is the foundation of our visibility and reporting at JULO.
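
For example, a JQL filter for open high-priority Production bugs might look like the query below. The project key and label are hypothetical, and it is shown here as a Python string since that is how the script later passes it to Jira.

# Hypothetical JQL filter: open, high-priority Production bugs, newest first.
jqlQuery = ('project = JULO AND issuetype = Bug AND priority = High '
            'AND labels = production AND statusCategory != Done '
            'ORDER BY created DESC')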

Of course, we can’t just list everything with JQL and present it as a report. That would only make the haystack smaller; we still need to find the needle. So after gathering the appropriate data from Jira, we need to shape it into a readable report that is easy to understand. Here is an example of what we have implemented at JULO.

Generated Report Example

From this report, anyone can easily see how many issues we still have in Production, how many are High priority, how many are new, and so on. Additionally, if anyone wants to see the details behind each metric, we provide a URL that links to the complete list of issues on Jira. This way we offer a summary that everyone can read and understand, along with the details of each metric for those who wish to dig deeper.

Implementing Automated Reporting

Of course, we could always do the hard yards: checking and following up on each issue manually by looking it up on the sprint board or through a JQL filter, then reporting it by hand to a Slack channel. But just because we can doesn’t mean we should. This kind of task is tedious and repetitive, and it gets frustrating very quickly for QA Engineers. That is why we implemented a way to fetch the data we already have on Jira, compile it using JQL, and send it to our Slack channel using a Slack application.

Integrating a Slack application that can pull data from Jira directly reduces the manual effort needed to compile that data and process it into a readable report. It also reduces the chance of human error during report creation, ensuring that what we present is accurate and based entirely on the data we have in Jira. Here is a very simple flowchart of how we implement this method.

Flowchart of the Slack API Integration

From this flowchart you can see we have 3 key steps before sending the final report to a channel.

  1. Executable script
    This consists of the necessary API credentials for both Jira and Slack. Here we also specify the JQL filters we want to showcase, and what data we want to fetch and process into a report. We also define the target channel where the report will be sent, along with the necessary API call to send the message in the Slack block format (a minimal sketch of this configuration is shown after this list).
  2. Integrate Data Fetched into Slack Block Message
    As mentioned previously, just gathering data from Jira does not make for a good report. This is why we process the raw data fetched from Jira into readable metrics and numbers we can put in a report. Once the data is processed, we need to put it into the Slack block message format, which is required for sending the report.
  3. Script Execution
    Here we only need to execute the script to send the report to our desired channel. We can use a manual trigger from our machine for specific reporting needs (e.g., a feature testing progress report or a regression testing report), or we can set up a cron job to send the report at a fixed time based on the team’s needs. Additionally, this can also be integrated into a Jenkins pipeline to trigger every time a deployment happens.
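
As a rough sketch of what the configuration in step 1 might look like (the environment variable names, JQL filter, and channel name below are our own illustration, not an exact copy of our script), it simply gathers the credentials, the query, and the destination:

import os

# Credentials for both Jira and Slack, read from environment variables
# (the variable names are illustrative).
email = os.environ['JIRA_EMAIL']            # registered Jira email
jiraToken = os.environ['JIRA_API_TOKEN']    # Jira API token
slackToken = os.environ['SLACK_APP_TOKEN']  # Slack app token

# The JQL filter the report is built from (hypothetical query).
jqlQuery = ('project = JULO AND issuetype = Bug '
            'AND labels = production AND statusCategory != Done')

# Where the report will be posted (hypothetical channel name).
targetChannel = 'qa-production-bugs'

For step 3, a cron entry along the lines of 0 9 * * 1-5 python3 /path/to/report_script.py (path and schedule hypothetical) would post the report every weekday morning; the same script can equally be triggered from a Jenkins pipeline step after a deployment.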

Flexibility and Script Structure

With this implementation we are essentially free to fetch any metric from any data source, provided we have the necessary credentials and API calls. The example we give here is from Jira, but we could realistically use any number of data sources to present a report through a script and the Slack API. From Jira alone we can report on a range of different metrics, from the previously mentioned bug and defect reporting to Sprint performance, IT security defects, and more. As long as we have the correct parameters and queries, we can create any report we need and send it to the appropriate channels.

In this section we will briefly discuss how the script actually works and how you can implement this method for your own use cases. The explanations are kept generic so the approach can be easily adapted to any working environment.

As shown in the flowchart above, there are several key components of the executable script, which we will explain here. The script itself can be written in any programming language that has a Slack SDK available; in our case we are using Python 3 with the slack_sdk library.

Slack App Token
This is essential for the integration. It is basically the key that connects your script to your workspace (Slack can be replaced by whichever workspace application you use). The token is unique to your application and is used to send the report to your channels. If you are using Slack as your workspace, the token should look like this:

Example of Slack Token

Data Fetched from Source
Again, this could come from any data source you like, but in this instance we will show an example from Jira. To fetch issues using JQL you need the following:

  1. Your registered email on Jira
  2. Your unique Jira token (please refer here for more details)
  3. JQL query

Once you have all three items, all you need to do is put them in a request to Jira’s search API, as shown below:

# JIRA query request
import requests
from requests.auth import HTTPBasicAuth

def jira(jqlQuery):
    # Jira Cloud issue search endpoint for our site
    url = 'https://juloprojects.atlassian.net/rest/api/2/search'
    # Authenticate with the registered email and Jira API token
    auth = HTTPBasicAuth(email, jiraToken)
    # The JQL filter plus simple pagination parameters
    data = {'jql': jqlQuery, 'startAt': 0, 'maxResults': 100}
    return requests.post(url, auth=auth, json=data)

The response should be raw JSON containing the Jira issues that match the JQL filter. What you do with this raw response is entirely up to your reporting needs and use cases.
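
As one possible starting point (a minimal sketch; the field names follow Jira’s standard search response, but the tally itself is our own illustration), you could count the fetched issues by priority before building the report:

# Tally the fetched issues by priority (illustrative only).
response = jira(jqlQuery).json()

priority_counts = {}
for issue in response.get('issues', []):
    priority = issue['fields']['priority']['name']  # e.g. "High", "Medium"
    priority_counts[priority] = priority_counts.get(priority, 0) + 1

total_issues = response.get('total', 0)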

Target Slack Channel
This one is pretty self-explanatory: it is where you want to send your report, in this case a Slack channel. However, it can be substituted with whichever collaboration messaging tool you use.

Slack Block Message
This is a format specific to sending messages through the Slack API. A Slack block is a JSON-styled message body that you can send via the Slack chat_postMessage API. You can use the Slack Block Kit Builder to construct your message with its very helpful UI and preview tools. The JSON format also lets you freely insert variables into the message blocks from your script.

Slack Block Kit Builder
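
As a rough example (the wording and layout are hypothetical, not our exact template), a report header and summary could be assembled as Slack blocks like this, interpolating the counts computed in the earlier sketch:

# A simple two-block report message, reusing `total_issues` and
# `priority_counts` from the earlier sketch (illustrative layout).
message = [
    {
        'type': 'header',
        'text': {'type': 'plain_text', 'text': 'Production Bug Report'}
    },
    {
        'type': 'section',
        'text': {
            'type': 'mrkdwn',
            'text': f"*Total open issues:* {total_issues}\n"
                    f"*High priority:* {priority_counts.get('High', 0)}"
        }
    }
]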

Slack chat_postMessage API
This will be the API you need to send your composed report to a Slack channel. You would need the following as request parameters:

  1. targetChannel: Where the report will be posted
  2. message: The report body in the form of Slack blocks
  3. slackToken: The unique Slack app token mentioned previously

# Send message to Slack using the app token defined earlier
from slack_sdk import WebClient

def slack(targetChannel, message):
    WebClient(slackToken).chat_postMessage(
        channel='#' + targetChannel, blocks=message)
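
Putting the pieces together, a single run of the script then boils down to something like the sketch below. Note that build_report_blocks is a hypothetical placeholder for your own report-building logic (for example, the priority tally and blocks sketched earlier), not a function from any library.

# End-to-end run, reusing the helpers defined above.
if __name__ == '__main__':
    response = jira(jqlQuery)                       # query Jira with the JQL filter
    message = build_report_blocks(response.json())  # your own logic returning Slack blocks
    slack(targetChannel, message)                   # post the finished report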

And that’s about all you need to start integrating the script into your workspace.

Epilogue

As mentioned in the prologue, providing visibility is essential to maintaining a software product’s overall health and quality. To do that, we also need to be flexible in how we provide such information. Software quality is not determined only by bugs and issues, but also by things like security defects and feature improvements. Looking at it from a Scrum perspective, it can also stem from Sprint performance, such as completion rate.

That’s why we decided to go with this script-based approach, where the process of getting metric visibility is defined in code. It allows us to reuse the code to fetch whatever relevant data we need and process it in a way that provides valuable information to the team. With this flexibility, anyone can create similar automated reports based on their team’s needs and urgency, quickly add them to their reporting schedule, and get better and faster visibility.
