Building the UX of our Internal Testing tool

Introduction

In the past year at Vitech, I have worked on some interesting UX problems. This one in particular is unique because of its domain, scope and the users it serves. Here’s the task we had at hand: an existing GUI-based software testing tool, long overdue for a facelift, was to be integrated with another command-line testing tool (both developed in-house). We took advantage of this to build one single tool with a seamless user experience. We decided to call this new tool “Automated Regression Testing” (ART), although there were suggestions floating around to call it Fully Automated Regression Testing 🤦‍

The Team

Bhavin Mehta — React Developer
Gautam Krishnan — User Experience and Visual Design
Anurag Yagnik and Vladislav Sheykhet — Management and Direction
Additional thanks to Sarath Meepagala and Aloysius Pollisco

Ideation

We already had a large part of our ideation done for us, as we were redoing an app our users were already using. We also knew our audience well (at least in this iteration, where the tool would only be used in-house). The part we had to figure out was how to deliver a better user experience without drastically changing the way our audience already uses the tool (our organization is 1500+ employees strong). And because the scope of this app was smaller than that of the other products we work on, the scenarios we had to think of were limited.

Goals

  1. As the existing tools were developed only for Windows, we wanted to make this cross-platform. We chose to build it using Electron + React (a minimal sketch of that setup follows this list).
  2. It would mainly be used by QA testers and developers to run their own tests, and probably also by managers to run a test or two from time to time. We might give the tool to our clients in the future so they can run tests on the customized products we build for them.
  3. Our (other) new-generation products are built on Material Design, so this tool will follow suit.
  4. This new tool must address all the UX challenges of the existing app.
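
For the curious, here is a minimal sketch of what an Electron shell around a React renderer typically looks like. The file names, window size and load path are illustrative assumptions, not ART's actual setup.

```typescript
// main.ts: illustrative Electron entry point (assumed file name, not ART's source)
import { app, BrowserWindow } from 'electron';
import * as path from 'path';

function createWindow(): void {
  // A single window hosting the React bundle; the dimensions are arbitrary.
  const win = new BrowserWindow({
    width: 1024,
    height: 768,
    webPreferences: { preload: path.join(__dirname, 'preload.js') },
  });

  // The React app is assumed to be bundled into index.html by the build step.
  win.loadFile(path.join(__dirname, 'index.html'));
}

app.whenReady().then(createWindow);

// Quit on all platforms except macOS, where apps conventionally stay active.
app.on('window-all-closed', () => {
  if (process.platform !== 'darwin') app.quit();
});
```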

User Interviews and Identifying UX Bottlenecks

Our existing legacy in-house testing app

Along with the need for a major visual facelift, there were many UX problems to be addressed. I conducted user interviews with two of our employees who are QA testers and use the existing app. I also observed them using the app and asked them to think aloud as they performed each action. I asked them to go into every feature and show me a real-world scenario of how they would use it. Here are some takeaways from these conversations:

  1. The main interface consists of mandatory test parameters and optional reporting parameters. Currently, all of these parameters are built into the same screen, often intermixed. Due to years of scope creep, some parameters are completely unnecessary and unused, and there’s no indication of which of them are required.
  2. Some fields on the UI depend on other fields. Currently, there’s no visual cue for this, and the dependent fields are not located next to each other.
  3. User expectation: all the user needs to do is enter or select values for the required fields (just five fields in the above screenshot) for the test to be valid and runnable. Our users run tests from time to time by changing the parameters. Likely scenarios are running the same test against different environments (local, dev, QA, etc.) and running different tests on the same environment. The tests themselves can be saved, but there’s currently no way to save environments; they must be set manually each time.
  4. The existing app creates a new folder for every saved test, like the one seen in the “Test Report Location” field in the above screenshot. Users are expected to know that they must point to a test folder to retrieve a saved config (as opposed to simply choosing a file).

User Personas

Our developers, QA testers and managers are our only audience as of now. However, we expect this to expand to managers, developers and testers at our client companies.

The prospective use of ART outside our walls means the app cannot rely on jargon and labels used and understood only within our organization. There might also be no way to know who our users really are. Although we didn’t have to worry about this in the current iteration, we made sure to document it while building this version.

Prerequisites and Assumptions

We streamlined the features of the current app into the new one, but there were some features we couldn’t simplify. As this is an enterprise application, we do not expect this complexity to go away, because that would require many other systems to change as well.

  1. ART is tightly integrated with JIRA, where all the test cases are stored. The user is expected to know how JIRA works.
  2. A user can choose any Environment to run tests. Next, a Project needs to be selected, followed by a Version and a Cycle. The Versions depend on the selected Project, and the Cycles depend on the selected Version (diagram below). The Cycle is the last level of the hierarchy, under which individual test cases are present.
  3. Tests vs Test Cases: Users create tests in the app, which run test cases stored in JIRA.

User flow for running a test: the links signify that the values change based on the choice you make.

“Every system has an inherent amount of complexity that cannot be removed or hidden.” — Tesler’s Law, also known as the Law of Conservation of Complexity.
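
To make that dependency chain concrete, here is a small TypeScript sketch of how the hierarchy behind those dropdowns could be modelled. The type and function names are my own illustration, not ART's actual code.

```typescript
// Illustrative model of the Environment / Project / Version / Cycle hierarchy.
// All names and shapes are assumptions for this sketch, not ART's real data model.
interface Cycle   { id: string; name: string; testCaseIds: string[] }
interface Version { id: string; name: string; cycles: Cycle[] }
interface Project { id: string; name: string; versions: Version[] }

interface TestSelection {
  environmentId: string;  // any environment can be chosen independently
  projectId?: string;
  versionId?: string;     // only meaningful within the chosen project
  cycleId?: string;       // only meaningful within the chosen version
}

// The options offered for "Version" depend on the selected Project...
function versionsFor(project: Project): Version[] {
  return project.versions;
}

// ...and the options offered for "Cycle" depend on the selected Version.
function cyclesFor(project: Project, versionId: string): Cycle[] {
  const version = project.versions.find(v => v.id === versionId);
  return version ? version.cycles : [];
}
```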

Prototyping

Iteration 1
The idea was to separate the test parameters from the fields that are optional and used only for reporting. We also separated batch and individual tests, so that one can switch between the two as required. We introduced the concept of environments, where users can configure all the instances they want to run their tests against, such as Dev, QA and Local. Users can also create new tests and load test files from disk by clicking the floating action button (FAB).
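
For illustration, an environment in this sense can be as small as a named target to point tests at. The fields below are assumptions, not ART's actual configuration schema.

```typescript
// Hypothetical shape of an "environment": a named instance tests can be run against.
interface Environment {
  name: string;     // e.g. "Local", "Dev", "QA"
  baseUrl: string;  // where the application under test is reachable
}

// Users would configure a handful of these once and reuse them across tests.
const environments: Environment[] = [
  { name: 'Local', baseUrl: 'http://localhost:8080' },
  { name: 'Dev', baseUrl: 'https://dev.example.com' },
  { name: 'QA', baseUrl: 'https://qa.example.com' },
];
```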

Feedback from Iteration 1: We tested this iteration with a few of our employees who use the current in-house testing tools. We gave them a few tasks to perform, conducted think-aloud sessions, and noted how they reacted to the introduction of a new concept (environments) and how easy or hard it was for them to adapt to the tool.

From this, we found out that keeping Test Execution and Test Configuration (this is what we chose to call the report parameters in this iteration) on separate tabs was a poor idea. The report fields were not a separate concern but an extension of the test parameters themselves, as the report is generated after the test is run. Yet to run the test, one had to go back to the Execute Tests tab and click ‘Run’. Although our intention was to separate two features, we ended up creating a disconnect.

Iteration 2
We thus dropped the idea of putting the test parameters and the reporting parameters on separate tabs and tried a different approach. With no tabs, we also did away with the large app bar at the top to place more emphasis on the content area.

We initially considered separating batch and individual tests onto two different screens, but dropped that idea as it required one extra click and made the ‘Individual Tests’ feature harder to discover. Instead, we added an extra field and made it optional.

Material Design suggests that on large screens the FAB can attach to an ‘extended app bar’ at the top, but we had just gotten rid of the large app bar. We decided to place the FAB at the bottom, since all the major actions would now be at the bottom.

Feedback from Iteration 2: While we thought that we had placed all the necessary actions at the right places, we made some interesting observations:

  1. Our users did not save tests as often as we thought, which also meant that the ‘load from disk’ feature was underused.
  2. Because only a few parameters are required to run a test, our users found it easier to change them than to save and retrieve tests.
  3. The gear icon did not convey that it was meant for configuring environments; our users took it for a general settings icon. They expected a dropdown menu, but it spawned a pop-up.
  4. There were some serious problems with the FAB: the ‘Add’ icon spawned two new actions, ‘Add’ and ‘Load’, which was confusing. The ‘Add’ action was itself ambiguous, as our users thought new environments could be added from here (all it really did was clear the form to create a new test).

Iteration 3
In this iteration, we took stock of the many actions one can perform from the home screen and rethought them.

We removed the Save button, the Save and Run button, and the FAB from the home screen and tucked those actions under the hamburger menu; none of them is necessary to run a basic test. There were now just two actions: the ‘Run’ button and the hamburger menu for everything else. We will have to rethink this as more features are added in the future.

Visual Design

We went with Material Design to make sure our app speaks a visual language that is widely understood, and because our app would be cross-platform: it provides users with a seamless experience when they run tests on different machines running different operating systems.

ART Information Architecture

Alongside the visual design, here are some of the design decisions we made:

Pro Mode
Every field has a short help text below it. We initially spent some time deciding whether these texts should be present at all. The consensus was that although they are useful for getting started, they only get in the way after the first few uses. We therefore added a “Pro Mode” option so users can turn off the help texts once they have no use for them.
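
As a rough sketch (the component and prop names are assumptions, not the actual ART code), a help text gated behind a Pro Mode flag could look like this in React:

```tsx
// Illustrative React sketch: help texts render only while Pro Mode is off.
import React, { useState } from 'react';

interface FieldProps {
  label: string;
  helpText: string;
  proMode: boolean;
}

function FieldWithHelp({ label, helpText, proMode }: FieldProps) {
  return (
    <div>
      <label>{label}</label>
      <input type="text" />
      {/* Pro Mode suppresses the hint once users no longer need it */}
      {!proMode && <small>{helpText}</small>}
    </div>
  );
}

export function TestForm() {
  const [proMode, setProMode] = useState(false);
  return (
    <form>
      <label>
        <input
          type="checkbox"
          checked={proMode}
          onChange={e => setProMode(e.target.checked)}
        />
        Pro Mode
      </label>
      {/* "Project" and its help text are placeholder content for this sketch. */}
      <FieldWithHelp
        label="Project"
        helpText="The JIRA project your test cases live in."
        proMode={proMode}
      />
    </form>
  );
}
```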

Menu vs Menu bar
Why did we go with a separate hamburger menu when there’s an OS-level menu bar provided for desktop applications? There are several reasons. The menu bar on a Mac sits at the top of the screen and does not move with the app, which means the user has to go to the top-left corner of the screen every time they wish to access the features of our app (which doesn’t need to take up the entire screen). One of our use cases is that a user might have to run tests from the app on different platforms, and because the OS-level app menu is positioned differently on Mac and Windows, they would have to juggle the two approaches. Having a dedicated menu button unifies the user experience across platforms.

More than being a convenient place to put all the features, the hamburger menu felt like the best option to keep the test area distraction-free. We identified the lesser-used features and hid them there, grouping the items logically.

Future Scope

The design above was done only with our current audience in mind (who are familiar with the old application) and with new QA engineers who know the basics of testing. I’ve made a note of some of the potential changes we’d make when giving this tool to our clients. These may be no-brainers, but we set them aside for this iteration as we focused on an already-familiar audience to deliver a simpler experience.

Progressive Disclosure
Users who are unaware of the Project > Version > Cycle concept need to be shown that the Version values depend on the selected Project, and the Cycle values depend on the selected Version. So we would reveal the Version field only after a Project has been selected, and so on.
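
A rough React sketch of that behaviour; the lookup tables and values are placeholders, not real project data:

```tsx
// Illustrative sketch: each dropdown is revealed only after its parent has a value.
import React, { useState } from 'react';

// Hypothetical lookup tables standing in for data that would come from JIRA.
const versionsByProject: Record<string, string[]> = { ProjA: ['1.0', '1.1'] };
const cyclesByVersion: Record<string, string[]> = { '1.0': ['Smoke', 'Regression'] };

export function TestCasePicker() {
  const [project, setProject] = useState('');
  const [version, setVersion] = useState('');
  const [cycle, setCycle] = useState('');

  return (
    <div>
      <select
        value={project}
        onChange={e => { setProject(e.target.value); setVersion(''); setCycle(''); }}
      >
        <option value="">Select a Project</option>
        <option value="ProjA">ProjA</option>
      </select>

      {/* Version is revealed only once a Project is chosen */}
      {project && (
        <select
          value={version}
          onChange={e => { setVersion(e.target.value); setCycle(''); }}
        >
          <option value="">Select a Version</option>
          {(versionsByProject[project] || []).map(v => <option key={v}>{v}</option>)}
        </select>
      )}

      {/* Cycle is revealed only once a Version is chosen */}
      {version && (
        <select value={cycle} onChange={e => setCycle(e.target.value)}>
          <option value="">Select a Cycle</option>
          {(cyclesByVersion[version] || []).map(c => <option key={c}>{c}</option>)}
        </select>
      )}
    </div>
  );
}
```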

Explaining Environments
We must make sure users know how to create environments and understand how they work before anything else; this should be our highest priority. An onboarding screen could be used to create the first environment, and the last item of the environments dropdown could be an ‘Add Environment’ option.

Recent Tests
As the scope of this tool grows, more features will eventually be added, which translates to more test parameters. We expect this complexity to reach a point where it makes more sense to save and retrieve tests than to quickly change a few parameters. At that point, it would probably make sense to show a list of recently run tests with the option to retrieve and run them.
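
One possible shape for this, sketched against localStorage; the storage key, the ten-entry cap and the SavedTest shape are all assumptions:

```typescript
// Illustrative sketch of a "recent tests" list kept in localStorage.
interface SavedTest {
  name: string;
  parameters: Record<string, string>;
  lastRunAt: string; // ISO timestamp
}

const RECENT_TESTS_KEY = 'art.recentTests'; // assumed key name
const MAX_RECENT = 10;                      // assumed cap

export function recordRecentTest(test: SavedTest): void {
  const existing: SavedTest[] = JSON.parse(localStorage.getItem(RECENT_TESTS_KEY) ?? '[]');
  // Newest first; drop any older entry with the same name, then cap the list.
  const updated = [test, ...existing.filter(t => t.name !== test.name)].slice(0, MAX_RECENT);
  localStorage.setItem(RECENT_TESTS_KEY, JSON.stringify(updated));
}

export function loadRecentTests(): SavedTest[] {
  return JSON.parse(localStorage.getItem(RECENT_TESTS_KEY) ?? '[]');
}
```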

Exclusive Web App
We also intend to build a browser-based version of this app, for a reason beyond convenience. Currently, the desktop app spawns a browser on the user’s machine and runs the test there. The web app, on the other hand, could be used exclusively to run tests on a remote server. We think these two approaches cater, respectively, to people who run tests regularly (developers, QA, etc.) and to people who want to run one-off tests.

Reports
Currently, all the test reports are stored in JIRA, and the user needs to click a snackbar notification to follow an external link to view the report. We could pull the logs and reports and show them in the app itself for simplicity. This also becomes important when clients need to view reports but may not have JIRA permissions.
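
A rough sketch of how the app could fetch and render a report instead of linking out. The endpoint and response shape are purely hypothetical and would depend on how reports end up being exposed (directly from JIRA or through an internal service):

```typescript
// Illustrative only: fetch a test report and hand it to the UI instead of
// opening an external JIRA link. The URL and response shape are hypothetical.
interface TestReport {
  testName: string;
  status: 'passed' | 'failed';
  log: string;
}

export async function fetchReport(reportId: string): Promise<TestReport> {
  // A hypothetical internal endpoint that proxies JIRA, so clients without
  // JIRA permissions could still see their own reports.
  const response = await fetch(`/api/reports/${reportId}`);
  if (!response.ok) {
    throw new Error(`Could not load report ${reportId}: ${response.status}`);
  }
  return (await response.json()) as TestReport;
}
```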

Keyboard Shortcuts
This may not seem important at first glance, but our power users would want to configure keyboard shortcuts for speed. Personally, I can think of using shortcuts to jump between local, dev and QA environments, but the real utility and practicality need to be worked out and tested with users.
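
As a starting point, here is a sketch of renderer-side shortcuts for switching environments. The key bindings and the switchEnvironment callback are assumptions to be validated with users:

```typescript
// Illustrative sketch: Ctrl/Cmd+1..3 switch between configured environments.
// The bindings and the switchEnvironment callback are assumptions, not shipped behaviour.
const ENVIRONMENT_SHORTCUTS: Record<string, string> = {
  '1': 'Local',
  '2': 'Dev',
  '3': 'QA',
};

export function registerEnvironmentShortcuts(
  switchEnvironment: (name: string) => void,
): void {
  window.addEventListener('keydown', (event: KeyboardEvent) => {
    if (!(event.ctrlKey || event.metaKey)) return;
    const target = ENVIRONMENT_SHORTCUTS[event.key];
    if (target) {
      event.preventDefault();
      switchEnvironment(target);
    }
  });
}
```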