Manual testing is dead!

Long live Manual testing

Jan Olbrich
Mobile Quality
6 min read · Dec 7, 2017


Manual testing is still the most practiced kind of quality assurance. Even though developers have created many kinds of automated testing, we haven't been able to get rid of it. In prior posts, we've looked at a lot of different types of automated testing: options to test small code units, the UI, the behavior, and even testing with chaos. Considering all these different ways, why should we still do manual testing?

Con

We all know reasons not to do manual testing. I guess the most frequently cited is time consumption. A person sitting and executing a test takes maybe 15 minutes on average. With 400 test cases, this equals 100 hours, or 2.5 workweeks (40h/week), for a single tester and a single pass. Repeat that for every device configuration and every release candidate, and a full regression quickly stretches into months. Due to this, we reduce the number of tests executed each time. This shrinks our test scope, and for the untested areas we pray that our developers stayed true to the Single Responsibility Principle.
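To put numbers on this, here is a back-of-the-envelope calculation. The 400 cases and 15 minutes per case are the assumptions from above; the 10 device configurations are a hypothetical multiplier I'm adding to show how a single pass grows into a full regression matrix:

```python
# Back-of-the-envelope estimate of manual regression effort.
CASES = 400            # number of manual test cases (assumption from the text)
MINUTES_PER_CASE = 15  # average execution time per case
HOURS_PER_WEEK = 40    # one tester's workweek
CONFIGURATIONS = 10    # hypothetical device/OS combinations to cover

single_pass_hours = CASES * MINUTES_PER_CASE / 60        # hours for one pass
single_pass_weeks = single_pass_hours / HOURS_PER_WEEK   # workweeks for one pass
full_matrix_weeks = single_pass_weeks * CONFIGURATIONS   # across all configurations

print(f"one pass: {single_pass_weeks} workweeks, "
      f"full matrix: {full_matrix_weeks} workweeks")
```

One pass alone is manageable; multiplied across configurations and repeated for every release candidate, it is not.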
If you have ever done manual testing, you probably know how boring and monotonous it is. Your mind starts to drift off, and in the end you lose precision. Small changes get overlooked because you are not fully committed to testing. It's okay, we are all the same ;) But this is a problem we need to be aware of.
I have one more reason… We want to release often. Ever wondered how we can do that when there are months of testing before every release? Needing two months for the release process results in at most 6 releases per year, which isn't a lot. So manual testing is often discarded entirely.

Pro

Okay, since we've discarded manual testing, why am I writing about pros? It's actually quite simple. We've automated all our testing and it takes 2 hours in total to complete (let's ignore chaos testing for now). We know we implemented all the specifications correctly since we've used BDD. And still, after the release, there are reports of unhappy users. How can this be?

When all the tests are automated, manual testing transforms into looking for entirely unexpected behavior. This can be bugs, but also imperfect UX. You can find ways to improve your app since you don’t have to look out for specific bugs. At the same time, you ARE testing your app. Developers and QA can only create test cases for possible problems and expected behavior. There is a reason it is called “unexpected”. So with manual testing, you can find bugs, which no one ever imagined happening.

One more reason for manual tests exists: you can introduce them to any kind of software. Even large legacy systems can have manual tests. And don't forget, manual tests are better than no tests at all. This shouldn't make you shy away from automated testing, but as an intermediate step it's an option you should use.

Process

In the most basic form of manual testing, we have a person executing test cases according to a test plan. Once all test cases have been executed, a report can be created which describes the state of the tested software. Let's have a look at all the elements and how they fit together.

  • Tester: the person executing test cases and following a test plan
  • Test Manager: creating test plans and test cases
  • Test Case: describing every step and expected behavior of a test
  • Test Plan: contains all tests that need to be executed to fulfill the requirements
  • Test Case Management: a system to manage test cases

All of these elements make up classic manual testing. Often we can access metrics to see the current state of our release. At the same time, these metrics can be used when creating the next Test Plan.
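As a sketch of such a metric, a pass rate and an execution-progress figure could be derived from the recorded results like this (a toy computation of my own, not any particular tool's formula):

```python
# Toy release metric derived from test-case results.
# Each entry is True (pass), False (fail), or None (not executed yet).
results = [True, True, False, True, None, None]

executed = [r for r in results if r is not None]
pass_rate = sum(executed) / len(executed)   # passed cases / executed cases
progress = len(executed) / len(results)     # executed cases / planned cases

print(f"pass rate: {pass_rate:.0%}, progress: {progress:.0%}")
```

Numbers like these feed directly into the next test plan: a low pass rate in one area suggests concentrating test cases there.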

Test Case

Test cases are not that easy to create. There are entire blog posts on how to create them. But let's do a crash course: which information is actually needed, and what for?

Looking at a test case, we first need a description: a summary of what the test is doing. Furthermore, we need a result: what do we expect this test case to do? With these two in place, we are still missing instructions on how to achieve that result. These are step-by-step guides navigating you through the application, starting from the beginning.

Don’t write instructions depending on other test cases!

Often you will need data (such as credentials for a login), so this also needs a place in our test case. And in the end, we need to be able to document our result, especially if it differs from the expected one. Even if there is a difference, the test can still pass (maybe a feature changed the output slightly). So a pass/fail verdict would be nice too.
So the minimal components of a test case are these:

  • summary
  • expected result
  • step-by-step instruction
  • test data
  • actual result
  • Pass/Fail
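A minimal sketch of these components as a data structure (the field names and the sample login case are my own illustration, not taken from any particular test management tool):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TestCase:
    """Minimal components of a manual test case."""
    summary: str                                     # what the test is about
    expected_result: str                             # what should happen
    steps: list[str] = field(default_factory=list)   # step-by-step instructions
    test_data: dict = field(default_factory=dict)    # e.g. login credentials
    actual_result: Optional[str] = None              # filled in during execution
    passed: Optional[bool] = None                    # None until executed

# Hypothetical example case:
login = TestCase(
    summary="User can log in with valid credentials",
    expected_result="The home screen is shown",
    steps=["Launch the app", "Enter the credentials", "Tap 'Log in'"],
    test_data={"user": "test@example.com", "password": "hunter2"},
)
```

Note that `actual_result` and `passed` stay empty until a tester executes the case; everything else is authored up front by the test manager.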

This is all nice, but we can improve on it. Let's add some metrics. How long do we expect this test to run, and how long did it actually take? How much impact will it have if it fails? Which state is the test currently in? Sometimes tests get automated; this would be nice to track too.

This is what comes to mind right away:

  • id for management
  • expected runtime
  • actual runtime
  • severity
  • state (blocked, running, done, open, etc.)
  • automation state (automated, manual)
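These management fields could be sketched alongside the test case like this (again, the names and enum values are illustrative, echoing the states listed above):

```python
from dataclasses import dataclass
from enum import Enum

class State(Enum):
    OPEN = "open"
    RUNNING = "running"
    BLOCKED = "blocked"
    DONE = "done"

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class TestCaseMeta:
    """Management metadata attached to a test case."""
    case_id: str                          # id for the test case management system
    expected_runtime_min: int             # expected runtime in minutes
    actual_runtime_min: int = 0           # measured runtime in minutes
    severity: Severity = Severity.MEDIUM  # impact if this test fails
    state: State = State.OPEN             # blocked, running, done, open, ...
    automated: bool = False               # automation state: automated vs. manual

meta = TestCaseMeta(case_id="TC-042", expected_runtime_min=15)
```

Comparing `expected_runtime_min` with `actual_runtime_min` across many cases is exactly the kind of metric that helps when planning the next test run.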

With all this information, we can derive some rules for test cases:
1. They need to be simple.
2. They need to be written for the end user.
3. Do not repeat.
4. Do not assume.
5. They need to be identifiable.

Following these rules already improves our test cases.

Crowd Testing

Lately, a different kind of testing has been introduced. With the internet providing the means to reach and manage a large number of people, businesses emerged that focus on distributing tasks to ordinary users. Crowd testing is one of these: your app is distributed to a certain number of users. You can define criteria by which these users are selected, and they will test your app. For every bug found, they receive some kind of prize money; this is their incentive to test for you.
This type of testing can be quite useful, as the crowd can be matched to your app's target group. They know how to crash apps and what to test, and at the same time you get access to quite a large device pool. Sadly, this doesn't always help to pinpoint bugs. For that, precise instructions to recreate the bug are needed, and in my experience these are lacking more often than not.

Explorative Testing

Having looked into classic manual testing, we should also look into one more type: explorative testing.
With the rise of agile methodologies it is gaining popularity: the dull, repetitive checks are done by machines, and human intelligence can be applied within our testing. For this, different types of tours have been created. I will only describe a few of them, but you can look up many more at: List of tours.

The Back Alley Tour

We tend to use apps the same way. A lot of features are used daily and some of them never. So testing focuses on the often used features and disregards the rest. In this tour, the tester is asked to use all those features used maybe once a year. If you use feature tracking, there is your guide.

The Super Model Tour

What is important in our world? Looks! Don't work through the features; instead just open the app and do something. Check what the first impression is, not how the feature works. Does it look nice? Is it easy? With this tour, you can improve your UX quite a bit.

The Guidebook Tour

Ever been a tourist? You've probably used a guidebook to see all the interesting places. For us in software testing, the guidebook is the manual. Does the manual help or confuse you? I tend to say that if you have to write it down, your UX is wrong, but sometimes a manual is needed. This tour checks whether it is still up to date and correct.

Conclusion

We’ve looked at manual testing and realized it’s use cases. You should never disregard any method if it can improve your quality. Sometimes it has high requirements, but it in the long run even these will be worth it. Even when all your test cases are automated, running a tour will help. On the other hand, I have my reservations about crowd testing. Often companies tend to specify crowds as ~12 people and in my opinion, this is not a crowd. You should consider at least testers in the hundreds, otherwise, you’ll probably just get a few flukes.


Jan Olbrich
iOS developer focused on quality and continuous delivery