Simply put, model-based testing (MBT) is a technique in which we define an abstract model that describes the software's behaviour and then use that model to test the real software.
An example of such a model is below:
This model describes a login functionality. The orange blocks show the states of the program, and the arrows show the actions the user can perform.
So, instead of test cases with steps (open the window, input credentials, check a message), we have a diagram (a model).
A model is a sort of specification that describes the behaviour we expect from the software. The software may not even exist before the model is created; indeed, a model can serve as a requirement for the software.
Then, when the software is ready, we take our model and follow it, checking whether the app does what we expect according to the model.
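To make this concrete, here is a minimal sketch in Python of how such a model can be expressed as plain data and used as an executable specification. The state and action names are my own illustrative assumptions, not taken from any particular tool:

```python
# A login model as plain data: state -> {action: expected next state}.
# State and action names here are hypothetical examples.
MODEL = {
    "LoginPage": {
        "enter_valid_credentials": "HomePage",
        "enter_wrong_credentials": "ErrorMessage",
    },
    "ErrorMessage": {"close_message": "LoginPage"},
    "HomePage": {"log_out": "LoginPage"},
}

def check_transition(model, state, action, observed_state):
    """Does the state the app ended up in match what the model expects?"""
    return model[state].get(action) == observed_state

# The app showed HomePage after valid credentials -- matches the model:
print(check_transition(MODEL, "LoginPage", "enter_valid_credentials", "HomePage"))
```

A real MBT tool such as GraphWalker keeps the model in a graph file and drives the app for you; this dict only illustrates the idea of "model as specification".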
How it can help in testing
Instead of many test cases full of text, we have a visualisation of the software's behaviour. This helps us understand the functionality faster and better.
Once again, here is what a model of the login functionality looks like:
If you try to rewrite the model above as regular test cases, you will end up with something like this:
Some details are omitted; the point is just to compare the diagram with the text.
Below is an example of what a bunch of test cases would look like. Pretty boring, if you ask me. :)
Many pages of detailed and repetitive steps can be replaced with a few diagrams.
Keep tests consistent
When we write test cases, we may repeat some steps because we test the same functionality in different ways (good scenario, bad scenario, alternative paths, etc.).
In the login functionality, we have the same first steps: open the login page and then input some credentials.
But what happens if our functionality gets a new step, for example an "allow cookies" pop-up?
This is what we need to do in textual test cases, inserting this step into each of them:
This is the part of writing test cases that I hate most in testing. I almost dropped this article when I realised I would need to copy-paste these steps about cookies.
And this is how it looks in a model:
Just one block is added, and everything is clear!
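The same point can be sketched in code (Python, with hypothetical state and action names): the cookies pop-up is a single new entry in the model, whereas the textual test cases would each need editing:

```python
# A tiny hypothetical login model: state -> {action: next state}.
MODEL = {
    "LoginPage": {"input_credentials": "HomePage"},
    "HomePage": {"log_out": "LoginPage"},
}
START = "LoginPage"

# One change: insert a CookiesPopup block in front of the login page.
MODEL["CookiesPopup"] = {"allow_cookies": "LoginPage"}
START = "CookiesPopup"

# Every scenario that walks the model from START now passes through
# the pop-up automatically -- no individual test case needs editing.
print(START, "->", MODEL[START])
```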
In more complex test scenarios we may even forget that similar steps appear in several places, because they are not close to each other and escape our attention. We update a step in one place but miss it in another, which leads to incorrect and inconsistent test cases.
If we try to visualise our regular test cases from above, we roughly get the following picture:
We go from A to B: prerequisite, action(s), result(s), some sort of linear scenario. A very narrow, one-dimensional behaviour, fit for a robot.
But how is our software actually used? Rather like this:
A user opens the app, starts doing something, makes a mistake, retypes, checks one thing, then another, takes a tea break, receives an urgent task, drops one scenario and jumps to another, works with datasets of any size, with various input parameters and their combinations, and so on.
“Combinations” is the keyword here. Below is a chart showing what percentage of bugs can be detected by a single-parameter test and by combinations of parameters (pairs, triples, etc.):
A single-parameter test detects on average 20–70% of bugs, while two parameters already detect 60–95%.
The two-parameter combination is called pairwise testing, which is a subset of MBT.
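As a rough illustration of why pairs are cheap to cover, here is a small greedy pairwise sketch in Python (the parameter names and values are made up for the example). It covers every value pair of every parameter pair with fewer tests than the exhaustive product; note that a simple greedy picker like this is not optimal, and dedicated tools build smaller covering arrays:

```python
from itertools import combinations, product

# Hypothetical login-related parameters:
params = {
    "browser": ["Chrome", "Firefox"],
    "user": ["admin", "guest", "disabled"],
    "remember_me": ["on", "off"],
}
names = list(params)

def pairs_of(combo):
    """All (parameter, value) pairs this test combination covers."""
    assignment = dict(zip(names, combo))
    return {((a, assignment[a]), (b, assignment[b]))
            for a, b in combinations(names, 2)}

all_combos = list(product(*params.values()))  # 2*3*2 = 12 exhaustive tests

# Every value pair of every parameter pair must appear in some test:
uncovered = set()
for combo in all_combos:
    uncovered |= pairs_of(combo)

tests = []
while uncovered:
    # Greedily take the combination covering the most still-uncovered pairs.
    best = max(all_combos, key=lambda c: len(pairs_of(c) & uncovered))
    tests.append(best)
    uncovered -= pairs_of(best)

print(len(tests), "tests instead of", len(all_combos))
```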
With regular test cases, we would have to write all of them explicitly: log in, log out, log in with a mistake, log in with another user, etc.
With MBT, we can ask a test tool to generate test cases and sequences of steps. Below I ask the test tool to walk over the model randomly for 30 seconds.
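A random walk like that can be sketched in a few lines of Python. The model and names here are hypothetical, and I use a step budget instead of a 30-second budget; a real tool such as GraphWalker can also stop on time or on coverage:

```python
import random

# Hypothetical login model: state -> {action: next state}.
MODEL = {
    "LoginPage": {
        "enter_valid_credentials": "HomePage",
        "enter_wrong_credentials": "ErrorMessage",
    },
    "ErrorMessage": {"retry": "LoginPage"},
    "HomePage": {"log_out": "LoginPage"},
}

def random_walk(model, start, steps, seed=None):
    """Walk the model randomly; return the sequence of (action, state)."""
    rng = random.Random(seed)
    state, path = start, []
    for _ in range(steps):
        action = rng.choice(sorted(model[state]))  # pick any outgoing edge
        state = model[state][action]
        path.append((action, state))
    return path

for action, state in random_walk(MODEL, "LoginPage", 10, seed=1):
    print(f"{action} -> {state}")
```

Each generated step sequence is a valid path through the model, so it can be replayed against the real app and checked state by state.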
When to use MBT
When does it make sense to use MBT? Since we are executing combinations of simple small scenarios, it makes sense to have these small scenarios already stable and working. Otherwise, you will not gain much benefit.
The second point is that it makes sense to apply MBT to critical functionality. The effort in MBT is higher than in a regular test approach, and if you find a supa-dupa bug in a low-priority feature, it is not very likely that your big effort will be appreciated proportionally (unless you test aero/medical/traffic and other safety-critical apps).
However, later, when this technique becomes natural and easy, you may completely switch to it and apply it as your main approach.
Simple linear scenarios and tests are good as an initial step in software testing: they answer the question "does a small part of the functionality work", they have a clear step-by-step structure, and everyone is used to them.
However, this approach does not scale to big and complex systems. The number of such scenarios grows exponentially once you start multiplying all the simple scenarios in one feature by those in a neighbouring one, and so does the number of people needed to support, execute, report, and retest them. Automating such scenarios is also a pain, even with code reuse.
There are many more issues MBT can help with, such as understanding the functionality itself, improving communication, saving testing time, staying consistent with requirements, etc.
The following resources may help with that:
GraphWalker/graphwalker-project: the repository for the model-based testing tool GraphWalker.