More to testing than AI and ML can solve
Artificial Intelligence (AI) and Machine Learning (ML) can perhaps solve some testing challenges, but not all of testing. The testing vs. checking debate, and all the shift-left of checking, have revealed that some of testing is about critical thinking while some of it is about checking and asserting. These are often two different mindsets, and often it also depends on the complexity of the problem…
Identifying app states is a simple problem of confirming labels. Navigating a road from A to B following a set of rules is another such problem. But the challenge is that, while you can tell your Robotic Process Automation (RPA) to navigate one path, it cannot tell what else might be right or wrong on the pages. ML, AI and RPA all need an existing system to learn from. And sometimes our activities are so chaotic and novel that all we can do is experiment… or run around fire-fighting one problem at a time.
The Cynefin framework by @snowded enables us to talk about different types of complexity and different approaches to sense-making. Similarly, Simon Wardley's model of Pioneers, Settlers and Town Planners deals with the journey from novel/genesis through best practice to utility. All models are wrong, but these two are very useful for modelling where we are, what to do, and how to make reasonable sense of it all.
The testing problems I deal with include how to successfully add functionality to a legacy code base, how to deliver minimal successful features, how to test COTS, and how to test in IT operations. It's all about figuring out how to learn enough business-valuable information about the system within the context of the project.
The challenge is that business value is different in each context. Deriving the explicit and implicit business values is mostly a human communication activity. And these human communication activities of testing are wicked problems.