Reframing a product context for your Automated Tests
Don’t worry, this is not all technical detail. I am not going to tell you how to write code or how to build your architecture and framework. Focus on the monument. Then focus on the same monument in its natural habitat, with the scorched earth leading to it. Focus on core ideas worth thinking about.
There is an infinite amount of context, competence, complexity and configuration in any serious software product. What we want to build on behalf of our customer is a crystal-clear understanding of where the product is going wrong. We anticipate how things are going to fail. We use our attention to detail. We are like psychics. We dream expected results.
When designing any tool or process that protects us from production incidents, I like to reframe the problem multiple times by asking questions. This exercise helps you think about what you really need to automate, how you need to automate it, and what precautions you want to take before something goes wrong.
This is my funky list of reframes and paradoxes.
Can we complete all phases of the flow? When a change is made to the software, how frequently are we able to validate that all the phases work seamlessly together and that the end result is top-notch quality? What can we do to help?
Can we complete all phases of the flow, also using different personas? When I pay for a different, higher service level with additional bells and whistles (it comes with an avatar), options and customization, is the added cost of my purchase justified by better quality and a better service level than the normal user gets? Do we have empathy for our most valued customers?
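To make the persona reframe concrete, here is a minimal pytest sketch. Everything in it is a hypothetical stand-in: the Persona fields, the personas themselves and the run_purchase_flow helper are placeholders for your own product model, not anyone's real API.

```python
# Minimal pytest sketch: every persona walks the exact same flow.
# All names here are hypothetical placeholders for your own product model.
from dataclasses import dataclass

import pytest


@dataclass
class Persona:
    name: str
    tier: str          # e.g. "basic" or "premium"
    has_avatar: bool   # the premium bells and whistles


PERSONAS = [
    Persona(name="normal user", tier="basic", has_avatar=False),
    Persona(name="valued customer", tier="premium", has_avatar=True),
]


def run_purchase_flow(persona: Persona) -> dict:
    # Fake in-memory stand-in so the sketch runs; swap in your real driver
    # that actually exercises every phase of the flow as this persona.
    return {
        "completed_all_phases": True,
        "avatar_rendered": persona.has_avatar,
    }


@pytest.mark.parametrize("persona", PERSONAS, ids=lambda p: p.name)
def test_all_phases_complete_for_every_persona(persona):
    outcome = run_purchase_flow(persona)
    # Every persona must get through every phase of the flow.
    assert outcome["completed_all_phases"]
    # The premium persona pays more, so the extras must actually be there.
    if persona.tier == "premium":
        assert outcome["avatar_rendered"]
```

The point of the parametrization is that the premium persona does not get a separate, lovingly hand-crafted test that rots in a corner; every persona walks the same flow, and only the expectations differ.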
Can we complete all monetary transactions? We bill the correct amount for the charge. We issue a correct receipt. We grant the correct subscription period. We offer the customer the option to renew their subscription. We offer the customer the option to cancel their subscription. We store their information in a safe and secure way. Do we comply with the law?
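That paragraph is already a checklist, so here is a sketch that turns it into one assertion per promise. The Charge shape and the charge_subscription helper are hypothetical; map them onto your real billing API.

```python
# Sketch: the monetary checklist as one assertion per promise.
# Charge and charge_subscription are hypothetical stand-ins.
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class Charge:
    amount_cents: int
    receipt_id: str | None
    period_start: date
    period_end: date
    can_renew: bool
    can_cancel: bool
    card_stored_in_plaintext: bool


def charge_subscription(amount_cents: int) -> Charge:
    # Fake stand-in so the sketch runs; replace with a real billing call.
    today = date.today()
    return Charge(
        amount_cents=amount_cents,
        receipt_id="R-0001",
        period_start=today,
        period_end=today + timedelta(days=30),
        can_renew=True,
        can_cancel=True,
        card_stored_in_plaintext=False,
    )


def test_monetary_transaction_keeps_every_promise():
    charge = charge_subscription(amount_cents=999)
    assert charge.amount_cents == 999                            # correct billing
    assert charge.receipt_id is not None                         # correct receipt
    assert (charge.period_end - charge.period_start).days == 30  # correct period
    assert charge.can_renew and charge.can_cancel                # renew and cancel offered
    assert not charge.card_stored_in_plaintext                   # safe and secure storage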
Can we complete all monetary transactions with 99.99999% reliability? While our system has 123 million users logged on, can we still add a customer? We should be testing the basics in production at peak times, because that is when our customers are there. Do we act our age?
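As a rough illustration of testing the basics in production, here is a sketch of a read-only smoke check meant to be scheduled at peak hours. The health URL and the signup_available field are placeholders I made up; the one real rule in the sketch is that a production probe must never mutate customer data.

```python
# Sketch: a read-only production smoke check, scheduled at peak hours.
# The URL and the response field are hypothetical placeholders.
import requests

PROD_HEALTH_URL = "https://example.com/api/health"  # placeholder


def test_production_still_accepts_customers():
    # Read-only probe: production smoke tests must never mutate real data.
    response = requests.get(PROD_HEALTH_URL, timeout=5)
    assert response.status_code == 200
    assert response.json().get("signup_available") is True
```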
Can we render all views correctly across different technologies? Nothing bothers you more than a screen where you cannot see half of the content, or an input field that goes out of sight when you start typing in it. Do we know our customer, instead of the global statistics?
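For the rendering reframe, a sketch using Playwright's Python API: the same page is walked through a few device sizes, and we check that an input field is still visible and still on-screen after typing. The URL and the #search selector are placeholders; point them at the view your customer actually uses.

```python
# Sketch with Playwright (pip install pytest-playwright): the same page
# across a few device sizes. URL and selector are placeholders.
import pytest
from playwright.sync_api import sync_playwright

VIEWPORTS = {
    "phone": {"width": 375, "height": 667},
    "tablet": {"width": 768, "height": 1024},
    "desktop": {"width": 1440, "height": 900},
}


@pytest.mark.parametrize("device", VIEWPORTS)
def test_input_field_stays_usable(device):
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_context(viewport=VIEWPORTS[device]).new_page()
        page.goto("https://example.com/")  # placeholder URL
        field = page.locator("#search")    # placeholder selector
        field.fill("hello")
        # The field must be visible and must not have slipped off-screen.
        assert field.is_visible()
        box = field.bounding_box()
        assert box is not None
        assert box["y"] + box["height"] <= VIEWPORTS[device]["height"]
        browser.close()
```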
If we see something suspicious, what is our instinct, what are our next activities, and what information should we provide for discussion so that everyone instantly understands how much trouble we are in? Do we have the operational readiness to prevent churn, on both the customer side and the employee side?
If we see something suspicious, who are the right single points of contact to resolve the problem before it delays the software release? Do we take things seriously?
How much confidence do we want before we allow a million people to use our software? Do we care about our reputation?
How much confidence do we need before we allow a billion people to use our software? Do we care about not going bankrupt next month?
How many issues can we allow in software that is in production, and which stakeholders have to sign off for each individual issue to be allowed to exist? Do we care more about the statistics or about the actual impact?
Is our automated coverage and behavior recorded and observed by anyone who understands the product offering? Will the automated results be audited by test engineers? Do we understand what we just delivered? Do we really know-know what we are getting ourselves into?
The author wishes to significantly reduce the frustration and the number of disappointments experienced by millions of people in everyday life. Why you are reaching for a goal is just as important as how you get there.
Originally published at https://www.linkedin.com.