Why Is Framing the Correct Hypothesis Critical for AI?
AI risks do not necessarily start with the AI system or application itself. Usually, they creep in well before development starts or deployment begins.
Artificial Intelligence is a powerful technology, and it inherently carries high risk: it is a source of significant new threats that we must manage. This risk is not concentrated in any single part of an AI system; it is spread across all of its stages and components, from the solution design methodology and the solution itself, through development and deployment, to ongoing use.
Preventing each of these risks means evaluating each component in its own right and seeking avenues of prevention. And even before we discuss prevention, we must identify the risks.
It starts with the correct hypothesis
When you want to verify whether you are working on the right solution, hypothesis testing becomes critically important.
A hypothesis can be defined as a proposed explanation for something. It can also be an assumption or a provisional idea, an educated guess that needs further validation.
Unlike a belief, which needs no validation or evaluation, a good hypothesis is always verifiable or testable: it can be shown to be true or false.
From a scientific perspective, a hypothesis must be falsifiable: there must be conditions or scenarios under which it could be shown to be false.
Most importantly, you must frame the hypothesis as a first step, before testing it or thinking about a solution. If you skip this step, you might end up retrofitting a solution that never delivers the results you expect.
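To make this concrete, here is a minimal, purely hypothetical sketch in Python of what "frame first, then test" looks like in practice. All numbers, names, and thresholds below are invented for illustration; the point is only that the hypothesis is written down as a falsifiable statement, and tested against data, before any solution is discussed.

```python
# Hypothetical example: frame a falsifiable hypothesis, then test it with data.
# H0 (null): the average incident resolution time is 30 minutes or less.
# H1:        the average incident resolution time exceeds 30 minutes.
# Only if the data lets us reject H0 do we start discussing solutions.

import numpy as np
from scipy import stats

# Invented sample of resolution times (in minutes) from support records.
resolution_times = np.array([28, 35, 41, 30, 26, 44, 38, 33, 29, 47, 36, 40])

TARGET_MINUTES = 30.0
ALPHA = 0.05  # significance level chosen up front, before seeing results

# One-sample t-test against the target; alternative="greater" tests H1: mean > 30.
t_stat, p_value = stats.ttest_1samp(resolution_times, TARGET_MINUTES,
                                    alternative="greater")

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < ALPHA:
    print("Reject H0: resolution times likely exceed the target; investigate why.")
else:
    print("Cannot reject H0: no evidence yet of a problem worth solving.")
```

The specific test does not matter; what matters is that "is there actually a problem?" is answered with data before "which solution should we build?" is ever asked.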
If you start with a flawed hypothesis, you will most likely select the wrong solution, and several risks will seep in. The risks associated with an incorrect assumption include financial loss (usually revenue leakage or operational losses), reduced customer loyalty and increased frustration, damage to brand reputation, and several others.
You may be wondering, “Why is it so important to form a correct hypothesis?” Let us take an example.
A curious case of Acme Solutions
Acme Solutions (a hypothetical company) is experiencing significant growth in its products and services. Its customer base is continuously increasing, and that growth is driving an increasing number of calls to its incident management support centre.
To keep up with the growth, Acme Solutions would need to increase headcount in its incident management support centre. However, acknowledging human limitations, Acme is keen on implementing an AI chatbot instead. By doing so, it expects to provide adequate support for customer-reported incidents without additional headcount: customers will be able to interact with the chatbot, report their problems, and perhaps even get instant solutions.
Should Acme Solutions embark on the AI chatbot project?
If the project is completed successfully, will it actually solve Acme’s problem?
If you think about it carefully, you will notice a flaw in Acme’s thought process, in its hypothesis. The hypothesis is: if we implement an AI chatbot, we can handle the increased number of customer-reported incidents without increasing headcount.
What is wrong here?
The very first problem here is “retrofit thinking”. Acme Solutions is starting with a solution in mind and then trying to fit it to its current issue of a growing customer base and rising incident reports. It framed its hypothesis after it had already chosen the solution, which is the fundamental mistake in hypothesis framing.
Instead, if Acme focuses on the problem first, the solution may turn out to be different. If a growing customer base is leading to a growing number of incident reports, then there is a fundamental flaw in the product or service itself: something inherent in it is causing the incidents in the first place, and the best way to address that is to eliminate the cause of the incidents.
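Before committing to either path, Acme could put these competing explanations to the test with its own support data. The sketch below is purely illustrative and uses invented monthly figures; it asks whether incidents per customer are trending upward, which would point to a product or service flaw rather than a pure staffing problem.

```python
# Hypothetical check: are incidents growing faster than the customer base?
# H0: incidents per customer are flat over time (growth is purely volume-driven).
# H1: incidents per customer are rising over time (something in the product is degrading).

import numpy as np
from scipy import stats

# Invented monthly figures from Acme's (hypothetical) support records.
months    = np.arange(1, 13)
customers = np.array([10_000, 10_800, 11_700, 12_500, 13_600, 14_400,
                      15_500, 16_300, 17_400, 18_200, 19_500, 20_400])
incidents = np.array([410, 455, 512, 570, 648, 705,
                      790, 855, 940, 1_010, 1_120, 1_205])

incident_rate = incidents / customers  # incidents per customer, per month

# Simple linear regression of the incident rate against time.
result = stats.linregress(months, incident_rate)

print(f"slope = {result.slope:.6f} per month, p = {result.pvalue:.4f}")
if result.pvalue < 0.05 and result.slope > 0:
    print("Incident rate is rising: the product or service itself deserves attention first.")
else:
    print("Incident rate looks flat: the load may simply scale with customer growth.")
```

Either way, the decision about whether to hire, fix the product, or build a chatbot then follows from the evidence rather than from a solution chosen in advance.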
Once Acme realises that fixing the flaw in the product or service would eliminate or sharply reduce the number of reported incidents, the need for additional headcount is likely to go away. And when that happens, the need for an AI chatbot goes away too.
In this case, if Acme were to go ahead with the AI chatbot, its incident count would not drop; Acme would simply be able to respond to more customers reporting incidents. And no matter how intelligent and efficient the chatbot is, by the time a customer reports an incident, they are already unhappy and frustrated with Acme’s product or service. Acme’s brand loyalty has already taken a hit. Solving the customer’s problem will not restore it; at best, it will stop it from getting worse.
Conclusion
You will notice that AI risks do not necessarily start with the AI system or application itself. They will often have crept in well before solution building starts or deployment begins.
Therefore, correctly framing the hypothesis, validating it with data, and supporting it with statistical analysis is essential. It should be your very first checkpoint when evaluating the risks of an AI solution.
About the Author: I am many things packed inside one person: a serial entrepreneur, an award-winning published author, a prolific keynote speaker, a savvy business advisor, and an intense spiritual seeker. I write boldly, talk deeply, and mentor startups empathetically.
If you liked this article, subscribe to my newsletter for more such articles and connect with me on LinkedIn.