Business applications of the Scientific Method

Aman Mundra
5 min read · Dec 2, 2019


For the academic in the physical sciences, a single process has been ingrained in the mind since the dawn of their studies.

The scientific method.

It is introduced to students as the first step towards rational inquiry: a logical thought process behind reasoning, learning, and ultimately understanding.

The tenets of the scientific method are introduced to most students during their formative years of elementary education, reinforced to the extent that its steps become second nature to the academic.

Brainstorm an underlying question; form a premise or build a hypothesis that explains an observed phenomenon. Create a test plan, ideally altering a single condition, or independent variable, and in executing it, gauge the effects the change elicits on other data points, termed dependent variables. Review and analyze the results; iterate as needed, until insights are garnered that support or negate the initial premise.

As product managers, far too often we, as a discipline, forget the intrinsic role such a process plays in our responsibilities. Whether ideating a new feature set, understanding client data, or employing A|B testing to garner user feedback, all of these activities rest squarely on the scientific method.

Take the latter use case as an example. In partnering with a digital communications company, a financial institution aimed to build a more engaging monthly statement for end consumers. The underlying value proposition from the communications company posited that higher user engagement would result in both a faster payment cycle and increased consumer upsell opportunities as a byproduct.

With KPIs established, and a clear mission in hand, designers dashed off to create varying versions of an engaging UI for email communications; HTML developers began their integration of the templates; back-end engineers connected and exposed the valuable consumer engagement data that would ultimately be transformed into measurable KPIs.

Three months later, open rates and click rates were up; consumer engagement had increased by 12%, and sales ultimately rose. Happy client, successful project. Case closed.

In reviewing the digital communication firm’s approach against the scientific method, a few missteps come to mind that, if corrected, would better serve their business:

(1) Test isolated independent variables

The scientific method is rigid. An experiment should intentionally manipulate only a single entity, termed the ‘independent variable’. The resulting data points that can change as a result of tweaking this independent variable are termed ‘dependent variables’. This exclusivity in manipulating only a single, independent variable matters because any resulting effects can then be attributed to that single entity’s change.

In our real-world example, the designers completely transformed the financial institution’s communication. Subject lines were altered, color schemas were changed, responsive HTML was employed, verbiage was modified, and content was revised. Amidst so many far-reaching changes, how can one understand which elements of change were critical in driving the increased engagement? Perhaps a single component was most responsible for the improved results. Or maybe certain elements of the design actually detracted from the desired end goal. Without isolating these changes as independent variables through a series of iterative design experiments, it is impossible to discern beneficial changes from harmful ones.
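To make the point concrete, here is a minimal sketch of what a single-variable test cell might look like in code. The template fields, subject lines, and user IDs are hypothetical and purely illustrative; the point is that exactly one element, the subject line, differs between variants while everything else is held constant.

```python
import hashlib

# Hypothetical email template represented as a plain dict. Only the subject
# line (the independent variable) changes between variants; every other
# element is held fixed so any lift in open rate can be attributed to it.
BASE_TEMPLATE = {
    "subject": "Your monthly statement is ready",
    "color_schema": "classic-blue",
    "layout": "legacy-html",
    "copy": "standard",
}

VARIANT_SUBJECTS = {
    "A": "Your monthly statement is ready",            # unchanged baseline wording
    "B": "3 insights from your statement this month",  # the one change under test
}

def assign_variant(user_id: str) -> str:
    """Deterministically split users into the two test cells."""
    digest = int(hashlib.md5(user_id.encode()).hexdigest(), 16)
    return "A" if digest % 2 == 0 else "B"

def build_email(user_id: str) -> dict:
    variant = assign_variant(user_id)
    email = dict(BASE_TEMPLATE)                   # everything else stays fixed
    email["subject"] = VARIANT_SUBJECTS[variant]  # only the independent variable moves
    email["variant"] = variant                    # tag for later measurement
    return email

if __name__ == "__main__":
    print(build_email("acct-1001"))
    print(build_email("acct-1002"))
```

Once this single change has been measured, the next iteration can vary the color schema, then the copy, and so on, building up a picture of which element actually drives engagement.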

(2) Include control entities

Every project needs a baseline against which it can be compared. One of the test cases in a properly designed experiment reproduces previously experienced results, ensuring there is a proper baseline against which the alterations can be measured. If a major retailer of electronics goods changed its marketing strategy from 2007 to 2008, only to find consumer spending decreased significantly, were the results truly attributable to the change in marketing communications, or were they more a result of altered consumer spending habits brought on by changing economic conditions?

In the zeal to create an actionable campaign, the communications firm failed to maintain a control. Had they included the previous marketing communications in their distribution as another design in their A|B test, then confounding conditions, such as a more fortuitous economic landscape, could have been ruled out. Including a control subject ultimately strengthens any business’s thesis and underlying value proposition; in the absence of a control, it becomes difficult to discern the success of any initiative. Our communication firm’s mission statement, asserting that enhanced customer communications lead to an increased ROI for businesses, only succeeds when it has tangible, data-driven evidence that holds up in spite of all externalities.
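As a sketch of what that could look like, the snippet below adds a control cell that continues to receive the unchanged legacy statement, distributed alongside the new designs in the same period. The cell names, template identifiers, and traffic split are assumptions for illustration only.

```python
import hashlib

# Hypothetical test cells: the control keeps receiving the previous statement
# design, so the enhanced variants are measured against a baseline collected
# under the same economic conditions and in the same time window.
CELLS = {
    "control":   "legacy_statement_template",   # previous communication, unchanged
    "variant_a": "enhanced_statement_v1",
    "variant_b": "enhanced_statement_v2",
}
WEIGHTS = {"control": 0.34, "variant_a": 0.33, "variant_b": 0.33}

def assign_cell(user_id: str) -> str:
    """Deterministically map a user to a test cell via a weighted hash bucket."""
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100 / 100
    cumulative = 0.0
    for cell, weight in WEIGHTS.items():
        cumulative += weight
        if bucket < cumulative:
            return cell
    return "control"  # fallback for floating-point edge cases

if __name__ == "__main__":
    for uid in ["acct-1001", "acct-1002", "acct-1003"]:
        cell = assign_cell(uid)
        print(f"{uid} -> {cell} ({CELLS[cell]})")
```

Because every cell is exposed to the same market conditions at the same time, any difference between a variant and the control can be attributed to the design rather than to the broader economy.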

(3) Measure statistical significance

In the scientific method, one often establishes a null hypothesis, which, in its simplest form, is the antithesis of the prototypical hypothesis. For our communications firm, the null hypothesis would posit that enhanced communications have no effect on consumer engagement, payment cycles, and the other consumer-behavior-related KPIs of the project.

In reviewing the results of the project, we saw an uptick in the data metrics that comprise consumer engagement. But were these results statistically significant? That is to say, to what degree did the improved results differ from the normal year-over-year variance in consumer engagement? And to what quantitative degree does a deviation from the null hypothesis constitute validation of a hypothesis? In such use cases, a p-value, the probability of observing a result at least as extreme as the one measured if the null hypothesis were true, can help provide statistical validation.

In today’s business environment, there is a plethora of data, but a lack of personnel to validate meaningful outcomes with it. Applying basic statistical tests to validate outcomes becomes increasingly important; had our communications firm employed p-value tests to discern the degree to which their changes differed from typical variance in consumer behavior, they would have strengthened their underlying value proposition to their clients.
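As an illustration, the sketch below runs a two-proportion z-test on open rates for a control cell versus an enhanced variant, using only the standard library. The counts are invented for the example; the point is the mechanics of turning an observed lift into a p-value that can be compared against a chosen significance threshold (e.g. 0.05).

```python
import math

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Return (z, two-sided p-value) for the difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail probability
    return z, p_value

if __name__ == "__main__":
    # Illustrative numbers only: opens out of sends for control vs. enhanced variant.
    z, p = two_proportion_z_test(success_a=4200, n_a=20000,   # 21% open rate (control)
                                 success_b=4704, n_b=20000)   # ~23.5% open rate (variant)
    print(f"z = {z:.2f}, p = {p:.4f}")
    print("Reject the null hypothesis" if p < 0.05 else "Cannot reject the null hypothesis")
```

A result like this, reported alongside the raw lift, is what turns “engagement increased by 12%” into evidence that the increase is unlikely to be ordinary variance.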

Data-driven conclusions drive replicable models of success for businesses.

Assuming the steps above have been followed, a company should now have insights into discrete, actionable changes that can help drive desired outcomes. In the case of our communications firm, specific components of the enhanced communication should have been identified as individual drivers that contribute to increased engagement.

Having identified these isolated components, there is now empirical evidence to support why these changes drive business value. This has a large, cascading effect throughout the organization; the aspects of the design that were shown to truly increase engagement can be isolated and reused discretely. They can aid sales efforts with a more compelling CX journey; they can decrease the overhead and turnaround times of prototype design. For the communications company, there are now data-proven changes that can help drive a replicable model of success across existing and new client bases.

In today’s ever-evolving business landscape of increasingly process-oriented practices, the scientific method stands out as the rare piece of process that should be rigorously implemented throughout organizations.

Technology companies, as they evolve, adhere to strict Agile processes, meticulous steps of the SDLC, and, in many cases, process for process’s sake. But the scientific method is not this. It is a thought process that holds us accountable for gauging the success of our business hypotheses; it demands we rethink and redefine a premise until the data-driven conclusion is truly unmalleable, unalienable, and undeniable.

The scientific method represents the underpinnings of any successful product manager, even if it is not a mainstay of a traditional B-school curriculum. As a product manager, I am a student of science; and as a student of science, I am nothing without the learnings of the scientific method.
