Five Essential Steps to Measure Success — Successfully!

Applications of research design for Sales & Marketing

Erik Prinz Grunde
Published in Swedbank AI
Mar 26, 2020 · 12 min read


After an event that did not turn out the way you wanted, have you ever retrospectively mused to yourself: ‘if only I had worn my lucky sweater/jacket/underwear, I would have nailed that interview/presentation/first date’?

The above line of reasoning is an example of a mixture of at least three cognitive biases that most of us are occasionally guilty of, namely: 1) assigning unreasonably large significance to (probably) insignificant factors; 2) misjudging the causal effect that any one factor has on a given outcome; and 3) looking at events in isolation rather than as results of preceding events, and consequently misjudging the importance of those preceding events.

Most of us can probably relate on a personal level and admit to the occasional speciously drawn conclusion: as human beings, we like to see causal links where there are none, and we like to simplify explanations of our perceived reality. Furthermore, we all like to think that we are aware of these types of logical fallacies and narrow reasoning, and that, as a result, they are restricted to our personal reflections. They certainly do not belong in our professional life. If you think about it, however: is this truly the case? Can you remember examples in your line of work where similar patterns of thinking occur, or worse, have been institutionalized within your organization?

If you have worked in an analytical position within Sales & Marketing for any considerable amount of time, chances are that you have come across statements similar to the following examples:

· The sales reps’ onsite presence is essential to the inflow of new contracts.

· We should focus our marketing efforts for this product on targeting young males, because that is our most profitable customer segment.

· Our main competitor has lowered the price for their product, and we can observe a substantial outflow of customers to them. Therefore, we must lower our price in order to remain competitive.

And the list goes on.

In isolation, any of the above statements can be grounded in a solid understanding of the business’ operations and on the surface seem like correct reasoning: x, therefore y. Scratch that very same surface, however, and questions start to arise. Exactly how is the sales reps’ presence attributed to sales? Would young males buy your product regardless of your marketing efforts? Is price the main driver of your customer outflow, and is the pattern the same across your whole customer base, and across space and time?

The recurring main issue with statements like the above is that they often have not controlled for alternative explanations to the causal narrative they inherently express, and in a way, this makes the statements no different from the initial example that gives full credit to your lucky sweater for an aced exam. One reason for the prevalence of such statements is the lack of a robust research design that would aim to verify or challenge the claims; basically, the methodology to support the conclusion(s) is missing. So, with this in mind, what can you do to rectify this sub-optimal state of affairs?

In this article, the first in a series discussing how to efficiently apply research design in a Sales & Marketing context, I examine the five initial (and essential!) steps that your organization should take to ensure that you will be on your way to successfully measuring success.

1. Ensure analytical capacity

Data assets, optimal utilization of these assets, and the inherent value of insights that can be derived from analyzing data have become critical for the modern company’s ability to succeed. Before embarking on practical applications of experimental design, good advice is therefore to first ensure that your organization has the analytical capacity in place for collecting, storing and analyzing data. This capacity applies across infrastructure, data quality, staffing, organizational setup, etc. While this step might seem obvious, it is arguably the fundamental key to guaranteeing success in your future measurement endeavors.

At Swedbank, the department ‘Analytics & AI’ was established in 2016 to ensure and streamline analytical capacity covering several areas, both across the company and in terms of widening competency. Today, the department is made up of a diverse, international team including data scientists, analysts, and APOs with a wide variety of backgrounds, competencies, and technological know-how. Since the department’s inception, we have promoted a data-driven mindset and supported the whole company in order to establish an analytical center of excellence. In tandem with implementing new ways of working, establishing ‘competence domains’, and conducting research within, for instance, anomaly detection, we have through joint efforts succeeded in creating internal ‘brand awareness’, and implicitly been able to change the culture of the organization towards being more data-driven.

This step is vital, and in the case of Analytics & AI we are still learning; there is always room for improvement. However, without the aforementioned efforts over the last few years, we would not have been able to realize the insights inherent in the data assets of one of the Nordics’ largest financial actors, as we have done and continue to do.

2. Define the ‘research question’ (or questions)

Following the above initial measure, many companies may start building dashboards, reports and other similar objects out of existing data. While interesting to look at and potentially useful for displaying information inherent in your data, this approach runs the risk of merely increasing the quantity and accessibility of descriptive data. Worse, interpretations of the figures might even diverge across the organization. This approach to data analytics tends to result in a reactive culture and a habit of creating reports just for the sake of creating them. Getting people to read the reports in an enlightened manner and, more importantly, changing their daily modus operandi based on the insights they provide is a different matter entirely.

One way to start addressing these problems is by letting stakeholders agree on what exactly they want to ask the data, i.e. defining a ‘research question’ (or questions). This approach provides several advantages: 1) it gives you an idea of prioritization in terms of what you as an organization want to find out; 2) it provides stakeholders with a sense of ownership of the analytics produced further down the line; 3) it establishes a clarity of purpose; 4) it guarantees an analytical approach to the data output rather than a descriptive one; and finally 5) it will guide analysts toward the choice of methodology to apply (see step 3 below).

If well executed, this approach can also be a precious time-saver in terms of identifying how to approach the analysis, and the process can furthermore support identification of previous research available internally, which can be used as a starting point for any new inquiry. Keep in mind, however, that it is not enough to just ask a question — the question ideally needs to be clear, concise and researchable. In essence, this translates into the question being formulated in a way that allows it to be broken down into measurable and operationalizable parts.

Discussing these ‘research questions’ together with relevant parties often leads to increased clarity in terms of scope, delimitation and operationalization. Consider making it an iterative process, in which you and your stakeholders aim to distill the fundamental components of the question and insist on further clarification until there is no room for alternative interpretations or misconceptions.

Many of the failure stories that I have witnessed in my career as an analyst come down to requests with a vaguely defined scope, unclear definitions, problematic measurements or operationalization, etc. Nowadays, I insist on the approach described above to minimize those risks, ensure future value, and maximize ROI in terms of the time the analytical team will spend working on the assignment in question.

3. Design your experiment

Once you have a clearly defined question (or questions) that you want to investigate, you can start designing the research necessary to answer that question. The research design should serve as a guideline in terms of what data to collect and how to analyze it, in other words, outlining how to answer your question. Different questions lend themselves to different types of research design, which is another reason why the previous step is so useful, even if it might seem out of place in a business setting.

There are several different types (and sub-types) of research designs, and different ways of grouping them. Some examples include comparative, longitudinal, case study, and experimental designs. These are in turn linked to specific methods of analysis, which can be quantitative (e.g. statistical analysis and modelling) or qualitative (usually focused on interviews, observational studies, etc.).

In terms of robustness and trustworthiness, experimental design is considered the benchmark for assessing causality. Traditionally associated with the scientific method and the natural sciences, many people probably associate the word ‘experiment’ with labs, white coats and controlled trials for measuring the efficacy of medical treatments. With the advent of big data and increased computing power, this method has seen a rapid increase in practical application in the social sciences as well, including for Sales & Marketing departments. I will therefore focus briefly on this particular type of research design here, since I believe this is where most companies can initially find a lot of value. However, let the record show that there are several other feasible options available.

Problems with the experimental design approach in a social context notwithstanding (for instance connected to issues of measurement validity), the possibilities enabled through experimental design and control groups are substantial if used correctly. Applying control groups can be an invaluable tool for assessing the effect of a marketing initiative, campaign or communication strategy, and gives a company the possibility to assess what works and what does not, and therefore to create feedback loops for the continued development of sales and marketing strategies (see the next step). At its core, the purpose of applying control groups in marketing is the same as in medicine or any other field, namely to assess whether a certain ‘treatment’ works — which in marketing could be a campaign, marketing strategy, communication channel, etc.
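
As a rough illustration of what this can look like in practice, here is a minimal sketch in Python of randomly holding out a control group and estimating the uplift of a campaign afterwards. The column name (‘converted’), the 10% holdout share and the helper names are assumptions made for the example, not a description of any particular production setup.

```python
# A minimal sketch (not a production setup): randomly hold out a control
# group before a campaign, then estimate the uplift afterwards.
# The 'converted' flag, the 10% holdout share and all names are assumptions.
import numpy as np
import pandas as pd

def assign_groups(customers: pd.DataFrame, control_share: float = 0.1,
                  seed: int = 42) -> pd.DataFrame:
    """Randomly assign each eligible customer to 'treatment' or 'control'."""
    rng = np.random.default_rng(seed)
    out = customers.copy()
    out["group"] = np.where(
        rng.random(len(out)) < control_share, "control", "treatment"
    )
    return out

def estimated_uplift(results: pd.DataFrame) -> float:
    """Difference in conversion rate between the treatment and control groups."""
    rates = results.groupby("group")["converted"].mean()
    return rates["treatment"] - rates["control"]
```

The random assignment is what does the heavy lifting here: if the control group is picked in any systematic way instead, the difference in conversion rates can no longer be read as the effect of the treatment.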

In a future chapter of this article series, I will explore this approach in even greater detail, including how to go about constructing an experiment in practice, but for now — trust me when I say (write): if you have not utilized this approach before, the amount of insight waiting to be discovered is vast indeed.

By continuously applying control groups and through robust design, we have been able to assess the effect not only of individual campaigns but have also been able to isolate the effects of, for instance, communication, tonality and applied models on various KPIs. Furthermore, this has in turn enabled us to quantify the effects in terms of each initiative’s contribution to ROMI (return on marketing investment). These are extremely valuable insights, not only because we then know what contributed to what in terms of profit, but also because they provide information on what works and what needs improvement. Which leads us to the next step…
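
To make the ROMI part concrete, here is a hedged sketch of how an incremental effect measured against a control group can be translated into a ROMI figure. The function and every number in the example (conversion rates, margin, cost) are invented purely for illustration; they are not Swedbank figures.

```python
# A sketch: ROMI based only on the incremental conversions that the control
# group comparison attributes to the campaign. All inputs are illustrative.
def romi(n_treated: int, cvr_treatment: float, cvr_control: float,
         margin_per_sale: float, campaign_cost: float) -> float:
    """Return on marketing investment, counting incremental conversions only."""
    incremental_sales = n_treated * (cvr_treatment - cvr_control)
    incremental_profit = incremental_sales * margin_per_sale
    return (incremental_profit - campaign_cost) / campaign_cost

# Example: 50,000 treated customers, 1.3% vs 0.9% conversion,
# 400 in margin per sale, 60,000 in campaign cost.
print(romi(50_000, 0.013, 0.009, 400, 60_000))  # ~0.33, i.e. roughly +33% ROMI
```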

4. Create feedback loops

What is a feedback loop, I hear you ask? It is basically a fancy way of saying that the result of something serves as an input to the next iteration of that same thing. This is the step in which the value of your prior efforts (the earlier steps) is realized, and the power of data should become evident to you.

Recall the example used above, in which several different effects were isolated by applying experimental design to a campaign, setting up various target and control groups. Let’s imagine that the result shows us a positive ROI. Many companies may interpret this as evidence of the campaign being a success and copy-paste the setup for its next iteration. This is without even considering the risk you run if you are not using control groups in the first place: attributing sales during a campaign to the campaign, when customers would have purchased the product anyway!

If you want to optimize the campaign though, copy-paste is rarely enough. Your data will give you clues for improvement, and your feedback loops are the means of acting on them. Continuing our example, perhaps our numbers show that the model applied for customer selection in the campaign contributed a very high share of the ROI, but the communication did not. Furthermore, the treatment groups for tonality are even showing negative figures for certain segments. All this information provides cues for what needs to be done in the next iteration of the campaign! In this case, putting more effort into retraining the model might be less necessary than breaking out the segments with negative ROI, creating new treatment groups for them and testing something new in terms of tonality, which might even call for an exploratory qualitative study to answer what tonality would appeal to these segments, and so on.
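
As a small, hypothetical illustration of this feedback-loop step, the snippet below breaks the results out per segment and flags the segments where the tonality treatment came out with negative ROI as candidates for new treatment groups in the next iteration. The segment names, column names and figures are invented for the example.

```python
# Hypothetical per-segment results for the tonality treatment.
import pandas as pd

tonality_results = pd.DataFrame({
    "segment": ["young_urban", "families", "seniors", "students"],
    "roi":     [0.42, 0.18, -0.07, -0.12],
})

# Segments with negative ROI are queued for a new treatment group / tonality test.
needs_new_treatment = tonality_results.loc[tonality_results["roi"] < 0, "segment"].tolist()
print(needs_new_treatment)  # ['seniors', 'students']
```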

Don’t forget that the KPIs you are tracking are also part of these feedback loops! Sales & Marketing departments conventionally monitor measures connected to reach, sales and profitability closely, such as CTR, CVR, CPA, PPC, etc. These are valuable measures but give you a rather one-sided picture. What about the emotional resonance that your campaign has created with your customers, for example? Consider including measures that will catch other aspects, like opt-out ratios, channel transformation, or customer satisfaction, to name a few. If, for instance, a specific treatment group can be considered a success story in terms of your conventional marketing metrics, but simultaneously shows significantly higher opt-out ratios, this is a valuable insight caught by your feedback loop.
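
A minimal sketch of what widening the feedback loop can look like in practice: compute both CVR and opt-out ratio per treatment group from the same response data. The columns and the tiny toy dataset are assumptions for the sake of the example.

```python
# Toy response data: one row per contacted customer.
import pandas as pd

responses = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "converted": [1, 0, 0, 1, 1, 0, 0, 0],
    "opted_out": [0, 0, 0, 0, 1, 1, 0, 0],
})

# CVR and opt-out ratio side by side, per treatment group.
kpis = responses.groupby("group").agg(
    cvr=("converted", "mean"),
    opt_out_ratio=("opted_out", "mean"),
)
print(kpis)  # a 'winning' CVR combined with a high opt-out ratio is a warning sign
```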

With a few feedback loops similar to the ones described above in place, you will also be able to assess the validity and reliability of your Sales & Marketing activities. Well, to some extent. It is also necessary to…

5. Keep on testing

Let’s face it: our social reality changes over time and space. It is therefore important to remember that the result of your experiment is applicable to a particular population, in a particular context, at a particular point in time and space. It does not mean that it is generalizable to the whole population (i.e. your total potential customer pool), nor that the effect you have observed is objectively true and constant for all time.

It is therefore important to challenge your previous findings from time to time. Just think about which customer segments are present in which digital channels, and how that looked five years ago. This is the thing about the social sciences: facts fluctuate on a scale rather than being objectively true or false and constant over time. Whereas gravity will be gravity (admittedly more or less, but you get my point), which customers will buy your product and why they do so will depend on context. Furthermore, there might be a lagged effect between your observed variables, and externalities might intervene in the relationship you are trying to assess.

A great deal of patience is therefore needed when assessing the effects of your experiments. A/B testing, a variant of the traditional experimental design, can for instance be an effective tool for assessing which treatment out of a number of treatments (conventionally two) works best in the context of your Sales & Marketing activity. In my experience, however, it is commonly utilized in a sub-optimal manner. This method also sometimes goes by the name ‘Champion/Challenger testing’, and that name itself gives us a clue to the interpretative danger inherent in this approach.

The first danger is to read the results of an A/B test in absolute terms. For instance, if treatment A resulted in a 1.3% CVR and treatment B in a 0.9% CVR, an instinctual conclusion might be to continue with treatment A and abandon treatment B. Sounds legit, right? Well, you must first examine whether the observed difference is statistically significant and not just a result of chance. There are several online tools that can help you with this, but it will also help to have someone with basic statistical knowledge on your team. Depending on the nature of the action you are measuring, you might also want to take incremental changes in your output metric(s) over time into account in your analysis.
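
For readers who prefer code to online calculators, here is a minimal sketch of a two-proportion z-test applied to the example above. The sample sizes (10,000 customers per treatment) are made up for illustration, and the function is a generic textbook implementation rather than any specific tool's API.

```python
# Two-proportion z-test: is the difference between two conversion rates
# larger than what chance alone would plausibly produce?
from math import sqrt
from scipy.stats import norm

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - norm.cdf(abs(z)))

# 130/10,000 (1.3% CVR) for treatment A vs 90/10,000 (0.9% CVR) for treatment B.
print(two_proportion_z_test(130, 10_000, 90, 10_000))  # ~0.007, below the usual 0.05 threshold
```

Note how much the conclusion depends on sample size: with the same rates but only 1,000 customers per group, the p-value would land around 0.39, and the difference could easily be chance.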

Even if the result is statistically significant, however, does that mean that we should abandon treatment B forever? Unless the effect is spectacular, not necessarily — since the performance of both A and B can be due to aspects you have not yet considered or controlled for in your design!

This is also why I am not a big fan of the name ‘Champion/Challenger test’ — it is a sports analogy that gives you the illusion that the test is the ONLY and FINAL game to be played, whereas you are better off analyzing it as if it were a series of games between the alternatives: similar to how a team’s performance fluctuates over the course of a season, the effect of your treatments might do too.

The point I am trying to make is that you should not let the results of your experiments make you complacent and over-confident that the demonstrated result is an absolute truth; rather, stay vigilant and interpret the results as indications. By doing so, you are also making your feedback loops more robust and ensuring that they provide valid and reliable value for your business!

These five steps will have you well on your way to assessing which of your Sales & Marketing activities are a resounding success and what factors therein to improve, and in the long run will hopefully make your organization’s culture more proficient with regard to data-driven decision making.

In the following articles, I will examine some aspects of these steps in further detail and share Analytics & AI’s journey in unlocking the value of our feedback loops and continuously improving them.

Join us, it will be an exciting ride!

Erik Prinz Grunde
Swedbank AI

Senior Business Analyst at Analytics & AI @ Swedbank