# Item Loyalty Measurement

**What is Item Loyalty:**

In the marketing literature there has been a long-running debate over the definition of loyalty. Item loyalty is broadly understood as a measure of how strongly a household or customer is biased towards a particular brand/item versus its alternatives, yet there is still no single conclusive working definition, and that is what makes this field so intriguing.

Overall, loyalty is broadly split (Mellens et al., 1996) into two structural components:

Attitudinal Loyalty: what a customer *feels*, i.e. the emotional attachment that consumers have for a brand/item (usually measured using survey data).

Behavioral Loyalty: what a customer *does*, i.e. the repeated purchasing behavior driven by that intrinsic feeling for the brand/item.

Conventionally, household-based loyalty has been viewed as a driver of certain business advantages, such as reduced promotional costs, acquisition of new customers, and identification of cross-/up-sell opportunities among steadfast customers.

Within Assortment Planning, item loyalty feeds into critical shelf-placement decisions, i.e. item delete, expand, replace, etc. For instance, when making space for a new product on the shelf, we would not want to drop a low-selling yet high-loyalty item.

As part of the Assortment Analytics team at Walmart Labs Bangalore, we have used transactional/scanner (POS) data to explore various techniques for measuring item loyalty. These loyalty scores can be further enhanced with online transaction data coupled with surveys conducted for the items in question.

For this purpose, we work with the definition proposed by Jacoby and Kyner (Brand Loyalty vs. Repeat Purchasing Behavior, 1973), which comprises a set of six necessary and collectively sufficient conditions. We adhere to these conditions when adopting any loyalty measurement approach.

**Item Loyalty Calculation methods:**

The analysis starts with gathering the POS data for the respective category, after which we apply a few data-cleansing filters such as a minimum number of daily transactions at the stores and a minimum purchase frequency for households. The loyalty algorithms are then run on this final data set, with the output being the aggregated item loyalty scores as showcased in *Figure 4*.

High-loyalty customers are always a dependable asset for any organization, hence we might choose to bin the loyalty scores according to the customer group counts in the respective loyalty bins. As seen in *Figure 5*, the *loyalty > 90%* group might be awarded the highest weightage when aggregating to a final item score, since that is the concentration of households Walmart would want to engage with further.
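As a toy illustration of this binning-and-weighting step, one could aggregate household scores per loyalty bin as follows. The bin edges and weights here are purely hypothetical; the post does not specify the actual scheme used:

```python
# Hypothetical bin edges and weights; the actual weighting scheme
# used in production is not specified in this post.
BINS = [(0.9, 1.0), (0.5, 0.5), (0.0, 0.1)]  # (lower edge, bin weight)

def item_loyalty_score(household_scores):
    """Aggregate per-household loyalty scores (0-1) into one item
    score by weighting the household count in each loyalty bin."""
    weighted = 0.0
    for score in household_scores:
        for lower, weight in BINS:
            if score > lower or lower == 0.0:
                weighted += weight
                break
    return weighted / len(household_scores)
```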

This post describes different methods of loyalty computation with their pros and cons.

**Approach I: Sequential Loyalty**

This technique monitors sequential purchases of a household to estimate the re-purchase probability of an item thus reflecting the loyalty of the household towards that item. These probabilities are then aggregated over all households to arrive at a consolidated item loyalty score.

Consider the purchase pattern above, where a household purchases the product Tide on the 1st, 3rd and 4th trips, implying a 50% re-purchase probability for Tide.

Although very convenient, this method often does not give much insight into the data. E.g. for the household purchase pattern {A — B — A — B — A — B}, both A and B get a loyalty score of 0 as per the above method, which is clearly misleading. Ultimately, the accuracy of the loyalty score depends greatly on how well the POS data captures the full purchase paths of the respective households.
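A minimal sketch of this sequential measure, assuming a household's trips arrive as an ordered list of purchased items (the function and variable names are ours, not from the post):

```python
def sequential_loyalty(purchases, item):
    """Re-purchase probability of `item`: among purchases of `item`
    that are followed by another trip, the fraction where the very
    next purchase is `item` again."""
    repeats, chances = 0, 0
    for current, nxt in zip(purchases, purchases[1:]):
        if current == item:
            chances += 1
            repeats += (nxt == item)
    return repeats / chances if chances else 0.0
```

On the Tide pattern above (purchases on the 1st, 3rd and 4th of four trips) this returns 0.5, while on the alternating {A — B — A — B — A — B} pattern it returns 0.0 for both items, illustrating the caveat.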

**Approach II: Loyal Household Probability (L — probability)**

Given that a household has bought a certain item, what is the probability that they never buy anything other than that one favored item? In other words, this measures 100% commitment towards an item.

The measure is simple and easy to compute, though it lacks the rigor to capture hidden patterns in the transactional data. However, it can be used as a quick trial measure to get a directional sense of the relative loyalty values of the items in question. It is useful in categories where the repurchase rate is low (such as automobiles or electronics) but might be misleading in cases such as grocery.
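A sketch of the L-probability, under the assumption that each household's history is simply a list of the items it bought over the period (names are illustrative):

```python
def l_probability(household_histories, item):
    """Among households that bought `item` at least once, the share
    that bought nothing else over the whole period."""
    buyers = [set(history) for history in household_histories
              if item in history]
    if not buyers:
        return 0.0
    exclusive = sum(1 for items in buyers if items == {item})
    return exclusive / len(buyers)
```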

**Approach III: Share of Requirements**

This method looks at the proportion of choices a household allocates to an item, typically called the household's share of requirements for that item.

E.g.: *A household that makes 10 trips into the Yogurt category over the time period of the analysis and buys ‘XX Greek Yogurt’ in 8 of those trips would have 80% household loyalty to that ‘XX Greek Yogurt’ item*

This method is conceivably the simplest measure of loyalty for an item: swift to execute and scale, and understandable to decision makers. It captures much of the previous work on loyalty variables, being a special case of the *Guadagni and Little* loyalty variable. Here loyalty is split among products, indicating a share of loyalty amongst the products purchased by a household (share of requirements) over time.

Loyalty in this method is always calculated at a substitutable item-group level, since this most strongly reflects the loyalty of a household amongst similarly behaving items. Essentially, we would not want to compare loyalty for, say, Plain Yogurt and Flavored Yogurt within the same substitutable group, since the purchasing intent might be different.

Another variant takes a mildly altered form that accounts for the number of different item choices available in a substitutable group when calculating this proportion. The method can also be extended to consider all available POS data, including data without household information.
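The basic share-of-requirements calculation can be sketched as a simple per-household ratio (the variant that adjusts for the number of available item choices is not shown here, as its exact form is not specified in the post):

```python
def share_of_requirements(trips, item):
    """Proportion of a household's category trips that included `item`."""
    return trips.count(item) / len(trips) if trips else 0.0
```

For the yogurt example above, buying ‘XX Greek Yogurt’ on 8 of 10 trips gives 0.8.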

**Approach IV: GL Probability (GL — probability)**

We use the method introduced by *Guadagni and Little (1983)* and modify it to cater to our needs. It is more rigorous and computationally intensive than the previous approaches, and attempts to read between the lines, so to speak. Here we define a loyalty variable as an exponentially weighted sum of a household's past purchases, which captures the bias of the household towards a particular product over others.

We summarize these loyalty values at the item level by aggregating the **mean (GL1)** or **median (GL2)** across households to get the final GL-based loyalty scores.
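A sketch of the loyalty variable as an exponential smooth of purchase indicators, in the spirit of Guadagni and Little. The smoothing constant and the flat initial value below are illustrative defaults, not the calibrated values from the paper or our modification of it:

```python
from statistics import mean, median

def gl_loyalty(purchases, item, alpha=0.875, init=0.5):
    """Exponentially weighted loyalty of one household towards `item`:
    each trip decays the old value by alpha and adds (1 - alpha) when
    `item` was the product purchased on that trip."""
    loyalty = init
    for bought in purchases:
        loyalty = alpha * loyalty + (1 - alpha) * (bought == item)
    return loyalty

def item_gl_scores(households, item):
    """Aggregate household-level values into GL1 (mean) and GL2 (median)."""
    values = [gl_loyalty(trips, item) for trips in households]
    return mean(values), median(values)
```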

**Approach V: Information Entropy of purchase behavior**

Entropy is defined as the average amount of information produced by a stochastic source of data. We use this concept to quantify the switching tendency of a household between items.

Ideally, if a household is more loyal towards a particular item, its item-switching behavior will be low, resulting in lower entropy and higher item loyalty. Here we compute entropy over the set of households who have bought, say, item A at least once, treating them as the household base favoring item A.
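A minimal sketch of the entropy of one household's item-choice distribution (stdlib only; the function name is ours):

```python
from collections import Counter
from math import log2

def purchase_entropy(trips):
    """Shannon entropy (bits) of a household's item-choice distribution.
    0 means the household never switches; higher means more switching."""
    counts = Counter(trips)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())
```

A household buying only one item scores 0 bits, while one splitting its trips evenly between two items scores 1 bit, consistent with lower entropy signalling higher loyalty.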

**Data and Results:**

We applied the above methods to the scanner (POS) data of one region for a food category over a period of one year. For the GL-based method, a random sample of stores was selected to reduce processing time while still remaining representative of the purchasing behavior of households nationwide.

Shown below is a visual representation of the distribution of market shares for the various items in the category.

As we can see, this is a standard positively skewed distribution, which is expected, as most brands have low market share and very few items have a relatively larger share.

However, no single item really dominates the market, as the highest market shares are only around 6%.

The L-probability method was applied to households that made more than one purchase over the period, while the GL-probability method used households with more than 10 purchases (due to time-complexity constraints).

Both measures in the GL method, mean (GL1) and median (GL2), are viable in different contexts; however, we prefer the median measure because the original loyalty variable is positively skewed, with an outlier or two. Consider also the distributions of the two GL-probabilities:

We see the expected positively skewed distribution in GL2, whereas GL1 has a more symmetric curve governed by the central limit theorem. This may help us in any statistical analysis we might want to do in the future, such as building confidence intervals or testing hypotheses.

While the different approaches yield different results, validation is a practical challenge since this is an unsupervised problem.

One way to assess validity is business feedback. Since an analytical calculation has to work hand in hand with decision makers such as a Category Merchant or Buyer, and since their confidence in and interpretation of the scores matter for material impact, their feedback can guide the right approach for a specific market or category. Alternatively, an ensemble approach could aggregate the ranks produced by the individual loyalty methods to mitigate the biases arising from any single model.
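One simple ensemble of this kind averages each item's rank across methods (a sketch only; ties and differing score scales are ignored here):

```python
def ensemble_rank(scores_by_method):
    """Average each item's rank (1 = most loyal) across several
    loyalty-scoring methods; a lower average rank means the item is
    consistently rated as high-loyalty."""
    method_count = len(scores_by_method)
    rank_sums = {}
    for scores in scores_by_method:
        ordered = sorted(scores, key=scores.get, reverse=True)
        for rank, item in enumerate(ordered, start=1):
            rank_sums[item] = rank_sums.get(item, 0) + rank
    return {item: total / method_count for item, total in rank_sums.items()}
```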

We observed from the results that all these loyalty measures correlate significantly with market share, which is expected, as they are based purely on purchase patterns; the overall market behavior is thus essentially embedded in the values.

Amongst them, sequential loyalty associates most strongly with market share, and L-probability is fairly in agreement with it, probably because both are combinations of indicator variables based on purchase frequencies. However, L-probability is likely an improvement over sequential loyalty, as it is more general than just sequential purchases.

For GL1 and GL2, the association with both sequential loyalty and market share is moderate; in a way, they generalize sequential loyalty. A household that makes repeat purchases very often will have high GL1 and GL2 scores, but even when repeat purchases are scarce, a household that generally sticks to a brand or two will be reflected in these values and not in the sequential probability.

Furthermore, let us look at the scatter plots of these measures against market share to gain some new insights. These plots highlight items that are low in sales but may carry higher loyalty values. Red points are items with less than 2% market share and more than 80% loyalty. A few examples of such items are Cranberry Juice, Pine Juice and Orange Smoothie.

The GL2 measure seems better able to surface this information, which again backs our preference for the median-based measure.

**Conclusion:**

We summarize the outcomes of the various loyalty measurement methods in the following table:

The study of the various loyalty methods has helped gather evidence that there is more information linked to the loyalty score, which can be ascertained through multiple approaches. Nonetheless, sequential loyalty and L-probability are easy to calculate and interpret, and hence can be used for an experimental summary or a quick check.

For more details about loyalty measurement, refer to the published papers below:

· *A Consistent Loyalty Measure for Generalized Logit Models* by Fred M. Feinberg and Gary J. Russell

· *A Logit Model of Brand Choice Calibrated on Scanner Data* by Peter M. Guadagni and John D.C. Little

· *Evaluation of Price Elasticity and Brand Loyalty in Milk Products* by Natsuki Sano, Syusuke Tamura, Katsutoshi Yada and Tomomichi Suzuki

· *Semiparametric Multinomial Logit Models for Analyzing Consumer Choice Behavior* by Kneib, Baumgartner and Steiner

Authors:

Savio Fernandes, Senior Statistical Analyst — Walmart Labs Bangalore

Diptarka Saha, Statistical Analyst — Walmart Labs Bangalore

Ashish Gupta, Senior Manager — Walmart Labs Bangalore