The Fallacy of Anchoring Bias

Rachel Butt
Published in The Refresh
4 min read · Nov 20, 2015

In Predictably Irrational, Dan Ariely found that students overwhelmingly relied on shortcuts to predict values in an auction. This effect, known as anchoring bias, is present in our everyday lives. Marketers are strategic when they position their products, as is evident in The Economist’s subscription options (e.g. digital-only vs. digital plus print vs. print-only). In my experiment, I decided to further test anchoring bias in the context of online wine purchasing. And to my surprise, the results seemed to cast doubt on Mr. Ariely’s claim.

How it works:

I decided to focus on wine as it is a fairly gender-neutral product, and I deliberately created an imaginary product without reference to any brand names to ensure it was free of existing biases. The wine’s description is as follows: “Imagine a restaurant located in Soho, New York City is considering offering the wine in the photo above as its house wine. The wine is a 2012 Cabernet Sauvignon from a winery in Napa Valley, California. The wine is a medium-bodied red, punctuated with notes of blueberry and blackberry, with aromas of ripe plum and a touch of vanilla.”

For the first survey, I asked participants to write down the last two digits of their phone numbers (a random anchor). They had to enter those digits a second time to reinforce the anchoring effect (I told them it was to ensure the accuracy of their entry). Based on the scenario above, I then asked them to select a price range indicating their willingness to pay for a bottle of the wine, and then to enter the maximum price they were willing to pay. I predicted that the random anchor would influence participants’ buying decisions, with higher digits leading to higher prices.

On the second page, I provided a recent story on price hikes for local wines as a result of the ongoing drought in California (a related anchor). I asked participants to select a price range again, in the hope of seeing a higher willingness to pay. I also asked them to enter the maximum price they would be willing to pay three years from now, to test whether the anchor influenced their expectations of future wine prices. I hypothesized that the related anchor would have a stronger effect than the random one, since I had purposely provided biased information that was likely to “nudge” the participants toward valuing the wine at a higher price range.

The second survey went to the control group, whose participants made their value judgments purely on the wine description provided. These participants were not exposed to any anchors, which provided a baseline for statistical comparison with those subject to potential anchoring effects.

What happened:

I received 69 responses for my anchor test group and 77 responses from my control group.

My analysis indicated that the random anchor did not play a role in the participants’ value judgments.

Only 34% of the anchor test group chose a wine price range consistent with the last two digits of their phone number. This might have occurred either because the price ranges I offered were too wide, or because participants became skeptical when asked to enter their phone number digits twice.
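As an illustration, the consistency check behind a figure like that 34% could be computed as follows. The sample responses, field names, and price brackets here are all made up for the sketch; they are not the actual survey data.

```python
# Hypothetical anchor-consistency check: how many participants chose a wine
# price range that contains the last two digits of their phone number?
# The three responses below are illustrative stand-ins, not the real data.

responses = [
    {"anchor_digits": 12, "price_range": (0, 20)},   # consistent
    {"anchor_digits": 87, "price_range": (40, 60)},  # not consistent
    {"anchor_digits": 55, "price_range": (0, 20)},   # not consistent
]

def anchor_consistent(resp):
    """True if the anchor digits fall inside the chosen price range."""
    low, high = resp["price_range"]
    return low <= resp["anchor_digits"] <= high

share = sum(anchor_consistent(r) for r in responses) / len(responses)
print(f"{share:.0%} of responses were consistent with the anchor")
```

With real survey exports, the same check could run over every row to produce the headline percentage.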

The related anchor, on the other hand, showed semi-strong evidence of raising participants’ price estimates for the wine: 44% of respondents selected a higher price range after reading the story, while 51% said they were willing to pay within the same range.

The effect was even stronger when I asked participants to estimate over a longer horizon (three years from now), which is in line with my hypothesis.

Nearly 3 out of 5 respondents (approx. 60%) were willing to pay a higher maximum price for the wine after reading the article.

Unfortunately, my results on the effectiveness of the anchors are inconclusive. The control group chose similar answers for both their price range and the maximum price they were willing to pay for the wine. And like the anchor test group, the control group also chose a higher value for their future willingness to pay.
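One way to make that anchor-versus-control comparison concrete would be a two-proportion z-test on the share of each group willing to pay more. The counts below are illustrative: 41 of 69 roughly matches the ~60% figure for the anchor group from the post, but the control count of 40 of 77 is an assumption for the sketch, not a reported result.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z statistic for H0: the two group proportions are equal."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)  # pooled proportion under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Anchor group: ~60% of 69 respondents raised their maximum price (from the post).
# Control group: 40 of 77 is an assumed figure for illustration only.
z = two_proportion_z(41, 69, 40, 77)
print(f"z = {z:.2f}")  # |z| > 1.96 would indicate significance at the 5% level
```

With stand-in numbers this close together, the statistic falls well short of significance, which mirrors the inconclusive result described above.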

Taking all of this into account, only the related anchor played an important role in shaping participants’ value judgments. I would need to conduct additional testing with a much larger and more diverse sample to confirm my theory, preferably in a closed environment (so that participants aren’t distracted by online entertainment).

To make it better…

I could have asked for participants’ income levels to help determine the impact income has on willingness to pay for a bottle of wine, but I chose not to include such a question due to privacy concerns. The survey population in the control group was also more evenly distributed than that in the anchor group: the control group comprised a diverse mix of professionals, students, and fresh graduates. I also do not know whether participants in either group were aware of anchoring bias, which might have influenced their decision-making. The repeated phone-number prompt might have seemed odd to suspicious participants, and my detailed description of the red wine might have “diluted” the effect of the random anchor by bringing participants back to a rational state of mind.


New York-based business journalist who’s previously written for Bloomberg News, The News & Observer, and SCMP. Big fan of boxing, cats and crime novels.