Original Research: The Anchoring Effect in a Range of Plausible Anchors

The Curious Learner
Apr 7 · 10 min read

In this series on Original Research, I share findings from some of the mini-projects that I have carried out on my own.

The anchoring effect is a systematic cognitive bias in which individuals rely too heavily on an initial piece of information when making a subsequent judgment. It is especially pronounced when the individual does not know much about the subject matter being assessed, and so ends up being influenced by whatever information comes before the judgment.

One of the most popular examples comes from Strack & Mussweiler (1997), who conducted a study asking participants to estimate the age at which Mahatma Gandhi died. Before asking for their estimates, the researchers exposed one group to a low anchor (“Did Mahatma Gandhi die before or after the age of 9?”) and another group to a high anchor (“Did Mahatma Gandhi die before or after the age of 140?”). Although neither anchor could possibly be the correct answer, they nonetheless affected the participants: the mean estimate from the low-anchor group was 50, while the mean estimate from the high-anchor group was 67.

The Mahatma Gandhi question used by Strack & Mussweiler (1997).

In trying to understand the effects of implausible anchors, Mussweiler & Strack (2001) repeated the study with plausible anchors as well, using 61 years old as the low anchor and 86 years old as the high anchor. As it turned out, the plausible anchors still had an effect on the participants: the low-anchor group had a mean estimate of 63 and the high-anchor group a mean estimate of 70. However, the deviations from the plausible anchors were much smaller.

The research by Mussweiler & Strack got me wondering: is it possible to find an anchor that results in a mean estimate with minimal deviation? What exactly happens within the range of plausible anchors? This was something nobody had explored, and I was curious to find out.

How does anchoring vary in the range of plausible anchors?

Hypotheses

Extrapolating from how the deviations shrank between the implausible and plausible anchors, it would seem natural to assume that the deviation of mean estimates from their respective anchors should continue to decrease until the high and low anchors converge at a point in the plausible range where there is little or no deviation.

Hypothesis 1: Mean estimate does not deviate at the mid-point of the plausible range.

However, since all the anchors in this range are considered plausible, it is also conceivable that the amount of deviation from each anchor would not differ much.

Hypothesis 2: Deviations of mean estimates do not differ in the plausible range.

To find out which scenario occurs in practice, I designed a series of questions and administered them to participants through an online questionnaire. Participants received 5 Singapore dollars for completing the questionnaire.

Questionnaire

To replicate questions of the Mahatma Gandhi type, participants needed to have an idea of a question’s context without knowing the correct answer. I conducted a pretest with 20 different questions, allowing participants to make estimates freely without any anchors. The 10 questions with the smallest variance in estimates were chosen for the main study.

The 10 questions used for the main study.
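As a rough sketch of this selection step (the data and column names here are hypothetical, not the actual pretest responses):

```python
import pandas as pd

# Hypothetical pretest records: one estimate per participant per question.
pretest = pd.DataFrame({
    "question": ["Q1", "Q1", "Q2", "Q2", "Q3", "Q3"],
    "estimate": [80, 84, 10, 95, 30, 31],
})

# Rank questions by the variance of their anchor-free estimates and keep
# those where participants agreed most closely (nsmallest(10) in the study).
variances = pretest.groupby("question")["estimate"].var()
print(variances.nsmallest(2).index.tolist())  # ['Q3', 'Q1']
```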

The anchors used for these 10 questions were determined by the means and standard deviations of the participants’ estimates in the pretest, following what Mussweiler & Strack had done in their studies. A total of 5 anchors were used for each question: the pretest mean, +1.0SD, -1.0SD, +0.5SD and -0.5SD.
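A minimal sketch of how the five anchors can be derived from a question’s pretest estimates (the function name and the sample numbers are mine, for illustration only):

```python
import numpy as np

def make_anchors(pretest_estimates):
    """Derive the five anchors for one question from its pretest estimates."""
    mean = np.mean(pretest_estimates)
    sd = np.std(pretest_estimates, ddof=1)  # sample standard deviation
    # Anchors at -1.0SD, -0.5SD, the pretest mean, +0.5SD and +1.0SD.
    return [round(mean + k * sd, 1) for k in (-1.0, -0.5, 0.0, 0.5, 1.0)]

print(make_anchors([72, 75, 80, 81, 86]))  # [73.4, 76.1, 78.8, 81.5, 84.2]
```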

In the Mussweiler & Strack studies, a question of whether Mahatma Gandhi died before or after the anchor preceded the estimate. As I was experimenting with anchors in the plausible range, I had to consider the anchor itself as a possible estimate. Hence, besides the usual “more than” and “less than” options, I allowed participants to choose a third option, “roughly equal to” the anchor they were shown. Participants of course did not know that the numbers they saw were anchors, nor did they know that other participants were shown different numbers.

After they made their estimates for each question, I also asked the participants whether they knew the right answer and how likely they believed the anchor they were shown was the right answer. The responses to these sub-questions were used to draw further insights later on.

Example of an entire question.

Results

A total of 541 participants were recruited through mass sharing of the questionnaire link as well as snowball sampling. The gender ratio was roughly equal, but the age distribution was skewed towards 21 to 30 years old. Most of the participants were Singaporeans who held a degree and were working adults.

Finding 1: Mean estimates from all anchor groups tended towards the pretest mean.

To compare all 10 questions together, the units had to be standardised; to place the group means on the same scale as their respective anchors, the following formula was used:

(Anchor Group Mean − Pretest Mean) ÷ Pretest SD
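A minimal sketch of this formula in code (the numbers are illustrative, not the study’s data):

```python
import numpy as np

def standardised_group_mean(estimates, pretest_mean, pretest_sd):
    """Express an anchor group's mean estimate in pretest-SD units."""
    return (np.mean(estimates) - pretest_mean) / pretest_sd

# A group anchored at +1.0SD (i.e. 88 here) whose estimates drifted
# back towards the pretest mean of 80:
print(standardised_group_mean([82, 85, 88, 90], pretest_mean=80, pretest_sd=8))
# 0.78125, i.e. below the +1.0SD anchor position of 1.0
```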

This resulted in the following chart, where ‘0’ on the y-axis represents the pretest mean, while ‘1’ and ‘-1’ represent one pretest SD above and below it. Groups 1 to 5 on the x-axis represent the anchors from -1.0SD to +1.0SD, with the pretest mean in the middle at Group 3.

Mean estimates of each anchor group standardised for all 10 questions.

It is immediately noticeable from the chart that the mean estimates for all 10 questions followed a similar pattern, with the exception of Question 1 (Dinner) and Question 9 (Banana Tree). Mean estimates from the groups with +1.0SD and -1.0SD as their anchors deviated the most, while the group with the pretest mean as its anchor seemed to deviate the least. To make the pattern easier to see, I aggregated the means of the 10 questions into a single data point for each anchor group.

Aggregate of mean estimates for each anchor group. Red dots indicate points where the mean estimates should appear if there were no deviations.

From the chart of the aggregated means, it becomes apparent that the results reflect the scenario of Hypothesis 1, with almost no deviation of the mean estimate at the mid-point of the plausible range. Mean estimates from the other anchor groups, on the other hand, tended towards the pretest mean.

This finding is rather interesting, as the pretest mean is by no means the correct answer, nor would the participants of the main study have known what the mean of the pretest was. It seems to suggest that the mean of anchor-free estimates makes a neutral anchor, where overestimates and underestimates resulting from that anchor cancel each other out and allow the mean estimate to converge back on the original anchor.

However, looking at mean estimates does not give the full picture of how estimates deviate from anchors. Hence, I also looked at how individual estimates deviated.

Finding 2: The larger the anchor value, the greater the anchor deviations.

To find out how much individuals deviate from their given anchors, I calculated the absolute difference between an individual’s estimate and its respective anchor value, using the following formula:

| Individual Estimate − Anchor Value |

For example, if the estimates of 5 participants exposed to the anchor 80 were 78, 79, 81, 81 and 83, the mean anchor deviation would be (|78−80| + |79−80| + |81−80| + |81−80| + |83−80|) ÷ 5, which gives 1.6. This differs from the deviation of the mean estimate in that the deviation is calculated for each individual before computing the mean.

Once again, to compare all 10 questions together, the units had to be standardised, using the following formula:

Anchor Group Mean Deviation ÷ Sample Mean Deviation
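A small sketch tying the two formulas together, reusing the worked example above (the sample-wide figure is a placeholder I made up):

```python
import numpy as np

def mean_anchor_deviation(estimates, anchor):
    """Average absolute distance between individual estimates and their anchor."""
    return np.mean(np.abs(np.asarray(estimates, dtype=float) - anchor))

# The worked example above: five estimates around an anchor of 80.
group_dev = mean_anchor_deviation([78, 79, 81, 81, 83], anchor=80)
print(group_dev)  # 1.6

# Standardise against the mean anchor deviation of the whole sample, so
# that '1' marks the sample-wide average deviation.
sample_dev = 2.0  # placeholder value for the pooled sample
print(group_dev / sample_dev)  # 0.8
```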

This resulted in the following chart, where ‘1’ on the y-axis represents the mean anchor deviation of the entire sample.

Mean anchor deviations of each anchor group standardised for all 10 questions.

Amazingly, the mean anchor deviations for all 10 questions followed almost the same pattern: the anchors with smaller values (Groups 1 and 2) had smaller deviations, while the anchors with larger values (Groups 4 and 5) had greater deviations. We can also see that, in comparison to the sample mean (‘1’ on the y-axis), Groups 1 and 2 were generally below the mean while Groups 4 and 5 were mostly above it. To make the pattern easier to see, I aggregated the means of the 10 questions into a single data point for each anchor group.

Aggregate of mean anchor deviations for each anchor group.

From the chart of the aggregated means, the deviations appear to increase at an increasing rate, with larger anchors disproportionately producing greater anchor deviations. This seems to resonate with the findings of Wong & Kwong (2000), who found that anchors with a larger absolute value (e.g. 7300 m) induced greater numerical estimates than anchors with a smaller absolute value (e.g. 7.3 km), despite being semantically equivalent. It is possible that participants were primed to adjust more widely when exposed to larger anchors, resulting in the greater deviations.

Besides comparing deviations to the anchors that participants were exposed to, I also examined how trust in the anchor affected the deviations.

Finding 3: The more likely participants think an anchor is the answer, the smaller the anchor deviations.

Recall that I asked the participants how likely they believed the anchor they were shown was the right answer; this is where the responses to that sub-question come in. While it seems commonsensical that participants who believed their anchor was the right answer would deviate less from it, no one had explicitly shown this before. Hence, I compared the anchor deviations based on how likely participants believed their anchor was the right answer.

As in Finding 2, to compare all 10 questions together, the units were standardised using the following formula:

Likelihood Group Mean Deviation ÷ Sample Mean Deviation
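In code, the only change from Finding 2 is the grouping variable: deviations are grouped by the likelihood rating rather than by the anchor shown. A sketch with hypothetical records:

```python
import pandas as pd

# Hypothetical per-participant records: a 7-point likelihood rating
# ('Very Unlikely' = 1 ... 'Very Likely' = 7) and the absolute deviation
# of the participant's estimate from the anchor shown.
df = pd.DataFrame({
    "likelihood": [1, 1, 4, 4, 7, 7],
    "anchor_dev": [9.0, 8.0, 4.0, 5.0, 1.0, 0.5],
})

# Mean anchor deviation per likelihood group, divided by the sample mean,
# so that a value of 1 marks the sample-wide average deviation.
print(df.groupby("likelihood")["anchor_dev"].mean() / df["anchor_dev"].mean())
```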

This resulted in the following chart, where ‘1’ on the y-axis represents the mean anchor deviation of the entire sample. Groups 1 to 7 on the x-axis represent the likelihood groups from ‘Very Unlikely’ to ‘Very Likely’.

Mean anchor deviations of each likelihood group standardised for all 10 questions.

As in Findings 1 and 2, the same pattern occurred for all 10 questions, regardless of what the question was. Participants who found their anchors very unlikely to be the right answer deviated the most, while those who thought their anchors were very likely to be the right answer deviated the least. To make the pattern easier to see, I aggregated the means of the 10 questions into a single data point for each likelihood group.

Aggregate of mean anchor deviations for each likelihood group.

From the chart of the aggregated means, an S-curve is observed, with the differences diminishing towards the extremes. This suggests a fairly distinct polarisation between participants who think their anchor is unlikely and those who think it is likely. A Pearson’s correlation test between anchor deviation and likelihood rating was conducted, and the negative correlations were significant for all 10 questions at p < .001.
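The correlation test itself is a standard one; a sketch with made-up numbers, using scipy:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-participant data for one question: likelihood rating
# (1 = 'Very Unlikely' ... 7 = 'Very Likely') and absolute anchor deviation.
likelihood = np.array([1, 2, 3, 4, 5, 6, 7, 2, 5, 6])
deviation = np.array([9.0, 7.5, 6.0, 4.0, 2.5, 1.0, 0.5, 8.0, 2.0, 1.2])

r, p = pearsonr(likelihood, deviation)
print(f"r = {r:.2f}, p = {p:.4f}")  # a negative r, mirroring Finding 3
```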

Conclusion

The findings from this mini-project provide some new insights that have not been discussed in past anchoring research. In summary, anchoring within the plausible range is not homogeneous. There is still a tendency for the mean estimates to move towards a central point, which in this case seems to be the pretest mean. This suggests that despite not knowing the correct answer, there is a point where overestimations and underestimations are roughly equal, causing the mean estimate to converge with the anchor. As for how individuals deviate from anchors, the value of the anchor itself seems to play a part through a priming effect, resonating with the findings of Wong & Kwong (2000). Finally, while it is no huge surprise, believing that an anchor is the right answer does result in smaller deviations.

This research was presented at the 35th Annual Conference of the Society for Judgment and Decision Making in 2014, in Long Beach, California.

References

  • Mussweiler, T., & Strack, F. (2001). Considering the impossible: Explaining the effects of implausible anchors. Social Cognition, 19(2), 145–160.
  • Strack, F., & Mussweiler, T. (1997). Explaining the enigmatic anchoring effect: Mechanisms of selective accessibility. Journal of Personality and Social Psychology, 73(3), 437–446.
  • Wong, K. F. E., & Kwong, J. Y. Y. (2000). Is 7300 m equal to 7.3 km? Same semantics but different anchoring effects. Organizational Behavior and Human Decision Processes, 82(2), 314–333.
