Desert Survival Challenge — The Power of Teams

Aditya Agrawal
7 min read · Oct 30, 2021


The well-known Desert Survival Challenge has been used as a team-building activity many times. We put it to the test as we analyze how effective teams are at improving decision-making.

What is the Desert Survival Challenge?

It is a simulation that places a team in a desolate stretch of desert following a plane crash, leaving the team with access to only 15 items. The team is tasked with ranking these items in order of importance.

The 15 items were:

1. Flashlight

2. Jackknife

3. Air map of the area

4. Plastic raincoat

5. Magnetic compass ​

6. Compress kit w/ gauze

7. .45-caliber pistol

8. Parachute (red, white)

9. Bottle of salt tablets

10. 1 qt. of water/person ​

11. Animals book ​

12. Sunglasses per person ​

13. 2 quarts of vodka

14. 1 topcoat per person

15. Cosmetic mirror

What do experts believe is the correct ranking?

This is the correct ranking in decreasing order of importance (Rank 1 = most important).

1. Cosmetic mirror

2. 1 topcoat per person

3. 1 qt. of water/person

4. Flashlight

5. Parachute (red, white)

6. Jackknife

7. Plastic raincoat

8. .45-caliber pistol

9. Sunglasses per person

10. Compress kit w/ gauze

11. Magnetic compass

12. Air map of the area

13. ​Animals book

14. 2 quarts of vodka

15. Bottle of salt tablets

If you are interested in learning more about the logic behind this ranking, drop a comment.

What are we adding to this challenge?

In the Critical Thinking and Collaboration course, the MQM cohort had a class on the Power of Teams that used the Desert Survival Challenge. The aim of the class was to analyze team performance and compare it to individual performance. Every individual was first asked to complete the challenge alone and submit how confident they were in their ranking. Later, individuals were grouped and asked to complete the same challenge as a team. Since this was an in-lecture activity, teams were allocated randomly and each team was free to take its own approach to reaching a conclusion. The class was conducted by Professor Jack Soll.

Let us first look at the individual responses to get a sense of baseline performance. On average, an individual was 3.9 ranks off for each item, where the error is the absolute difference between the individual's ranking and the experts' ranking.
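To make the error metric concrete, here is a minimal sketch of how the per-item deviation could be computed. The item list and expert ranks come from this article; the two individual rankings are made-up placeholders, not actual class data.

```python
# Sketch: mean absolute rank deviation per person, against the expert ranking.
import pandas as pd

items = ["Cosmetic mirror", "1 topcoat per person", "1 qt. of water/person",
         "Flashlight", "Parachute (red, white)", "Jackknife",
         "Plastic raincoat", ".45-caliber pistol", "Sunglasses per person",
         "Compress kit w/ gauze", "Magnetic compass", "Air map of the area",
         "Animals book", "2 quarts of vodka", "Bottle of salt tablets"]
expert_rank = pd.Series(range(1, 16), index=items, name="expert")

# Hypothetical individual submissions (item -> rank assigned).
individuals = pd.DataFrame({
    "person_1": [12, 5, 1, 4, 9, 3, 8, 10, 7, 2, 6, 11, 14, 15, 13],
    "person_2": [15, 7, 2, 5, 10, 4, 6, 11, 8, 1, 3, 9, 13, 14, 12],
}, index=items)

# Error = absolute difference between a person's rank and the expert rank.
errors = individuals.sub(expert_rank, axis=0).abs()

# Average error per person across all 15 items (the class average was ~3.9).
print(errors.mean())
```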

For example, the biggest deviation is observed for the cosmetic mirror. Experts ranked it as the most essential item, yet individuals failed to recognize its importance; in some cases the mirror was ranked as low as 15th. The experts' reasoning for placing the cosmetic mirror first is that in bright sunlight (the prevailing condition in a desert), the mirror produces reflections visible for several miles and can therefore be used for signaling. This is a case of anchoring: people hold on to the first piece of information, the word 'cosmetic', and treat the item as non-essential instead of thinking through its potential value.

People were also misled about the importance of salt. Salt seems like an essential electrolyte and was generally ranked highly; however, the experts ranked it least important because salt tablets would rob the body of moisture. Dehydration is the primary cause of death in such scenarios, and one must do everything possible to preserve water.

Does confidence level impact accuracy?

There does not appear to be any correlation between how confident individuals were in their responses and how accurate those responses turned out to be.
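For readers who want to reproduce this kind of check, here is a short sketch of the confidence-versus-accuracy comparison. The confidence scores and error values below are illustrative stand-ins, not the cohort's data.

```python
# Sketch: correlation between self-reported confidence and average rank error.
import numpy as np
from scipy.stats import pearsonr

confidence = np.array([90, 60, 75, 40, 85, 55, 70, 65])          # self-reported, 0-100
avg_error  = np.array([4.1, 3.2, 4.5, 3.8, 3.0, 4.4, 3.9, 4.2])  # ranks off per item

r, p_value = pearsonr(confidence, avg_error)
print(f"Pearson r = {r:.2f}, p = {p_value:.2f}")
# An r near zero (with a large p-value) would match the observation that
# confidence did not track accuracy.
```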

Do teams perform better than individuals?

Here we can observe that the deviation in ranks assigned to individual items was reduced in the team submissions. Barring a few items such as water and the book on desert animals, the majority of items saw a large reduction in deviation. The case of water is an interesting one. Water is the first thing that comes to mind when thinking about surviving in a desert, and teams often fall prey to the common knowledge effect: placing excessive weight on information that is already known to every member. As a result, the deviation in ranks for water rose from about 2 to about 4. Overall, though, team performance was far superior to individual performance.

Additional Insight: When answering alone, individuals tend to discount the fact that a whole group is lost in the desert; working in a team reduces this personal anchoring. The raincoat's average rank worsens when working in a team, whereas the rank for 1 topcoat per person improves. The emphasis here is on quantity: there are 6–7 topcoats in total but only 1 raincoat. Working individually, people think only of themselves and simply compare 1 topcoat to 1 raincoat; working in teams, the actual facts get compared and better decisions are made.

Can this improved performance be explained simply by the 'wisdom of crowds'? If we were to rank items by averaging the importance placed on them by each individual in a team, would the results be comparable to those reached after collaboration?

The x-axis of this graph plots the average deviation per item each team would have had if it had simply averaged its members' individual rankings. The y-axis is the actual observed average deviation (about 2.9 ranks per item on average). Each circle is a team, and teams in blue are the ones that performed better because of collaboration. Discussion of ideas, conflict between members, and the other aspects of collaboration all contribute to better decisions. Only a small percentage of teams would have been better off averaging their individual rankings, which is expected and can be explained by factors such as peer pressure, lack of diversity, and an unwillingness to 'rock the boat'.
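As a rough illustration of this 'wisdom of crowds' baseline, the sketch below averages hypothetical members' ranks, converts the averages back into a 1–15 ranking, and scores it against the expert ranking. The member rankings are invented for the example.

```python
# Sketch: the "average the individual rankings" baseline for one team.
import numpy as np

expert = np.arange(1, 16)            # expert rank for items listed in expert order
members = np.array([                 # one row per (hypothetical) team member
    [3, 5, 1, 4, 9, 2, 8, 10, 7, 6, 11, 12, 14, 15, 13],
    [6, 4, 2, 5, 10, 3, 7, 11, 8, 1, 9, 12, 13, 14, 15],
    [5, 6, 1, 3, 8, 4, 9, 12, 7, 2, 10, 11, 15, 13, 14],
])

# Average the ranks item by item, then re-rank the averages (1 = lowest mean).
mean_ranks = members.mean(axis=0)
crowd_ranking = mean_ranks.argsort().argsort() + 1

crowd_dev = np.abs(crowd_ranking - expert).mean()
print(f"Averaged-ranking deviation per item: {crowd_dev:.2f}")
# Comparing this number to the team's actual post-discussion deviation shows
# whether collaboration added value beyond simple averaging.
```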

How do team dynamics affect the effectiveness of decisions?

This graph plots each member's cognitive conformity against the accuracy of the team's submission. The x-axis is the team number, and each dot represents a team member. Dots closer to the x-axis represent people whose individual rankings were close to what the team ultimately submitted (indicating influence on the team). Two dots close to each other indicate a lack of cognitive diversity, since those members ranked items similarly. A red dot represents a low-accuracy team submission, whereas a green dot represents a high-accuracy one. Looking at the red-dotted teams, many of their members sit close to one another, indicating a lack of cognitive diversity, which leads to low accuracy in decision-making. These red-dotted submissions also cluster closer to the x-axis, suggesting that when members with similar thoughts form a team, they tend not to challenge those thoughts; conformity within the team is high, and its performance suffers from the common knowledge effect. The green-dotted submissions (teams with better performance) have members that are well spread out, indicating differences in thinking. These differences induce conflict within the team, which is healthy for making better decisions.
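The two quantities the plot appears to encode can be approximated as in the sketch below: 'influence' as a member's distance from the final team submission, and 'cognitive diversity' as the average pairwise distance between members' rankings. All rankings here are invented for illustration.

```python
# Sketch: influence and cognitive-diversity proxies for one team.
import numpy as np
from itertools import combinations

team_submission = np.array([2, 5, 1, 4, 9, 3, 8, 10, 7, 6, 11, 12, 14, 15, 13])
members = {
    "member_a": np.array([3, 5, 1, 4, 9, 2, 8, 10, 7, 6, 11, 12, 14, 15, 13]),
    "member_b": np.array([6, 4, 2, 5, 10, 3, 7, 11, 8, 1, 9, 12, 13, 14, 15]),
    "member_c": np.array([5, 6, 1, 3, 8, 4, 9, 12, 7, 2, 10, 11, 15, 13, 14]),
}

# Influence proxy: average absolute gap between a member and the final submission
# (smaller = that member's ranking dominated the team answer).
for name, ranking in members.items():
    print(name, np.abs(ranking - team_submission).mean())

# Diversity proxy: average pairwise distance between members' rankings
# (larger = more cognitive diversity within the team).
pairs = combinations(members.values(), 2)
diversity = np.mean([np.abs(a - b).mean() for a, b in pairs])
print(f"Cognitive diversity: {diversity:.2f}")
```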

Conclusion

Different teams take different approaches to reaching a decision. Some may have averaged their individual rankings. Some may have reached a conclusion quickly because of shared perspectives or social influence. Others may have engaged in task conflict before converging. These approaches are shaped by team composition, and the decision-making process has a significant impact on the final decision.

Drawbacks

  1. There is a lack of cognitive diversity in our dataset, since it contains only students from the MQM cohort. By the nature of the program and its content, MQM students are likely to be more analytical than average, reducing the cognitive diversity in our sample.
  2. We do not have sufficient data on how individuals interacted within their teams. Such data would have further strengthened our hypotheses.
  3. Since the experiment was conducted in the Critical Thinking class, it was not a blind experiment. Subjects were (consciously or unconsciously) aware of this, which may have shifted their team answers from what they would have been had this only been a team-building exercise.
  4. The case itself allowed two approaches to ranking the items: the group could decide to stay and wait for help, or set out toward the nearest shelter. While the case study nudged subjects toward the former, rankings would have differed greatly from the expert rankings had a subject decided to move toward the nearest shelter instead.

Click on this link to view the Tableau dashboards in action

Thanks!
