Growth Mindset: The Perils of a Good Research Story

Jay Lynch
Aug 17, 2018

Psychology is filled with compelling stories about the human condition; fantastic tales of the unconscious factors that influence our behaviors and the hidden attributes that subtly shape our lives.

Consider several research stories that have captured the public imagination:

  • Reading words associated with the elderly causes us to walk more slowly (Bargh et al., 1996).
  • Striking certain poses causes people to instantly become more powerful (Carney et al., 2010).
  • Exposure to first-class passengers increases antisocial behavior among flyers (a.k.a. “air rage”) (DeCelles & Norton, 2016).

These stories, and countless others, make for satisfying and surprising explanations of our world. The worry, however, is that a good research story can tempt us into letting down our intellectual guard. And unfortunately, like many findings in the social sciences, subsequent research has revealed that the tantalizing stories listed above are likely fantasy, their initial support eroding under scrutiny.

And this is good.

The role of science, it is widely believed, is to help separate fiction from non-fiction when it comes to tales researchers tell. Early research studies are viewed as provisional rough drafts — full of hyperbole and whimsy — that are slowly and methodically revised, through subsequent experimentation and replication, to more accurately reflect reality.

Or at least that’s the story we’re led to believe.

But what if this popular view is itself merely a comforting fable? What if in many areas of research — in fields like psychology and education — rather than contributing to a growing novel that increasingly illuminates the truth, researchers are instead producing loosely related collections of short stories that fail to address the key claims of the original plot? And as a result, despite the steady accumulation of ‘positive findings’, the impressive body of research claimed to support a popular theory is fundamentally incapable of validating its core claims.

In many popular areas of research, I would argue this description is more fact than fiction.

Setting the Stage

Consider the widely influential theory of growth mindset.

For those unfamiliar with the theory (get out from under that rock!), Stanford psychologist Carol Dweck and her colleagues have found that students differ in their views about the malleability of intelligence — whether it is an attribute that is fixed and unchanging or able to be grown with effort. These differences in belief are claimed to meaningfully affect student achievement by influencing learner goals, responses to academic setbacks, and attributions of effort.

Dweck and colleagues have thus explored the possibility of changing student beliefs about intelligence as a way to improve academic success, investigating the educational impact of interventions designed to move students from a ‘fixed mindset’ to a ‘growth mindset’. And in response to claims of repeated experimental successes, growth mindset interventions have been widely adopted in U.S. schools and the idea of growth mindset has saturated the educational community.

The massive popularity of growth mindset theory in education is largely attributable to the fact that it makes for an extremely compelling and satisfying research story. It’s an underdog story about how belief in the value of “hard work and discipline contribute much more to school achievement than IQ does” (Dweck, 2007), about the power of simple changes in mindset to “narrow the achievement gap” (Rattan et al., 2015), and about how subtle modifications in how children are praised can produce strikingly large improvements in achievement (see Mueller & Dweck, 1998).

In contrast to this sanguine tale, however, a close look at the body of research offered in support of the efficacy of growth mindset interventions reveals a troubling absence of experimental coherence. In fact, it’s reasonable to question whether there is any clear theory under investigation in these studies at all.

The Experimental Tale of Growth Mindset

Just to be clear at the outset, I’m not taking a pro/con position on the efficacy of growth-mindset interventions. It’s certainly plausible that growth-mindset interventions have a meaningful learning impact for some students in certain learning situations.

Rather, I want to argue a different point: that the research agenda employed by growth mindset proponents has been inadequate to substantiate or contribute to any real understanding of the efficacy of growth mindset interventions. And as a result, the collection of research studies repeatedly offered by proponents to legitimize the effectiveness of growth mindset interventions does nothing of the sort.

To make this case, we need to briefly review the history of growth mindset intervention research. This will obviously be an abridged tale, but it captures the key plot twists.

In the late ’90s, researchers publish a finding that a simple intervention — a couple of words praising students for ‘working hard’ rather than for ‘being smart’ — produces implausibly massive differences in learner performance on a non-school-related intellectual task (Mueller & Dweck, 1998).

It’s a great story.

Roughly five years later, several small studies are published that investigate a related idea: whether teaching stereotyped students that intelligence is malleable improves performance. Findings are positive but highly uncertain. Researchers also find puzzling results that seem to conflict with the basic premise of mindset theory. For instance, they find that increased belief that intelligence is malleable is negatively associated with students’ later GPA (e.g., Aronson et al., 2002; Good et al., 2003).

Jump ahead a couple of years and several more studies investigating growth mindset interventions appear. However, these studies focus on new outcomes, interventions, and sub-groups. More noisy data and smaller effects are found. Researchers again brush aside findings that appear to run counter to the implied underlying theory (e.g., evidence suggesting mindset interventions don’t significantly help students who have a fixed mindset more than students with a growth mindset) when telling their stories (e.g., Blackwell et al., 2007).

Based largely on this small number of disparate experimental studies, growth mindset gains traction in education circles, encouraged by Dweck and others who extol the power of changing students’ mindsets as a way to improve academic achievement (e.g., Dweck, 2007; Dweck, 2006).

Nearly a decade later, a new wave of growth mindset research is conducted. Again, the studies introduce new interventions, employ different analyses, and measure novel outcomes. For instance, in one case researchers find a benefit for growth mindset interventions only when combined with another intervention type. In another study, the authors take superior performance on the first 10 questions of a 34-question algebra test as evidence of growth mindset intervention efficacy even though overall performance differences showed no significant effect. Research findings remain noisy, and the effects are now much smaller (e.g., Bettinger et al., 2016; Paunesku et al., 2015; Yeager et al., 2014; Yeager et al., 2016).

Despite these apparent successes, several efforts by outside research groups to replicate the findings of previous growth mindset studies are unsuccessful (e.g., Li & Bates, 2017; Rienzo et al., 2015) and there is growing criticism directed at mindset research for various statistical problems and questionable research practices.

Faced with these challenges, growth mindset proponents respond to the scrutiny by appealing to the complexity of the underlying phenomenon. In particular, they suggest that growth mindset interventions likely only work under certain conditions, with highly developed materials, and for certain types of students. They also suggest that it is unlikely that non-experts can successfully deliver growth mindset interventions themselves.

This appeal to contextual factors and experimenter expertise mirrors the reaction of social psychologists who responded similarly when the legitimacy of priming research came under scrutiny several years ago. As the famed scientist Daniel Kahneman wrote:

“priming effects are subtle and their design requires high-level skills. I am skeptical about replications by investigators new to priming research, who may not be attuned to the subtlety of the conditions under which priming effects are observed, or to the ease with which these effects can be undermined”

Despite these appeals, things haven’t turned out so well for priming research.

This is pretty much where the growth mindset intervention story stands now.

Dude, Where’s My Evidence?

As noted above, growth mindset proponents have responded to the increasing skepticism by suggesting they simply need to conduct additional and larger studies to better understand the subtleties of their theory. But surely it’s reasonable to ask what prior studies have ostensibly revealed about the efficacy of growth mindset interventions. That is, how much do we really know after all the experimental work outlined above?

Or perhaps more to the point, what exactly is the theory these researchers claim to be so diligently refining and validating through their research activities?

This is a critical question because the desultory nature of the experimental analyses and outcomes found in previous studies is striking. In one study, researchers report the effects of an 8-week malleability training on half-semester math grades, quickly brushing off evidence that the intervention doesn’t seem to help students with a fixed mindset more than those with a growth mindset. In the next, they report the effects of combining a single 45-minute online growth-mindset course with a completely different sense-of-purpose intervention, isolating their analysis to at-risk students.

Each experiment involves wildly disparate choices in how data is collected, analyzed, and measured. Available studies are a dizzying hodgepodge of experiments that are all conceptually related but rely on a growth mindset hypothesis that is so vague and open-ended (e.g., “Growth-mindset interventions improve academic achievement.”) that the researchers are able to claim victory for any observed effect regardless of learning outcome, analyzed subgroup, intervention type, proposed mechanism, or measured impact. The result is that claims of statistical significance in most of these studies are essentially meaningless.

The growth mindset literature is basically a menagerie of underpowered exploratory studies masquerading as a cumulative and confirmatory research program.

And serious statistical and methodological concerns further diminish the luster of the growth mindset story. For instance, I analyzed the results reported in the half-dozen papers repeatedly cited as providing strong evidence of the efficacy of growth mindset interventions using a technique that estimates the statistical replicability of a set of studies. The results reveal a conservative estimate of true power (i.e., the probability of finding an effect of the magnitude researchers report) across these studies of less than 35%. Given that nearly 100% of the statistical tests in these papers were claimed to have reached significance, this suggests reported results have substantially capitalized on chance and/or employed questionable research practices. It also suggests that efforts to replicate these large effects are unlikely to be successful and that some previous results may have been false positives.
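The logic of such a replicability estimate can be sketched in a few lines. This is an illustration of the general technique, not the author’s actual analysis: each study’s reported test statistic is converted into post-hoc “observed power” for a two-sided z-test, and the median is taken across studies. The z-values below are invented for demonstration.

```python
from statistics import NormalDist, median

def observed_power(z, alpha=0.05):
    """Post-hoc power of a two-sided z-test given the observed z-statistic."""
    nd = NormalDist()
    crit = nd.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = .05
    return (1 - nd.cdf(crit - z)) + nd.cdf(-crit - z)

# Hypothetical z-statistics, stand-ins for values recovered from
# reported p-values in the cited papers (illustrative only).
z_values = [2.1, 2.0, 2.3, 1.9, 2.5, 2.2]

powers = [observed_power(z) for z in z_values]
print(f"median observed power: {median(powers):.2f}")
```

The diagnostic: if nearly every published test is significant but the median observed power is well below 100%, the literature contains more “successes” than its power could plausibly deliver.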

This judgment is further bolstered by the existence of a strong negative relationship between effect size and sample size in the growth mindset intervention studies cited in the previous section (see Figure), with the largest (and likely most accurate) estimate reporting an effect that hovers precariously near zero. Given that measures of effect size are theoretically independent of sample size (see Cheung & Slavin, 2016), this relationship gives additional credence to the belief that the literature on growth mindset interventions is marred by publication bias and/or questionable research practices (for a similar critique of religious priming research, see van Elk et al., 2015).

Despite these concerns, proponents continue to encourage the perception that there exists robust experimental support for the efficacy of growth mindset interventions.

When Storytelling Usurps Science

My intention is not to single out growth mindset. In fact, my critique isn’t really about growth mindset at all. Dweck and colleagues are simply doing what most social scientists do. Theories in the social sciences are typically so weak that they can only predict statistically noteworthy differences — irrespective of magnitude, mechanism, or form — which means corroboration of these theories means very little.

Theories are so empirically promiscuous that researchers can find support for them in virtually any experimental data they obtain.
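The problem with predicting only “some nonzero difference” can be made concrete: with enough data, almost any real-world difference reaches significance. A toy calculation, assuming a simple two-sample z-test and a negligible standardized effect of d = 0.05 (both numbers chosen for illustration):

```python
from math import sqrt
from statistics import NormalDist

def p_two_sample(d, n_per_group):
    """Two-sided p-value for a standardized mean difference d between
    two groups of n_per_group each (simple z-test approximation)."""
    z = d * sqrt(n_per_group / 2)
    return 2 * (1 - NormalDist().cdf(abs(z)))

# A trivially small 'true' effect is non-significant in small samples
# but becomes 'statistically noteworthy' once n is large enough.
for n in (100, 1000, 10000):
    print(f"n per group = {n:>5}: p = {p_two_sample(0.05, n):.4f}")
```

A theory that only predicts a directional difference is thus nearly guaranteed corroboration at scale, which is exactly why such corroboration tells us so little.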

The eminent psychologist/methodologist Paul Meehl described this problem 50 years ago, and it’s eerie how closely the experimental tale of growth mindset (and many other areas of the social sciences) mirrors the problematic behaviors of researchers active during Meehl’s time; behaviors that he eloquently described as involving

“zealous and clever investigator[s] slowly wending their way through a tenuous nomological network, performing a long series of related experiments which appear to the uncritical reader as a fine example of an ‘integrated research program,’ without ever once refuting or corroborating so much as a single strand of the network.” (Meehl, 1967)

Again, my criticism in this post should not be taken to imply that I think students’ mindsets don’t matter or that growth mindset interventions can’t impact student achievement. Rather, I am arguing that the evidence in support of growth mindset interventions is not as strong, the effects not as large, and our understanding not as complete as the theory’s proponents have suggested.

And I fault the researchers and others who overstate the evidence in support of growth mindset interventions, who launder the massive uncertainty endemic to what is essentially a collection of exploratory studies, and who mislead non-specialists and educators into thinking that the theory of growth mindset is a research success story rather than the early napkin scribbles of a still-fuzzy plot.

The consequence of all this researcher storytelling has been considerable time, effort, and money spent by schools and other organizations on various growth-mindset intervention programs — spending that Dweck herself has now acknowledged was likely in vain.

Sometimes a research story really is too good to be true.

References

Aronson, J., Fried, C. B., & Good, C. (2002). Reducing the effects of stereotype threat on African American college students by shaping theories of intelligence. Journal of Experimental Social Psychology, 38(2), 113–125.

Bargh, J. A., Chen, M., & Burrows, L. (1996). Automaticity of social behavior: Direct effects of trait construct and stereotype activation on action. Journal of personality and social psychology, 71(2), 230.

Bettinger, E., Ludvigsen, S., Rege, M., Solli, I., & Yeager, D. (2016). Increasing Perseverance in Math: Evidence from a Field Experiment in Norway, 1–23.

Blackwell, L. S., Trzesniewski, K. H., & Dweck, C. S. (2007). Implicit theories of intelligence predict achievement across an adolescent transition: A longitudinal study and an intervention. Child Development, 78(1), 246–263.

Carney, D. R., Cuddy, A. J., & Yap, A. J. (2010). Power posing brief nonverbal displays affect neuroendocrine levels and risk tolerance. Psychological Science, 21(10), 1363–1368.

Cheung, C. K., & Slavin, R. E. (2016). How methodological features affect effect sizes in education. Educational Researcher, 45(5), 283–292.

DeCelles, K. A., & Norton, M. I. (2016). Physical and situational inequality on airplanes predicts air rage. Proceedings of the National Academy of Sciences, 113(20), 5588–5591.

Dweck, C. S. (2006). Mindset: The new psychology of success. Random House Digital, Inc.

Dweck, C. S. (2007). Raising Smart Kids. Scientific American Mind, 37–43.

Gelman, A., & Loken, E. (2013). The garden of forking paths: Why multiple comparisons can be a problem, even when there is no “fishing expedition” or “p-hacking” and the research hypothesis was posited ahead of time. Department of Statistics, Columbia University.

Good, C., Aronson, J., & Inzlicht, M. (2003). Improving adolescents’ standardized test performance: An intervention to reduce the effects of stereotype threat. Journal of Applied Developmental Psychology, 24(6), 645–662.

Li, Y., & Bates, T. C. (2017). Does mindset affect children’s ability, school achievement, or response to challenge? Three failures to replicate. SocArXiv Preprint. Retrieved from: https://osf.io/preprints/socarxiv/tsdwy/

Meehl, P. E. (1967). Theory-Testing in Psychology and Physics: A Methodological Paradox. Philosophy of Science, 34, 103–115.

Mueller, C., & Dweck, C. (1998). Praise for intelligence can undermine children’s motivation and performance. Journal of Personality and Social Psychology, 75(1), 33–52.

Paunesku, D., Walton, G. M., Romero, C., Smith, E. N., Yeager, D. S., & Dweck, C. S. (2015). Mind-Set Interventions Are a Scalable Treatment for Academic Underachievement. Psychological Science, 26(6), 784–793.

Rienzo, C., Rolfe, H., & Wilkinson, D. (2015). Changing Mindsets. National Institute of Economic and Social Research, Education Endowment Fund, 1–51.

van Elk, M., Matzke, D., Gronau, Q. F., Guan, M., Vandekerckhove, J., & Wagenmakers, E.-J. (2015). Meta-analyses are no substitute for registered replications: a skeptical perspective on religious priming. Frontiers in Psychology, 6, 1365.

Yeager, D. S., Johnson, R., Spitzer, B. J., Trzesniewski, K. H., Powers, J., & Dweck, C. S. (2014). The far-reaching effects of believing people can change: implicit theories of personality shape stress, health, and achievement during adolescence. Journal of Personality and Social Psychology, 106(6), 867–884.

Yeager, D. S., Romero, C., Paunesku, D., Hulleman, C. S., Schneider, B., Hinojosa, C., … Dweck, C. S. (2016). Using design thinking to improve psychological interventions: The case of the growth mindset during the transition to high school. Journal of Educational Psychology, 108(3), 374–391.
