Gladwell’s “Talking to Strangers” explains some of the failures in drug development

James Peyer
8 min read · Sep 16, 2019


Malcolm Gladwell’s new book, “Talking to Strangers,” outlines an important continuum in how we interact with other humans. While listening [1] to Gladwell unravel and support his core thesis, I was struck by a similarity between the set of three characteristics that Gladwell identifies as the challenges we face in interacting with others, and three challenges I see in the world of biotechnology and drug development. I felt like the issues in biotech are similar enough to be relevant but different enough to be worth a separate discussion through the lens Gladwell provides.

First, let’s start with a 30,000-foot overview of Gladwell’s thesis. I will have to ask his forgiveness for attempting to summarize a book whose primary thrust is “the world is nuanced and defies summary.”

Overall, Gladwell makes three successive points to develop his thesis:

  1. We operate under a “Truth Default” that makes us very likely to believe the stories people tell us. We’re good at recognizing people who are telling the truth, but we are bad at detecting liars, because we tend to assume that they, too, are telling the truth.
  2. A Truth Default posture is critical for the functioning of society, as a world full of paranoid people could never get anything done. Being deceived by a good story is the price we pay for the ability to collaborate freely. While certain people are paranoid and reject the truth default, not everyone can reject it, or the transaction costs of diligence would be too high for us to ever work together.
  3. There are specific scenarios in which we are worst at detecting both lies and truths: what he calls ‘mismatches’ between reality and presentation. He gives two examples. The false positive: Bernie Madoff’s clear, articulate, and confident demeanor despite sitting atop a web of lies. The false negative: Amanda Knox, who was wrongly jailed for four years after being found guilty of her roommate’s murder because she did not show the signs of shock and grief that society expected of her.

The central tension of the book is that we can’t have a society where people are both generally trusting in their interactions with one another and also one in which we are good at detecting untruths. And finally, we are especially bad at detecting untruths when the presentation of the untruth is swaddled in the appearance of validity.

This framework really got me thinking about the drug development world, where we take academic science through a long and complicated process of validating it, turning it into a drug, and clinically testing it in patients. For this industry, the central question is: why are we so bad at predicting failures in clinical trials?

The most common answer to this question is, I believe, a cop-out: that animal and cell culture models of disease are too inexact an approximation of human disease for us to know whether or not a drug will work in the complexity of the human body. My business is to evaluate the science that gets chosen to be turned into a drug, and this explanation rings hollow to me, as I see so many scientific works receive funding to be turned into drugs despite (in my opinion) lacking appropriate preclinical evidence.

Instead, I think we can use three parallels from Gladwell’s framework to understand the problem differently.

1. Truth Default Theory in Biotech

Because of “Truth Default Theory,” we find ourselves in a situation where academics, drug developers, and pop science writers routinely exaggerate the conclusions drawn from data produced in the laboratory (and even in human clinical trials). This regrettable situation is caused by a perverse incentive structure pulling at each of these groups: they must constantly generate exciting new breakthroughs, build momentum for their companies’ lead programs, and report those breakthroughs as soon as possible, spending as little time and money on validating the science as they can get away with (for the sake of capital efficiency!).

To me, this tension is best highlighted by a report published in Nature in 2012, in which Amgen attempted to reproduce 53 ‘landmark’ studies of novel anti-cancer approaches published in high-impact journals, and succeeded in seeing the same effects in only 6 [2]. That means 89% of these cancer studies describing new drugs or approaches did not stand up to outside validation, yet upon publication we assumed that they were true, or at least directionally true. Importantly, the report suggested that the problem was not the unreliability of animal or cell culture models of cancer, but the lack of rigor with which the original authors tested their hypotheses.
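(The 89% figure follows directly from the two numbers in the Nature report; a quick back-of-the-envelope check:)

```python
# Back-of-the-envelope check on the Amgen reproduction numbers from [2]:
# 53 landmark studies attempted, only 6 reproduced.
attempted = 53
reproduced = 6

failure_rate = (attempted - reproduced) / attempted
print(f"{failure_rate:.0%} of studies failed outside validation")
# → 89% of studies failed outside validation
```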

2. We work together to do something that really matters

This posture of truth assumption toward scientific data is not an unreasonable stance to take, because on one side of the continuum we have the tragedy of unreproducible clinical and preclinical results, and on the other we have the calamity of doing nothing new. More than 100,000 people around the world die from diseases like cancer, heart disease, and Alzheimer’s EVERY DAY. I see it as a moral obligation to do something about this suffering and death, as do many of my colleagues.

It takes a village to build a drug: partnering with a scientist and a university to spin the science out into a startup, raising investment from people who share your vision of trying something new, building a large team of chemists and clinicians to take the drug forward, and then working with regulators and pharma companies to bring the drug to patients. If, as Gladwell suggests, there is a correlation between getting along well with others and failing to detect ‘lies’ [3] in the people around us, then we can understand why most successful scientists and drug developers, the ones who can pull so many people together around their vision and cause, are not the paranoid loners whom Gladwell believes are predisposed to be natural lie detectors.

3. Science is full of accidental mismatches

Finally, there is the issue of mismatching. We are worst at detecting the truth when there is a mismatch between a fact and its presentation: when we receive false information in a situation where we are confident of its truth. I have never observed an ‘intent to deceive’ among the scientists I know, even when their results are later shown not to have been tested rigorously enough. Ironically, I realize that I accept this on the basis of trusting scientists by default, but let’s set that aside for now.

If we assume that most scientists are trying to do a good job: publishing something interesting, then slightly over-claiming their data so it will get into a high-impact journal, then cutting corners on an experiment that would take an extra six months to perform because the extra rigor won’t help that scientist’s career, then we have a dangerous situation.

This isn’t to say the scientist in the scenario above has done anything “wrong.” If you’ve ever been a scientist, you know that the pressure to publish or perish, to consistently come up with and test novel, groundbreaking observations as rigorously as possible, is a Sisyphean task. It can seem like no study could ever be supported sufficiently. Sometimes it’s more important to get an idea out into the world with the data you have than to sit on it for years in an echo chamber trying to get everything just right.

Instead, the danger comes because the people who did the work, acting primarily in good faith, must present themselves as confident about that work in order to get published, get grants, or get press coverage. With enough of this, many inadvertently drink their own Kool-Aid: they are brilliant scientists who do rigorous work and good experiments; these data were the result of just such rigorous, brilliant work; therefore, there is a reasonable expectation that the experiments will turn out the same way in the future and when tested from a different angle. The researchers are sitting on a lie that they don’t know is a lie [3], the sort of mismatch that we humans find very hard to detect. And the drug developers on deck to take such hypotheses to the next stage are often all too willing to trust.

It sometimes feels like drug development is just a lot of empty promises. Photograph by Haley Lawrence

So what?

Taken together, what conclusions can we draw from this framework?

First, and most importantly, it’s not the “fault” of individual scientists or drug developers that we have so many failed clinical trials. Drug development is a business that requires exceptional team play, and truth-default behavior is almost a sine qua non for success in the space. If you’re going to assemble a team of people to work ten years or more to build a dream of treating patients, you’re necessarily leaving out the paranoid loners who trust no one.

Secondly, if what I’ve outlined above is true, or at least more true than the alternative hypothesis (that preclinical models are all necessarily terrible approximations of human disease), then this is an incredibly optimistic realization! It means there are things we can do scientifically to reduce the failure rate of clinical trials and better allocate our limited resources to test more new drugs, accelerating our ability to deliver cures to patients.

So what must we do to perform better? I have two observations that characterize the best drug development organizations I know — groups that routinely bat way above average on transitioning preclinical hypotheses to clinical validation.

  1. Understand the reality of truth default bias and be wary of it, but remain friendly by definitionally separating “academic truth” from “development truth,” and make that difference okay for all parties. Get justifiably excited and trusting about a piece of data, and think, “As an academic publication this is wonderful — the first step towards a completely new way of targeting a disease!” and then follow that up with, “Now what can we do together to bombard this hypothesis in a few other models that even more closely represent the human disease state, now that you’ve generated the first data that gives us the rationale to do so?”
  2. Grow a culture within your organization of spending more time on preclinical models that test a hypothesis from multiple angles and in ways that most closely mimic human disease and treatment modalities, even when doing so is hard and slow. Because there is almost nothing worse for a drug development effort than failing in phase 2 or 3 clinical trials.

By living these two principles, I think we may be able to build teams of people that better straddle the central tension in biotechnology — working well with others while maintaining a healthy dose of friendly paranoia!

FOOTNOTES

  1. This book, by the by, is truly enhanced in its audiobook form; it appears to me that Gladwell intended audio to be the primary way of experiencing the book, as opposed to the usual counterfactual. [ https://www.amazon.com/Talking-Strangers-Should-About-People/dp/B07NJCG1XS/ref=tmm_aud_swatch_0?_encoding=UTF8&qid=&sr=]
  2. https://www.nature.com/articles/483531a
  3. I fully recognize that ‘lie’ is a charged word in this context, so let me clarify that by ‘lie’ I mean very specifically: that one party believes that a hypothesis about a new therapy is well supported and deserving of clinical testing, but in fact further scrutiny absent ‘truth default’ can find ample reasons to believe the hypothesis is insufficiently supported.
