In the wake of Donald Trump’s victory over Hillary Clinton, pundits and politicians alike have wondered, “How did we not predict this?” Theories range from misrepresentative polling to journalistic bias to confirmation bias, fueled by the echo chambers of social media. These fervent debates about bias in politics had me reflecting on the role that bias plays in science and in R&D. Sampling bias, expectancy bias, publication bias… all are hazards of the profession, and yet science is held up against other disciplines as relatively bias-free by virtue of its data-centric approach.
Biopharma R&D has rapidly evolved over the last few years — it is more collaborative, demands greater speed to respond to competition, and challenges many notions of “conventional” drug discovery. In my reflections, I was curious whether this rapid evolution was a harbinger of biases not conventionally associated with science — and wanted to understand how we at twoXAR aim to stay aware and ahead of such biases.
The idea underpinning ownership bias is simple — “you ascribe more value to the things you own.” It is rooted in the endowment effect, a term coined by economist Richard Thaler and explored in his work with psychologist Daniel Kahneman. Dan Ariely, another scholar of behavioral economics, further elaborates on ownership bias as follows:
- Labor investment as a proxy for ownership: we are attached to the things into which we put effort, whether it’s redecorating our house or building furniture (the latter now a phenomenon unto itself — the IKEA effect)
- From physical objects to ideas: we invest so much in our idea that we are overly attached to its fate. This very article is proof of this behavior; the thought of its being edited reinforced my possessiveness over every word I wrote
Ownership bias in turn feeds many related biases, such as loss aversion and sunk cost fallacy. The latter is all too familiar in biopharma R&D, where high late-stage attrition rates are attributed to a combination of sunk cost fallacy, progression bias, and incentive structures for pipeline advancement. For this reason, Peck et al. propose implementing “quick-kill” strategies to enable project termination decisions to be made appropriately and earlier in the process.
At twoXAR, our version of “quick kill” — known as “qualify quickly” and intended to keep activities moving — has an added benefit of limiting both sunk cost and ownership bias. We submit ideas for review as we conceive them, to enable good ideas to develop and not-so-good ideas to be shelved (or refined). This way, we make room for the next “Eureka” moment as soon as possible without allowing undue attachment to or investment in any one idea to develop.
With advances in cloud computing and maturation of data science techniques, big data is increasingly important in the R&D process. Indeed, that’s how twoXAR got its start. As we refine our platform and analyze larger, more complex data sets, it’s easy to be awed by the insights that our platform generates into novel drug-disease matches. There are occasions, though, when that awe can ring false.
If you computationally analyzed the clinical records of multiple sclerosis (MS) patients, you might be stunned to “discover” that birth control causes multiple sclerosis. Your mind could race as you contemplate the possibilities, only to realize, with mild chagrin, that MS is three times more prevalent in women than in men and that men are unlikely to be taking birth control. The correlation is real, but it is driven by a confounder (sex), not by causation.
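The confounding at work here can be sketched with a few lines of simulated data (an entirely hypothetical cohort, with all prevalence numbers made up for illustration): birth control has no effect on MS in the simulation, yet a naive pooled comparison suggests a link, and the link vanishes once you stratify by sex.

```python
import random

random.seed(0)

# Hypothetical, simulated cohort — illustration only, not real data.
# MS is modeled as ~3x more prevalent in women; only women take birth
# control; birth control has NO effect on MS by construction.
patients = []
for _ in range(200_000):
    female = random.random() < 0.5
    birth_control = female and random.random() < 0.6
    ms = random.random() < (0.003 if female else 0.001)
    patients.append((female, birth_control, ms))

def ms_rate(rows):
    rows = list(rows)
    return sum(ms for _, _, ms in rows) / len(rows)

# Naive pooled comparison: birth-control users show a higher MS rate,
# because "on birth control" is really a proxy for "female".
rate_bc = ms_rate(p for p in patients if p[1])
rate_no_bc = ms_rate(p for p in patients if not p[1])

# Stratified comparison (women only): the apparent effect disappears.
rate_women_bc = ms_rate(p for p in patients if p[0] and p[1])
rate_women_no_bc = ms_rate(p for p in patients if p[0] and not p[1])

print(rate_bc, rate_no_bc)              # pooled rates differ noticeably
print(rate_women_bc, rate_women_no_bc)  # within-sex rates are similar
```

The fix in real clinical data is the same in spirit: stratify or adjust for plausible confounders before trusting an association that an automated pipeline surfaces.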
The MS–birth control link is a cautionary tale about the dangers of automation bias — the tendency to rely on automated rather than human decision-making — and it is the foundation for our drug review meetings. By collectively assessing our platform’s top-ranked candidates for a disease, we apply rigorous scientific reasoning to each drug, improve our collective understanding of the disease, and identify improvements to our computational methods. This is why we are steadfast in our conviction that big data’s role is to augment, not automate.
(This is also why we at twoXAR take a multidisciplinary perspective on drug discovery — leveraging a software-driven approach, powered by a variety of underlying methods, to analyze biological, chemical, and clinical data — to minimize false positives in our predictions of drug efficacy. Find out about it here!)
Think about the last time you bought a car. Some of you may have paid sticker price, others may have insisted on a significant discount, or — if you’re like me — you may have returned home exhausted, wondering why cars are so expensive. In all three instances, we were influenced by the first piece of information we learned about the car — its sticker price. Welcome to the anchoring bias club.
It is human nature to fixate on one trait, usually the first piece of information we learn, when making a decision. Human nature reared its head in our rheumatoid arthritis (RA) drug review, when for an instant we were convinced that our platform was doing something strange (one of the few times that automation bias was not present). One of our top novel candidates seemed improbable as an efficacious RA therapy because of the first, decidedly unfavorable, trait that came to all of our minds about this drug.
Our first instinct was to reject this drug, and it was clearly heading toward a unanimous veto… until our CEO challenged us to consider why we were making this decision. Was it because of the aforementioned rigorous scientific reasoning? Or simply because of the first trait we held to be true? It took days of research to overcome our bias, at which point we included it in our shortlist of ten candidates for preclinical studies. And lo and behold — it was one of the top three candidates to show in vivo efficacy signals, surfacing potentially novel biology driving the pathophysiology of RA.
At twoXAR, we strive every day to minimize bias throughout the discovery process. Recognizing and mitigating bias — whether in computational methods or scientific approaches, whether on our own or with our partners — has enabled us to dramatically advance our understanding of diseases, identify new ways to match drug to disease, launch ten preclinical collaborations in the past year, and generate efficacy signals in 30% of candidates in preclinical studies (compared to an industry average of 2%). By incorporating “unbiased” into our core values, we ensure that it’s front and center in all our efforts.
Perhaps you’re reflecting on occasions at work where these or other biases influenced an outcome. Perhaps you’re still wondering if mitigating bias is really that impactful to the R&D process. Or perhaps you’re simply looking for a Dan Ariely book recommendation. In any case, we’d love to continue the conversation with you.
Curious about what other biases are out there? This wonderfully informative graphic breaks it down for you.