Synthesizing Knowledge: Dealing with Bias & Asking the Right Questions

Interview with Dr. Ahmed Abou-Setta

CHI KT Platform
KnowledgeNudge
10 min read · Oct 17, 2018


By Trish Roche

Kate previously wrote a post on how knowledge synthesis is core to knowledge translation, in which she discusses the key components of systematic reviews and provides a brief description of scoping reviews. In this post, we thought we would dive a little deeper into the world of knowledge synthesis by interviewing Ahmed Abou-Setta (MD, PhD), Director of the George & Fay Yee Centre for Healthcare Innovation (CHI) Knowledge Synthesis platform. He has trained in Egypt, the Netherlands, and Canada (just to name a few locations).

Everyday Choices & the Need for Unbiased Evidence

One of the many things Ahmed learned from his mentors was the speed with which medicine changes — the discovery of new techniques and interventions means that physicians have to be continuously learning, growing, and keeping up-to-date with the latest evidence (so having it readily available and vetted, through evidence synthesis, is important). He also learned that “there is a lot of anecdotal evidence, misinformation, and bias out there.” For example, he says, “we’ve always been taught that older people know more,” but in medicine you may have residents who’ve started their training later in life. “In the eyes of patients, [these residents] look like they’ve been in the field a long time” and are assumed to know more, even if they’re the most junior person in the room.

“What we believe to be true (e.g. an intervention or drug being effective) and most things we do on a daily basis that we believe are beneficial (i.e. individual activities like cleanses, yoga, drinking 8 glasses of water, etc.) — I would say 99% of the time, there is no conclusive evidence to support our beliefs.” It’s often just what we’ve been taught, what we’ve seen limited (and often anecdotal) supporting evidence for, or something otherwise rooted in our own internal biases (as Pat discussed in a previous post about nudging). In fact, some of our daily activities may actually be counterproductive or even harmful — Ahmed brings up the example of the excessive use of anti-bacterials, and how “more is not always better.” There is a lot that we don’t know, and “what we consider to be the truth often becomes very muddy, especially when you start adding to it the power of marketing and persuasion” (Editor’s note: think along the lines of Goop and detox diets).

Even for trained methodologists, who regularly synthesize evidence, these biases can be difficult to deal with. As Ahmed tells us, it’s not as simple as saying “if I use this algorithm, if I use this model, then it’ll automatically eliminate bias — it goes back to the old saying, ‘garbage in, garbage out’” — biased evidence creates biased results. Ahmed uses the example of Amazon’s attempt to create an AI (artificial intelligence) recruiting tool that turned out to be systematically biased against women, since it was based on patterns in resumes submitted to Amazon over a decade — most of which were from men. The problem, according to Ahmed, is that “we’re giving it biased information to begin with — all [AI] is doing is processing the same data over and over, so they’re creating the same bias on a much larger scale”. So we don’t need more information; rather, we need unbiased information to come to correct conclusions.
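To make “garbage in, garbage out” concrete, here is a minimal Python sketch (entirely hypothetical, and far simpler than any real recruiting system): a naive model that estimates hiring rates from a biased historical record will faithfully learn, and then automate, that same bias.

```python
import random

random.seed(42)

# Hypothetical historical hiring records: both groups are equally
# qualified, but past (human) decisions strongly favoured group "A".
def biased_history(n=10_000):
    records = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        qualified = random.random() < 0.5  # same qualification rate in both groups
        hire_rate = 0.9 if group == "A" else 0.3  # the human bias
        hired = qualified and random.random() < hire_rate
        records.append((group, hired))
    return records

# Naive "model": estimate P(hired | group) straight from the record.
def fit(records):
    return {
        group: sum(h for g, h in records if g == group)
               / sum(1 for g, _ in records if g == group)
        for group in ("A", "B")
    }

model = fit(biased_history())
print(model)
# ~{'A': 0.45, 'B': 0.15}: the model hasn't learned who is qualified;
# it has learned (and will now apply) the historical bias.
```

The point is not the arithmetic but the failure mode: the model optimizes fidelity to its training data, biases included, so scaling the model up scales the bias up with it.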

Making things even more challenging is that reviewers can’t actually measure bias — only the risk of bias. The example he uses is a hypothetical situation in which a researcher is paid by a company to test one of their products. As Ahmed explains, “you may assume there is a huge bias there — and there’s definitely a huge risk of bias”, but you can’t actually know whether that researcher is being completely objective; is biased due to their competing interest; or even if there is a negative bias — perhaps the researcher has decided that because they’re being paid by the company, they want to guard themselves so much against bias that they become biased in the opposite direction — against the company that they’re working for.

“That’s where you try to be as objective as possible, and try to assess the risk of bias.”
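In practice, systematic reviewers assess risk of bias domain by domain rather than as a single score. The sketch below is a loose, simplified illustration in the spirit of domain-based tools such as Cochrane’s RoB 2; the domain names are paraphrased, the real tool uses detailed signalling questions, and the “worst domain wins” rule here is a simplification.

```python
# Toy risk-of-bias summary, loosely modelled on domain-based tools
# like Cochrane's RoB 2. Domain names are paraphrased; the real tool
# uses detailed signalling questions, not a one-line rule.
JUDGEMENTS = ("low", "some concerns", "high")

def overall_risk(domains: dict[str, str]) -> str:
    # Simplification: a study is only as trustworthy as its worst domain.
    return max(domains.values(), key=JUDGEMENTS.index)

trial = {
    "randomization process": "low",
    "deviations from intended interventions": "some concerns",
    "missing outcome data": "low",
    "measurement of the outcome": "low",
    "selection of the reported result": "high",  # e.g. funder-influenced reporting
}

print(overall_risk(trial))  # -> "high": we flag the *risk* of bias, never bias itself
```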

Trusting Evidence & Making Decisions

So how can knowledge synthesis help?

“Knowledge synthesis tries to provide an unbiased account of what is the truth. A lot of the work we do is trying to remove the layers of bias from research, to identify what we know, what we don’t know, and if we know something, how much we actually believe it.” Essentially, Ahmed tells me, the answer to any synthesis question fits into one of three categories:

  1. We don’t know;
  2. We know, and we are confident about what we know;
  3. We have an idea, but the evidence is pretty balanced on either side of the issue.

The majority of synthesis results fall under category #3. Ahmed explains that this is why common perceptions of science are constantly changing (e.g. cholesterol is bad one day, and not so bad the next). He says, “it’s a moving target, because the evidence itself is often conflicted by so many biases, new evidence is being added, and the answer can keep changing.” Nonetheless, Ahmed believes it’s important to acknowledge to ourselves, our patients, and the public that the evidence exists, even if we don’t trust it.

However, he acknowledges that “for decision-makers, it’s often not that simple” to wait for a concrete answer before acting. “They have to make a decision one way or the other, they have to choose, and that makes their lives more difficult. From the knowledge synthesis perspective, we just try to show them everything we know about the evidence and what biases or other factors could be affecting it, but ultimately they have to make those decisions.” Ahmed is of the mindset that “people who work in knowledge synthesis should not be making conclusions, or conclusive decisions, simply from the evidence they are synthesizing.” This is (at least in part) due to the many additional factors that go into healthcare decisions, such as patient preferences, religious reasoning, political reasoning, availability, cost, cost-effectiveness, etc. Essentially, Ahmed says “our role is to present the evidence, but it’s not our place to make a decision for others about how they use that evidence” — that’s where a larger team of experts and stakeholders comes in — and subsequently knowledge translation, to help determine how to deliver those messages.

The Rise of the Systematic Review

Systematic reviews are touted as the ‘gold standard’ of evidence, upon which important decisions (such as those in healthcare) should be based. But that wasn’t always the case: “if we go back to the 1970s, 80s, and 90s, the majority of top research included editorials, expert opinions, and consensus panels.” The way Ahmed describes it is, “in 1980, a professor was doing a narrative review, and in 2020, he’s doing the same review, and calling it a systematic review”. According to a small project Ahmed worked on a few years earlier, by 2014, “we were actually publishing more systematic review publications than randomized controlled trials” in PubMed.

As Ahmed tells me, “everyone — knowing or not knowing — has jumped on the bandwagon of systematic reviews and extensions of it like meta-analyses, network meta-analyses, and so on — with or without the training that they need, with or without the rigour, with or without an understanding of the biases that they will be working with. We are seeing a lot of research [publications] coming out that are claiming to be systematic reviews, that have poor methodology, they don’t follow the expected research rigour that a review or synthesis in general should go through — and maybe even worse”.

Part of the problem, he says, is that “there’s a lot of pressure by senior researchers and supervisors that their students and residents need to undertake systematic reviews. For example, some supervisors will tell their students ‘you’re here for a year, I want you to do six systematic reviews’. Well, obviously you don’t know what a systematic review is if you want them to do six in one year. You can go to the library and do a literature search, or a report in a month or two, but you aren’t going to [be able to complete] a proper systematic review in two months unless you really know what you’re doing.”

Asking the Right Question

According to Ahmed, the biggest problem (and one that the Knowledge Synthesis platform is very used to dealing with) is asking the right question. Decision-makers and politicians have agendas (such as decreasing costs, wait times, etc.) and are looking for specific results. Often though, the question they come up with won’t actually give them the answers they need. “Working with people to understand what the question is can often be extremely difficult.”

For example, a client may come in and say they want to know which of two drugs is more effective. But what they’re really looking for is the best option for their specific situation — which isn’t just about drug efficacy, but needs to account for other factors like cost-effectiveness, availability, side effects, and compliance. The example Ahmed likes to use in his introductory Knowledge Synthesis lectures is this scenario: “Your grandmother falls and breaks a hip — she goes to the hospital and has surgery, and you go to visit her in the hospital, and you notice she’s in pain. The nurse says ‘well, that’s normal for her age’, but you’re not convinced of that, so you want to do research — what’s the question that you ask? It may sound like a very simple question, something like ‘what’s the best pain medication?’, but that’s actually a really bad question for an evidence synthesis.”

He explains further, “there are so many different aspects — what age are we looking at? Are we concerned with the sex of the individual? Inside or outside the hospital? What about the site of pain? Side effects? What drugs are available in that particular hospital? Outpatient or inpatient? What about delirium, other co-morbidities? — there are so many aspects that go into answering that question, because it can go in so many different directions. If you ask a very broad question, you’re going to get a very broad answer. In the end, you want to know ‘what should be done for my grandmother?’, which depends on her specific context — contraindications, history that needs to be taken into account, etc.”.

Ultimately, Ahmed says, “it’s better to start with a very, very specific question and you can keep widening as much as you want to the point where the answer you get is [both] reasonable and useful in your specific situation”.

Effective Evidence Synthesis

I asked Ahmed how researchers and other clients who are looking to conduct evidence syntheses should approach the process to get the best possible outcome.

When someone approaches the Knowledge Synthesis platform, he poses questions to stimulate thought on points that they’ll discuss in an in-person consult. These include questions like “what population are you considering?” and “what’s the setting/context of interest?” (Editor’s Note: interestingly, these are also the first two questions we ask in KT, as described by Carly in her blog about writing KT into a grant application). For reviews of interventions, he’ll ask clients to think about PICOTSS (population, intervention, comparator, outcomes, timing, setting, and study design) criteria. However, many of the reviews the Knowledge Synthesis platform works on are concerned with predictors, associations, models of disease, or cost-effectiveness, rather than just interventions. In any case, before considering a review, researchers should ask themselves “what research has been done before? What has been done well, and what hasn’t? What should have been done differently?” to begin to critically analyze the existing research in the field and work towards building their question.
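As a rough illustration of how PICOTSS criteria narrow a vague question into an answerable one, here is a minimal sketch; the field values are invented for the hip-fracture scenario from earlier, not drawn from an actual review protocol.

```python
from dataclasses import dataclass

@dataclass
class PICOTSS:
    """A review question structured by PICOTSS criteria."""
    population: str
    intervention: str
    comparator: str
    outcomes: list[str]
    timing: str
    setting: str
    study_design: str

# "What's the best pain medication?" narrowed into something reviewable
# (all values are illustrative, invented for the grandmother scenario):
question = PICOTSS(
    population="adults over 75 recovering from hip-fracture surgery",
    intervention="opioid analgesics",
    comparator="non-opioid analgesics (e.g. acetaminophen, NSAIDs)",
    outcomes=["pain scores", "delirium", "adverse events"],
    timing="first two weeks post-surgery",
    setting="inpatient (acute care)",
    study_design="randomized controlled trials",
)
print(question)
```

Each field is also a widening lever: relax the population or the setting and the question broadens, which is exactly the stepwise widening Ahmed recommends.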

The Future of Knowledge Synthesis

As the Knowledge Synthesis platform is part of CIHR’s Strategy for Patient-Oriented Research (SPOR) Manitoba SUPPORT Unit, current reviews are focused on healthcare and patient-oriented research. But the scope of their work is continually expanding to look more broadly at the evidence, including things that may not be directly related to healthcare, such as animal models, epigenetics, and other types of research that may not be specific to diseased populations. Nonetheless, Ahmed says, “the principles remain the same.”

Since my background is in pre-clinical research with animal models, I couldn’t help but probe a little further about this type of work. Ahmed tells me that review methodologists have realized that in terms of standards, compared to healthcare research, synthesis work in animal models is “very far behind.” But he’s hopeful.

“We already know the pathway — we have better methods, we know how to report better, and how to analyze studies better. We can use what we know in animal models, we can determine which animal models are best to transfer findings to humans.” While not directly related to healthcare, these models are critical to discovering the underlying mechanisms of disease and the early development of interventions to be studied in the clinical context.

Knowledge synthesis is spreading out to other areas as well — looking at occupational health and safe work practices (back pain from sitting all day, anyone?), sociology, and even information technology — because it provides “a much less biased view of the evidence than we are traditionally used to.”

The Rewards of Reviews

Some of the work the Knowledge Synthesis platform has done has had major impact, including their work with hydroxyethyl starch (HES), a former treatment for massive fluid loss (e.g. from hemorrhaging). The review team (led by Ryan Zarychanski, former director of the Knowledge Synthesis platform) determined that HES actually increased the risk of death and kidney damage. Their findings resulted in a black box warning from the US Food and Drug Administration, calls from the European Medicines Agency to remove HES from the market, and a warning from Health Canada about its use in critically ill patients with certain health conditions.

I asked Ahmed what the most rewarding part of his job is (aside from a paycheque). He told me that “when you work with a patient, you can affect one life. With knowledge synthesis, you can affect many more lives, on a much larger scale.”

Need help with your evidence synthesis? Looking to collaborate with methods experts? Check out the services offered by the Knowledge Synthesis team on our website.

About Knowledge Synthesis

The Canadian Institutes of Health Research (CIHR) defines knowledge synthesis as “the contextualization and integration of research findings of individual research studies within the larger body of knowledge on the topic.” According to CIHR, syntheses must be transparent and reproducible in their methods, which may be quantitative or qualitative. Syntheses may take the form of systematic reviews, may result from consensus activities, and/or may include realist reviews, scoping reviews, narrative synthesis, meta-syntheses, meta-analyses, practice guidelines, and more.

About the Author

Trish Roche (she/her) is a knowledge broker with the George & Fay Yee Centre for Healthcare Innovation (CHI).
