Learning to evaluate research isn’t a knowledge problem. It’s a feedback problem.

Bryan Chung
Published in Critical Mass · 3 min read · Jun 25, 2019

Photo by Jon Tyson on Unsplash

Typically, the introductory course on reading research in most health professional programs looks like this:

1) Explain what a PICO is
2) Explain what simple statistics are
3) Explain basic research design
4) Hold a multiple-choice test on the above.

What is this course for? Is it to teach students to see research as a partner, and to integrate it into their practice and their lives? Or is it to let them pass a section of an exam by regurgitating a set of formulas and recognizing patterns in multiple-choice question stems, because that's how the school keeps its credentials?

Let's look at a different area for a moment: how we learn computer programming.
1) Explain basic syntax.
2) Use basic syntax to do something simple.
3) Expand on basic syntax to incrementally do more complicated things.
4) Do those more complicated things.
5) Inherent in every step is a semi-frustrating but immediate closure of the feedback loop whenever your code doesn't run.

The likelihood that you will propagate a mistake about how to print "Hello World" for the rest of your life is zero. You might propagate a mistake of style or organization, but in terms of making the code run: it won't run until you write it correctly.
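That feedback loop can be made concrete with a minimal Python sketch (the misspelled call is my own invented example, not from the original): a typo in "Hello World" fails instantly and specifically, so the mistake cannot silently survive.

```python
# A first attempt with a typo: the interpreter rejects it immediately,
# telling you exactly what is wrong, so the error cannot persist unnoticed.
broken_attempt = 'prnt("Hello World")'

try:
    exec(broken_attempt)       # raises NameError: name 'prnt' is not defined
    feedback = None
except NameError as err:
    feedback = str(err)        # immediate, specific feedback on the mistake

# Only the corrected version actually runs.
exec('print("Hello World")')
```

The point is not the code itself but the loop: attempt, instant failure, specific message, correction. Research interpretation offers no equivalent of that interpreter.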

What's the difference between research-interpretation education and programming education? Research interpretation is framed as a knowledge problem rather than a skill problem: you don't know enough statistics, you don't know enough of the funny terms, you don't know enough about research design.

Some courses in research interpretation decide that the way to learn is to do, but the "do" is a research project. Sure, it's related, but as a vehicle for building a learning framework it's crappy: inefficient, and bogged down in bureaucratic steps that make for a poor experience.

Other courses ask their students to write a review, but at no point in the writing process does the feedback loop close. A student can spend the entire semester going down the wrong path, turn in an assignment consistent with that wrong path, and never actually advance in the skill.

We would never explain a physical skill to someone, leave them alone for 2–3 months, and then test them on it. Nor would we ask them merely to recognize the skill among other skills when the objective is to use it. That's not what learning a SKILL is for. Why is this approach propagated in education about research?

Some would argue it's a numbers issue: class sizes are big, and the challenge is to deliver material to everyone. The goal is delivery and the acknowledgement of receipt. Formal education has become the Amazon delivery system, except there's no way to return the product for your money back when you realize it doesn't do what you need it to do.

We pay lip service to Evidence-Based Practice. We want our learners to use all of the tools at their disposal to become great practitioners; that's what Evidence-Based Practice is at its core. The reason there is such an emphasis on research is the same reason internships and clinical rotations emphasize clinical care: it's what students are missing.

Nearly every medical school has a small-group curriculum on communication. In my medical school, it was one preceptor to five students. It can be done. If you make a communication mistake, you do not leave that session without knowing it. It's a safe environment for mistakes because it's usually simulation. The same goes for physical examination maneuvers.

Why do we approach the research component of evidence-based practice differently? Why do we allow learners to believe, falsely, that passing a test means proficiency? Why do we ignore the EVIDENCE that learners do not feel confident in these skills? Where is the safe environment for them to make mistakes, and most importantly, how can immediate feedback be implemented in areas where errors are fatal?

The research part of evidence-based practice has the same underpinnings as the other two parts of the framework. It's time to make a change. And if the system won't change, then it's up to you to drive that change yourself.

Find out more at http://criticalmass.ninja

I want to change how we see our relationship with science in how we work and live. I’m a surgeon and research designer.