An Evidence-Informed Approach to Strengthen Your Next Implementation Science Proposal

Published in KnowledgeNudge · Nov 22, 2018

By Leah Crockett

Crable et al. (2018)

Here in Knowledge Translation at the George & Fay Yee Centre for Healthcare Innovation (CHI), our Director (Kate) and Manager (Carly) have previously written a number of posts on KT grant writing, including Kate’s 10 key ingredients for implementation grant proposals, the pros and cons of KT sections in grants, and Carly’s tips on writing KT sections into your grant proposals. In this post, we summarize a 2018 article led by Erika Crable (a Research Fellow in the Evans Center for Implementation and Improvement Sciences [CIIS] at the Boston University School of Medicine) that aims to standardize the evaluation of implementation science research proposals.

The Issue

Given the rapid expansion of knowledge translation (KT) and implementation science, reflecting a paradigm shift within health research, we have increasingly seen tools to help researchers prepare KT and implementation grants, like Melanie Barwick’s KT Planning Template and Proctor et al.’s 10 key ingredients for implementation grant proposals. But what about guidance for grant reviewers? According to the authors, few guidelines exist for those evaluating KT and implementation proposals. And although guidelines exist for researchers, how much information should researchers include to satisfy the expectations of grant reviewers? And how do researchers who use these guidelines actually fare in the grant review process?

The Study

The team from Boston University set out to examine these questions after reviewing a set of 30 pilot implementation science proposals submitted to CIIS, which they had scored using the standard National Institutes of Health (NIH) criteria. Through this process, the team recognized a need for new tools to help investigators, research stakeholders, and grant reviewers develop and evaluate grant proposals.

The team observed a mismatch between the NIH framework and the criteria and information needed to guide the review of implementation science grant proposals (particularly for reviewers with less experience in implementation). In response, the authors developed a practical new tool to help both reviewers and grant writers produce higher quality proposals and evaluations. The tool, the ImplemeNtation and Improvement Science Proposals Evaluation CriTeria (INSPECT), is a scoring system based on Proctor’s 10 key ingredients and expert consensus. The team then evaluated INSPECT by re-scoring the same 30 research proposals, examining the usefulness of the scoring system and identifying common gaps in implementation science research proposals.

A snippet of the INSPECT tool — download the full article for more. Crable et al., 2018.
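For readers who like to see the mechanics spelled out, here is a minimal sketch, in Python, of how an INSPECT-style tally could work. Everything in it is an illustrative assumption on our part: the ingredient labels paraphrase Proctor et al.’s published list, the 0–4 per-item scale comes from this post, and the function itself is not the authors’ actual instrument (consult Crable et al., 2018, for the real criteria).

```python
# Illustrative sketch only, not the authors' instrument: ingredient labels
# paraphrase Proctor et al.'s 10 key ingredients; the 0-4 per-item scale
# follows this post. See Crable et al. (2018) for the real INSPECT rubric.

PROCTOR_INGREDIENTS = [
    "care or quality gap",
    "evidence-based treatment to be implemented",
    "conceptual model and theoretical justification",
    "stakeholder priorities and engagement",
    "setting's readiness to adopt",
    "implementation strategy/process",
    "team experience with setting, treatment, and implementation",
    "feasibility of design and methods",
    "measurement and analysis",
    "policy/funding environment and sustainability",
]

def score_proposal(scores: dict) -> dict:
    """Tally per-ingredient scores (0-4 each) and flag zero-scored items."""
    for item, value in scores.items():
        if item not in PROCTOR_INGREDIENTS:
            raise ValueError(f"unknown ingredient: {item!r}")
        if not 0 <= value <= 4:
            raise ValueError(f"{item!r} must score 0-4, got {value}")
    return {
        "total": sum(scores.values()),  # at most 40 under these assumptions
        "zero_scored": [item for item, v in scores.items() if v == 0],
    }

# Example: a proposal that only describes the care gap well totals low and
# surfaces nine flagged gaps, mirroring the pattern the study reports.
example = {item: 0 for item in PROCTOR_INGREDIENTS}
example["care or quality gap"] = 4
print(score_proposal(example))
```

The point of the sketch is simply that a structured rubric makes gaps visible: a low total and a long list of zero-scored ingredients tell an applicant exactly where a proposal needs work.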

The Results

The team found that most applicants scored poorly when assessed with the newly developed INSPECT tool. Although many adequately addressed item #1 of Proctor’s 10 key ingredients (describing the care or quality gap), most proposals scored poorly on the other nine elements, suggesting a need to improve reporting within implementation grant proposals. In fact, proposals received scores of 0 on the other ‘ingredients’ at rates of 23–70%, and on the 0–4 scale, between half and two-thirds of proposals scored 0 in several categories.

Although there is evidence that grant reviewers generally show poor consistency and agreement, this study found that the new INSPECT tool helped improve consistency and standardize agreement between reviewers.

The Future

With regard to evaluation, this approach builds on Proctor et al.’s 10 key ingredients, which outline the key elements of any implementation science grant proposal. INSPECT strengthens the 10-ingredients concept and gives both grant writers and reviewers guidance for improving the quality of implementation science research proposals. By referring to the tool when developing your next implementation science proposal, you can better anticipate what reviewers will be looking for under each of the 10 ingredients, and how they might score your proposal on each item.

As a reminder, here are the questions, drawn from Proctor et al.’s 10 key ingredients, that you should ask yourself and be sure to address when preparing an implementation science grant:

1. Do you have a well-defined care or quality gap?
2. Is there an evidence-based treatment or intervention to address that gap?
3. Do you have a conceptual model or theoretical justification for your approach?
4. Have you identified stakeholder priorities and engaged stakeholders in the change?
5. Is the setting ready to adopt the new service, treatment, or program?
6. Do you have a clearly defined implementation strategy or process?
7. Does your team have experience with the setting, the treatment, and the implementation process?
8. Are your proposed research design and methods feasible?
9. Do you have an appropriate measurement and analysis plan?
10. Does the policy and funding environment provide leverage to sustain the change?

As we noted, the INSPECT scale is early in its development and has only been evaluated on 30 proposals, all in a US context. Future work should evaluate its usefulness in different political and funding contexts (such as Canada) to further enhance its utility. Alternatively, context-specific tools could be developed from the criteria of specific funders; for health research in Canada, this would include the Canadian Institutes of Health Research (CIHR) scoring criteria.

Nonetheless, the INSPECT tool gives us a sense of which elements are often missing from implementation grant proposals (many!) and where we as researchers should focus our attention. As Kate mentioned previously, if you cannot answer “yes” to all of the ‘10 ingredient’ questions, you should probably hold off on moving forward with your implementation proposal or project until you can. The INSPECT tool takes us one step further by providing guidance not only on the questions we should be asking, but also on how, and in how much detail, reviewers might evaluate the answers.

For now, consider this another resource to add to your KT toolbox to help you think through and prepare for your next grant application! Happy writing.

About the Author

Leah Crockett is a doctoral student in the Department of Community Health Sciences at the University of Manitoba. Find her on Twitter: @leahkcrockett.

