From Research to Reality: Crafting Policies with Evidence That Matters

Deborah Stine
SciTech Forefront
Source: White House National Science and Technology Council

In today’s complex world, evidence-based policymaking is more critical than ever, especially when shaping science and technology policy. While scientific evidence should certainly inform decisions, most research focuses only on effectiveness, and policymakers need more than that information to make decisions that benefit society.

What is Evidence-Based Policymaking?

Evidence-based policymaking has many definitions, but perhaps this definition of evidence from the White House Office of Management and Budget (OMB) Circular No. A-11 (2022) is a good starting point:

“Evidence as a general construct should be viewed and approached as the available body of facts or information indicating whether a belief or proposition is true or valid.”

The Evidence-Based Policymaking Collaborative developed the following set of principles in its 2016 report:

Source: Evidence-Based Policymaking Collaborative

The Foundations for Evidence-Based Policymaking Act of 2018 requires that federal agencies develop evidence-building plans, evaluation plans, and capacity assessments. President Biden’s 2021 “Memorandum on Restoring Trust in Government Through Scientific Integrity and Evidence-Based Policymaking” states:

“It is the policy of my Administration to make evidence-based decisions guided by the best available science and data. Scientific and technological information, data, and evidence are central to the development and iterative improvement of sound policies, and to the delivery of equitable programs, across every area of government.

Scientific findings should never be distorted or influenced by political considerations. When scientific or technological information is considered in policy decisions, it should be subjected to well-established scientific processes, including peer review where feasible and appropriate, with appropriate protections for privacy.

Improper political interference in the work of Federal scientists or other scientists who support the work of the Federal Government and in the communication of scientific facts undermines the welfare of the Nation, contributes to systemic inequities and injustices, and violates the trust that the public places in government to best serve its collective interests.”

Source: National Science Foundation

What are Evidence Banks?

Evidence banks are one proposed tool to help policymakers and other decision-makers find scientific and engineering research to support their decisions. These repositories are designed to consolidate and synthesize scientific research and data into accessible formats. According to a recent Nature article entitled “Scientists are building giant ‘evidence banks’ to create policies that actually work,” they are gaining popularity:

“Funders are injecting tens of millions of dollars into an ambitious plan to solve the biggest problem in science advice: supplying evidence to governments. Their goal is to build a system that allows policymakers worldwide to generate rapid syntheses of science that help them to make evidence-based policies aimed at solving critical issues such as climate change.”

That article builds on an earlier Nature article from June 2024 entitled “What’s the best way to tackle climate change? An ‘evidence bank’ could help scientists find answers: Synthesizing research on which policies are most effective is a key priority in climate science, advocates say.”

According to proponents cited in the article, evidence banks will synthesize many studies to provide a cohesive message, similar to what has occurred in the medical sciences (though they also note that the UK has stopped funding a good portion of these “Cochrane” studies). As stated in the article:

“Although researchers generate vast numbers of studies in areas relevant to policy, syntheses that show the weight of evidence on a topic are rare in many fields, and are not routinely used to guide policymaking. “There is huge demand” from policymakers for such syntheses, says Jen Gold, director of research at the Economic and Social Research Council (ESRC), a UK funding agency. “But supply doesn’t match it.”

However, the article also notes:

“Producing syntheses is usually slow, difficult, and expensive. Researchers undertaking a systematic review must scour databases worldwide of published and unpublished work for potentially relevant studies. Then they whittle a longlist of thousands of studies down to the most relevant few, rate their reliability, extract the data and combine the results, sometimes using a statistical method called a meta-analysis. Even once complete, evidence syntheses often don’t reach policymakers and quickly become out of date as fresh research pours out.”
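To make the final statistical step mentioned above concrete, here is a minimal sketch of a fixed-effect meta-analysis, which pools effect estimates from several studies by inverse-variance weighting. The function name and the study numbers are invented for illustration; real syntheses involve far more steps (study selection, reliability rating, heterogeneity checks):

```python
import math

def fixed_effect_meta(estimates, std_errors):
    """Pool study effect estimates via inverse-variance weighting.

    Each study is weighted by 1/SE^2, so more precise studies
    contribute more to the pooled estimate.
    """
    weights = [1.0 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical effect sizes and standard errors from three studies
estimates = [0.30, 0.10, 0.25]
std_errors = [0.10, 0.05, 0.15]

pooled, pooled_se = fixed_effect_meta(estimates, std_errors)
print(f"pooled effect = {pooled:.3f} ± {pooled_se:.3f}")
# prints: pooled effect = 0.149 ± 0.043
```

Even this toy example shows why expertise matters: the pooled number is only meaningful if the underlying studies are comparable, which is exactly the judgment an automated evidence bank cannot make on its own.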

And the degree of investment, in my opinion, is high relative to the benefit:

“ESRC and Wellcome, the biomedical-research funder in London, announced on 21 September that they are investing £9.2 million (US$12.2 million) and around £45 million, respectively, over five years in databases and tools that can help to synthesize research.”

Source: Stine, D., From Expertise to Impact: A Practical Guide to Informing and Influencing Science and Technology Policy

Evidence Banks are Not Sufficient for Policymaking

I believe efforts like evidence banks fall under the “knowledge deficit” model, which research on science communication has repeatedly shown to be false. This is the assumption many scientists continue to hold: if only people knew this information, a lightbulb would appear above their heads, and they would make the decision we think they should make. A better model focuses not on telling people what we think they should know, but on asking them what they want to know and then providing them with that information.

As I’ve discussed in my book From Expertise to Impact: A Practical Guide to Informing and Influencing Science and Technology Policy, decision-making for anyone, not just policymakers, is based on more than effectiveness — the degree to which a societal goal is met — which is the common focus of papers outside the social sciences.

Let’s consider the decisions many of us make, like where we live and how we get from one place to another. If effectiveness were the only criterion, we could live in a cave in the middle of nowhere and be sheltered. We could also effectively get from one place to another by walking. Yet few of us live in caves or walk everywhere.

Why? Because, like policymakers, we have considerations beyond effectiveness. We must consider efficiency — how much an action costs relative to its effectiveness. Living in a cave and walking everywhere, for example, would make it challenging to find employment to feed our families. We also need to consider equity — the impact of those actions on our family members. And we need to consider ease of political acceptability — the degree to which our family is willing to accept our actions.

Answering these questions will provide policymakers with the information they want. I appreciate papers and reports that synthesize information, and I have developed many myself, but those reports are produced by working with expert committees that can judge the quality of the analysis and put it in context. Just as ChatGPT needs experts to judge its results and put them into context, I believe the same will be true of evidence banks.

Source: National Institutes of Health

Evidence-Based Policymaking Needs More Social Science Research, Not Evidence Banks

We can best support evidence-based policymaking by developing research agendas and supporting social science research to answer questions about effectiveness, efficiency, equity, and ease of political acceptability. As stated in the May 2024 White House National Science and Technology Council (NSTC) “Blueprint For The Use Of Social And Behavioral Science To Advance Evidence-Based Policymaking”:

“The social and behavioral sciences examine if, when, and how fundamental human processes influence outcomes and decisions. Human behavior is a key component of every major national and global challenge we face. Infectious and chronic diseases, national security, public safety and trust, climate and disaster preparedness, economic opportunity, traffic safety, and educational and employment disparities are just a few examples. The success of all federal government initiatives relies on human behavior in some way.”

But Do Social Scientists Want to Do This Research and Analysis?

When I have suggested, for example at an international S&T policy conference, that social scientists should play a greater role in providing the evidence needed for policymaking, I’m often told that such work is “consulting” and not sufficiently investigator-driven to count as academic research. As you can imagine, I fundamentally disagree. I believe practical research that helps decision-makers should be supported within the academic community, not looked down upon. Yet, as with much of academia, changing this mindset is not easy.

Next Steps

Just as ChatGPT needs expertise to interpret its results, I think the same is true of these scientific evidence banks. In addition, public policy involves more than a single application of evidence at the proposal stage; it requires a variety of tools and perspectives throughout the policy process.

To advance evidence-based policymaking, foundations, governments, and other organizations should focus on funding social science research rather than using those funds to build evidence banks. The challenge, in my opinion, is not that policymakers cannot find existing evidence but that evidence meeting their policymaking needs is not available in the first place.

As a result, I’d like to reiterate these recommendations from the NSTC report mentioned earlier:

“To advance the use of social and behavioral science in policymaking, we recommend that federal decision-makers incorporate the following steps into their processes:

1. Identify policy areas that would benefit from a better understanding and application of human behavior and outcomes;

2. Consider potential social and behavioral science insights that affect relevant policy outcomes while identifying the consequences of these insights for policy and program conceptualization, design, and implementation;

3. Synthesize available knowledge to identify promising practices with a strong body of evidence for effectiveness;

4. Identify the most appropriate ways to translate these insights into action given available policy tools;

5. Implement and disseminate policy and program information informed by social and behavioral science-informed approaches; and

6. Evaluate efforts through rigorous evaluation using the most appropriate social and behavioral science methods and available data.”

I agree with all of the above recommendations but would like to supplement them. For these recommendations to be realized, it will take more; specifically, we must:

(1) Spend the investments described in the Nature article (and, I suspect, similar investments elsewhere in the world) on research focused on the questions policymakers want answered, rather than the questions researchers think policymakers want answered.

(2) Focus that investment on researchers who are interested in and willing to do this work, and on hiring, tenure, and proposal-review committees willing to treat practical research that enhances societal decision-making as equal in quality and importance to investigator-driven research.

(3) Invest in studies, such as those led by the U.S. National Academies of Sciences, Engineering, and Medicine and similar international bodies, that synthesize research results for policymakers rather than presume policymakers will search a database for this information and then be able to interpret it.

This article is from my LinkedIn Inform and Influence S&T Policy Newsletter, which serves as a practical guide to the victories and struggles in Science and Technology Policy and how to make a difference. You can subscribe to it on LinkedIn.


Dr. Deborah D. Stine is Founder of the Science and Technology Policy Academy, an Independent study director and consultant, and co-editor of Forefront on Medium