Researchers Offer Guidance to Explain Scientific Impact

Four guidelines provide a framework for evaluating how research informs policy and practice


Researchers, policymakers, funders, and other thought leaders are increasingly interested in understanding the impact of science on policy and practice. But the types, time frames, and measures of scientific impact are diverse and not always well understood.

Scientific impact can be difficult to evaluate because outcomes can play out over long time frames and often take place in complex, dynamic systems. For example, research findings about disparities in health or education may influence health or education policy, but those effects may take several legislative cycles or changes in administration to take root. Further, the integration of such findings into policy and practice is often heavily shaped by advocacy efforts, current events, and policy windows, making it hard to tie outcomes directly to a specific scientific finding or initiative.

Different disciplines have independently developed research impact frameworks, but it’s unclear how similar or transferable these frameworks are. To address this gap, The Pew Charitable Trusts supported researchers in reviewing existing frameworks that have been used to guide evaluations of scientific impact in the fields of health, environment, education, climate, and international development. The review reflected a trend toward a more expansive and nuanced conception of research impact, moving far beyond traditional evaluations that rely on publication counts or journal quality.

The researchers detailed four broad recommendations for assessing scientific impact that appeared in many of the frameworks they reviewed. We present these as considerations for selecting among approaches rather than as detailed prescriptions for choosing an evaluation framework.

  • Start with a clear definition of impact.
  • Consider how you measure.
  • Track changes that occur during the project.
  • Plan for and measure the unexpected.

Start with a clear definition of impact

All evaluations need clearly stated expectations to establish a benchmark against which observed changes can be compared with expected or intended impacts. Choosing an appropriate definition is critical because policy change may not be a reasonable expectation; impacts such as changes in attitude, problem framing, or trust may be more likely and may provide more lasting outcomes.

Consider how you measure

Evaluations that use simple rubrics with numerical impact scores can be useful for project managers and large research organizations. However, emphasis on numerical scores can lead participants to game the evaluation system, tailoring research to meet overly simplistic definitions of impact.

Further, narrow quantitative measures can miss important or more subtle consequences of a project. For example, changes in attitude, in the framing of a problem, or in relationships can be important outcomes in themselves, even if they are not the intended impacts. However, these changes are often not captured by frameworks that focus on numerical scores.

Track changes that occur during the project

Research impact frameworks should also assess preconditions for outcomes. For some research projects, the desired outcomes include changes in stakeholders’ attitudes about the feasibility of a specific policy solution. When attitude change is an intended outcome, the impact assessment framework should track related indicators over the course of the project, such as whether participants are asking new or different questions about feasibility. 

Plan for and measure the unexpected

Although it is important to be clear about expectations and assumptions at the beginning of a project, evaluations should have some components designed to capture unintended and unexpected outcomes, both positive and negative.

One simple way to account for the unexpected is to include open-ended questions in the evaluation. For example, in addition to applying rigid rubrics with predefined assessment criteria, ask: What changed? Who changed? How do you know? These questions can surface critical impacts that predefined criteria would miss.

Angela Bednarek is a project director and Ben Miyamoto is a principal associate with The Pew Charitable Trusts’ evidence project.