States Can Use Federal Stimulus Money to Evaluate Program Effectiveness

American Rescue Plan funds available to build capacity to conduct and support impact assessments

Pew-MacArthur Results First Initiative

As states continue to respond to the health and economic impacts of the COVID-19 pandemic, policymakers should use federal stimulus money to bolster efforts to regularly evaluate program effectiveness and, more broadly, to invest in policies and programs proven to work.

The federal government is providing additional support to states through the American Rescue Plan Act (ARPA), which was signed into law by President Joe Biden in March 2021. ARPA provides $195 billion in flexible funding for state governments, an allocation that creates a significant opportunity for decision-makers to address immediate challenges and to develop sustainable approaches moving forward.

The rescue plan represents a once-in-a-generation influx of federal funding, and states should work to avoid the kinds of missteps made in response to earlier stimulus programs. Policymakers should aim to spend this money on a combination of one-time investments and temporary funding of ongoing expenses until their economies fully recover. By taking a long-term view of their states' fiscal health, policymakers can ensure that funds are spent appropriately in the near term without creating fiscal cliffs down the road.

One area of potential spending that balances these priorities is impact evaluations, which help state leaders make funding decisions based on evidence. Policymakers can use ARPA money to fund pilot projects and evaluate the results, a process that can help build immediate capacity to gather evidence and allow them to continue relying on rigorous research to inform budget and policy decisions in the future.

Why impact evaluations matter

An impact evaluation is a rigorous assessment of a program's effectiveness, especially as it affects specific populations. That information can then inform policymakers' decisions about which interventions to support, revise, or eliminate. Unlike outcome monitoring systems, these evaluations use experimental designs to move beyond trends in program performance and illustrate which elements of a program are driving positive results.

Impact evaluations give state leaders the evidence they need to invest in what works and to take a long-term budget view. They can scale up pilot projects that are shown to be effective, improve programs with promising results, and replace those that fail to perform.

These evaluations, however, can be expensive, requiring highly trained staff and access to a variety of data. Fortunately, ARPA funds can be used “to improve efficacy of programs … including through use of data analysis, targeted consumer outreach, improvements to data or technology infrastructure, and impact evaluations [emphasis added].” Using this money to support impact evaluation would help states in the long run, increasing their capacity to use evidence to inform budget and policy decisions long after they have exhausted funding for pilot programs.

How states have invested in and learned from impact evaluations

Results First has partnered with several states working to increase their capacity to understand and use evidence to answer key policy questions and inform decisions. For example, after analysts in New Mexico's Legislative Finance Committee (LFC) observed consistently low literacy rates among children in the state, they began reviewing available data to determine possible underlying causes and identify evidence-based solutions to consider. After identifying expanded access to pre-K as a potentially effective investment of taxpayer dollars, the LFC conducted a program evaluation in 2014 to determine whether that investment would lead to positive results.

After finding likely positive long-term results such as higher reading scores, reduction in special education participation, and reduction in third-graders being held back, LFC recommended that policymakers commit $28 million in additional funding for early childhood programs with $6.5 million specifically for pre-K. Subsequently, LFC found in a 2020 report that the state’s percentage of program participants reading at grade level in both kindergarten and third grade had improved.

Several Results First partner states have strengthened their capacity to conduct impact evaluations such as the one above. With additional legislative funding, Minnesota Management and Budget (MMB) now conducts such evaluations through grants offered by the state's Opioid Epidemic Response Advisory Council. The council and MMB's Impact Evaluation Unit work together to conduct rigorous evaluations of state-funded programs in opioid education, prevention, treatment, and recovery services. The effort will help decision-makers improve the state's response to the opioid epidemic.

Colorado, meanwhile, has long-standing funding opportunities available for impact evaluation as part of an effort to build a culture of evidence use across state agencies. The governor’s office administers an Implementation and Evaluation grant program to make individual grants to state agencies for evaluation efforts. Since fiscal year 2018, the state’s Office of State Planning and Budgeting has awarded about $500,000 annually to support program implementation or evaluation of outcomes. For example, such money supported the evaluation of the state’s School Bullying Prevention and Education Grant, the Colorado Opportunity Scholarship Initiative, and the Colorado Pretrial Assessment Tool. The governor’s office also supports agencies as they request additional funding from the legislature to conduct evaluations.

Impact evaluations provide a range of benefits to states. Policymakers can use ARPA funds to build or further develop their evaluation capacity and move this work forward more efficiently and effectively. The ARPA money can provide a one-time investment to stand up new evaluation units or to finance pilot projects that demonstrate whether certain investments are worthy of additional support. As state leaders determine their priorities for ARPA flexible funding, they should consider supporting impact evaluations and other evidence-based policymaking reforms.

Sara Dube is a project director and Alex Sileo is a senior associate with the Results First initiative.
