Wisconsin Researchers Work With State and Local Agencies to Evaluate Programs

Collaborations with academic partners help governments assess outcomes

Because state and local governments often lack the resources, time, and expertise to conduct effective evaluations of public programs, they sometimes turn to external partners, such as local universities, to do the analysis or provide technical help. In Wisconsin, for example, state and county agencies regularly partner with the University of Wisconsin Population Health Institute (PHI) to assess a range of human services programs.

These evaluations can be powerful tools to gauge the effectiveness of public programs. They can inform decisions about what’s working and what’s not; how to improve, scale up, or scale back certain services; and how to allocate limited public funds. Developing partnerships such as those in Wisconsin can help address the resource constraints on these governments.

For example, the Wisconsin departments of justice, corrections, and health services and the Director of State Courts Office have worked with PHI—which promotes evidence-based approaches to policy and practice—to evaluate the state’s Treatment Alternatives and Diversion (TAD) program since 2007. TAD—a collaboration of four human services agencies—provides evidence-based case management and treatment to nonviolent offenders to improve their welfare, reduce recidivism, and shrink prison and jail populations. When the agencies realized that the funds designated for evaluation and technical assistance were inadequate, they tapped into PHI’s expertise and were able to share the cost of the institute’s services.

PHI’s multiple evaluations of TAD demonstrated that the program had helped improve outcomes—such as lowering recidivism rates and reducing time spent incarcerated—and had delivered positive returns on the state’s investment. Ultimately, these findings led to a large expansion of the program after representatives of the TAD state partner agencies and PHI staff presented evaluation results to the Statewide Criminal Justice Coordinating Council, the State Assembly Corrections Committee, individual legislators, and others as part of a broader effort to present evidence of the TAD model’s effectiveness.

Still, not all state agency-PHI partnerships focus on demonstrating impact. Some address related research tasks, such as reviewing a program’s evidence base or advising agencies on appropriate data collection.

“We’ve also done a lot of work in substance use prevention, working with the state agencies as they make decisions about what programs should be funded by the subgrantees and what shouldn’t, helping them to manage and educate the subgrantees on the evidence base and program design options, design evaluation, and [to] collect appropriate data,” said Paul Moberg, research professor and senior faculty adviser of PHI’s evaluation research program. “[The evaluation work] has had a positive impact.”

PHI has even begun placing staff in state agencies—which often do not have enough staff to meet all programmatic and evaluation needs—to help expand their capacity to do this work. In this arrangement, the PHI employees get access to needed resources and collaborate with university staff while embedded in the agency. As permanent positions become available, some move to full-time state employment.

PHI leaders have identified several factors that enhance these research-policy partnerships. They are:

  • Distribution of roles: The partnerships require agreement over who will be responsible for key activities such as collecting and analyzing data or writing up results. PHI usually manages the data collection and evaluation components, while the state agencies manage subcontracts—with communities, counties, and local nonprofits, for example—and implementation of programs.
  • Collaboration: Both parties should make sure that the programs collect appropriate data and, whenever possible, use evidence-based and best practices. They should jointly develop evaluation designs, measures, and other study details to ensure that the assessments can be best used in the agency’s decision-making.
  • Capacity and support of agency staff: Staff on both sides of the partnership should have the capacity and willingness to collaborate on the research. For agency staff, this means not only possessing the technical and analytical skills to address questions about research design and use but also being invested in the evaluation and its success.
  • A teaching role for researchers: Researchers should be prepared to explain the challenges and importance of evaluation design in terms that agency colleagues and other nonexperts can understand. For example, they can describe how different types of evaluations work or explain what it means if treatment and comparison groups demonstrate similar—or different—results.
  • Flexibility around timing: Because there are often competing demands, such as political and academic schedules, the informational needs of the agency, and the requirements of the ideal study design, both parties should collaborate on timelines that address the most critical factors.
  • Strong relationships: Sustaining partnerships involves continuous relationship building and communication. PHI’s long-term partnerships with state agencies tend to be based on recurring or sequential grants developed with the collaborating organizations. As important are the relationships and trust developed over time, which often start with a single collaboration. 

PHI’s collaborations with state and local agencies provide policymakers with evaluation expertise not otherwise available, and that gives them the data they need to make more informed decisions about the programs they fund.

Sara Dube is a director and Priya Singh is a senior associate with the Pew-MacArthur Results First Initiative.

Targeted Evaluation

In 2014, the Pew-MacArthur Results First Initiative identified five key components of evidence-based policymaking: program assessment, budget development, implementation oversight, outcome monitoring, and targeted evaluation. Taking into consideration and implementing one or more of these components can help states and counties use the Results First evidence-based policymaking framework in ways that yield meaningful changes for their communities.