Implementation Oversight for Evidence-Based Programs

A policymaker’s guide to effective program delivery

Editor's note: This brief is one in a series about the five key components of evidence-based policymaking as identified in “Evidence-Based Policymaking: A Guide for Effective Government,” a 2014 report by the Pew-MacArthur Results First Initiative. The other components are program assessment, budget development, outcome monitoring, and targeted evaluation.

Overview

There is a growing consensus that rigorous evidence and data can and should be used, whenever possible, to inform critical public policy and budget decisions. In areas ranging from criminal justice to education, government leaders are increasingly interested in funding what works, while programs that lack evidence of their effectiveness are being carefully scrutinized when budgets are tightened. As the use of evidence-based interventions becomes more prevalent, there is an increasing recognition that it will be critical to ensure that these programs are effectively delivered. A large body of research now shows that well-designed programs poorly delivered are unlikely to achieve the outcomes policymakers and citizens expect.1

Government leaders can best ensure that they see the benefits of evidence-based programs by building capacity that supports effective implementation. This brief, one in a series on evidence-based policymaking published by the Pew-MacArthur Results First Initiative, identifies four key steps that state and local governments can take to strengthen this implementation effort:

  1. Require agencies to assess community needs and identify appropriate evidence-based interventions.
  2. Create policies and processes that support effective implementation and monitoring.
  3. Support service providers and staff through training and technical assistance.
  4. Create systems to monitor program implementation and improve performance.

Implementation: A missing piece of the evidence-based puzzle

Over the past two decades, a growing body of research has focused on the implementation of evidence-based programs. What happens when interventions that have been rigorously tested and found effective in the context of controlled studies are put into practice in real-world settings?2 This research has consistently shown that how these programs are delivered is critically important; those that fail to adhere to their intended design are less likely to achieve predicted outcomes.3 Summarizing research findings from nearly 500 evaluations of prevention and health promotion programs for children and adolescents, one recent study estimated that correctly implemented interventions achieved effects two to three times greater than those of programs with significant implementation problems.4

The state of Washington encountered this dichotomy after investing in four evidence-based interventions focused on reducing recidivism among youth in the juvenile justice system. The state initially funded the programs after an analysis by the Washington State Institute for Public Policy (WSIPP) predicted that they would be highly cost-effective in treating juvenile offenders. After the programs had been in place for several years, the Legislature directed WSIPP to evaluate them to determine if they were achieving the predicted outcomes. The evaluation found that the programs were effectively reducing recidivism in locations where providers followed treatment protocols. In contrast, recidivism had actually increased in locations where providers were failing to adhere to the program models.

For example, the evaluation found that one of the programs, Functional Family Therapy, had reduced recidivism by 38.1 percent and generated benefits of $10.69 in reduced crime costs for each dollar spent on competently implemented treatment. Where treatment protocols were not followed, recidivism increased by 16.7 percent, costing taxpayers $4.18 for each dollar spent.5 Rather than cutting the programs, the Legislature decided to improve implementation and mandated that agencies develop standards and guidelines to ensure that juvenile justice programs were delivered effectively.6 
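
The arithmetic behind such comparisons is a simple ratio of estimated benefits to program costs. The minimal Python sketch below illustrates it with hypothetical totals chosen to mirror the ratios above; the function name and dollar inputs are illustrative assumptions, not WSIPP's actual benefit-cost model.

```python
# Illustrative benefit-cost arithmetic in the spirit of WSIPP's analysis.
# All inputs are hypothetical placeholders; WSIPP's published figures come
# from a detailed model of crime costs avoided per participant.

def benefits_per_dollar(total_benefits: float, total_costs: float) -> float:
    """Estimated benefits (or losses, if negative) generated per dollar spent."""
    return total_benefits / total_costs

# Hypothetical totals for sites delivering the program with and without fidelity.
high_fidelity = benefits_per_dollar(total_benefits=1_069_000, total_costs=100_000)
low_fidelity = benefits_per_dollar(total_benefits=-418_000, total_costs=100_000)

print(f"High-fidelity sites: {high_fidelity:.2f} in benefits per dollar spent")
print(f"Low-fidelity sites: {low_fidelity:.2f} per dollar spent (a net loss)")
```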

Why governments struggle with implementation

Governments encounter difficulties for several reasons when overseeing the implementation of evidence-based programs. First, many interventions (especially those that are evidence-based) are complex and involve multiple entities, including government agencies, service providers, and program developers, all of whom must cooperate in service delivery. Also, even the most widely used evidence-based programs are intended to serve only specified populations, at recommended treatment levels, and in supportive environments. Successful implementation cannot be taken for granted and requires significant planning, management support, and leadership at both the system and provider levels.

Second, it can be difficult to deliver services in real-world settings. For example, some evidence-based programs may specify that services are to be conducted only by certified nurses or therapists, yet personnel with these qualifications may be difficult to hire in areas where clients live. Often it can be unclear which aspects of an evidence-based program can be modified to meet the needs of particular communities and populations while still producing predicted results. Evidence-based programs often provide only limited guidance on these questions, leaving program managers to balance fidelity7 to program design with the practical challenges they encounter in their communities.8 Understanding what adaptations can be made—and when such changes may affect outcomes—can make the difference between a successful program and one that is ineffective or even harmful.9

Finally, securing policymaker support for investments in program implementation can be challenging. Funding for staffing, training, technical assistance, and monitoring and reporting systems, when it is provided at all, is often among the first items cut during budget reductions in order to preserve direct services to clients. However, without these investments in capacity, governments risk much greater spending on programs that may fail to achieve their intended outcomes because they are ineffectively delivered.

How government can support effective program implementation

Governments play several critical roles in program implementation. These include establishing procedures for how programs are selected, creating a management infrastructure that enables effective implementation, supporting program providers through training and technical assistance, and developing systems that track implementation and outcomes and support ongoing quality improvement. Fortunately, government agencies do not need to carry out these tasks on their own but can use the expertise of partners, including universities, provider organizations, program developers, and technical assistance intermediaries.

Key steps for supporting effective program implementation

State and local governments can take four key steps to strengthen implementation of evidence-based programs.

Step 1: Require agencies to assess community needs and identify appropriate evidence-based interventions.

Before a program is implemented, governments should ensure that the intervention is a good fit for the problem being addressed. They should carefully assess community needs and identify evidence-based programs shown to achieve the desired outcomes in similar contexts.

Conduct needs assessments to understand problems and service gaps. It is important for key stakeholders to develop a shared understanding of the specific problems facing communities, such as gaps in currently available services.10 The choice of which programs to implement should be based on a clear vision of the desired outcomes and the underlying causes of the problems, which can vary from one community to the next. To reach this understanding, governments should conduct a formal needs assessment that gathers data about target populations, the prevalence of key problems, and the risk factors that could be addressed through interventions.

Governments can use one of several national models when conducting these assessments. For example, the Communities That Care (CTC) model, endorsed by the Substance Abuse and Mental Health Services Administration (SAMHSA), provides a framework for identifying youth needs using a school-based survey that collects data on key risk and protective factors among youth in grades 6 through 12. The survey data are then used to pinpoint problem areas that could be addressed by evidence-based programs.11
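
As a rough sketch of how survey results like these can be turned into priorities, the Python below tallies hypothetical risk-factor responses and flags any factor reported by more than a chosen share of respondents; the factor names, data, and 40 percent threshold are assumptions for illustration, not part of the actual CTC instrument.

```python
# Hypothetical aggregation of youth survey responses by risk factor.
# Factor names, prevalence figures, and the 40% flag threshold are
# illustrative only, not drawn from the Communities That Care survey.

from collections import Counter

responses = [
    {"low_school_commitment": True,  "family_conflict": False, "early_substance_use": True},
    {"low_school_commitment": True,  "family_conflict": True,  "early_substance_use": False},
    {"low_school_commitment": False, "family_conflict": True,  "early_substance_use": False},
    {"low_school_commitment": True,  "family_conflict": False, "early_substance_use": False},
]

counts = Counter()
for r in responses:
    counts.update(factor for factor, present in r.items() if present)

THRESHOLD = 0.40  # flag risk factors reported by more than 40% of respondents
for factor, n in counts.most_common():
    prevalence = n / len(responses)
    status = "PRIORITIZE" if prevalence > THRESHOLD else "monitor"
    print(f"{factor}: {prevalence:.0%} ({status})")
```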

Pennsylvania adopted the CTC model in the early 1990s and created more than 100 prevention coalitions across the state to identify and prioritize community needs. The coalitions used the CTC data-driven approach to build an understanding of local problems and key risk factors that could be addressed through evidence-based interventions, which informed their strategies for addressing these needs and led to the adoption of over 300 evidence-based program replications across the state.12

Select evidence-based approaches that address identified needs. Once community needs are understood, the next step is to assess and select programs that have been shown to be effective in addressing these problems. Key resources for this assessment are the national research clearinghouses, such as the National Registry of Evidence-Based Programs and Practices operated by SAMHSA, that compile lists of evidence-based programs. These organizations conduct systematic literature reviews, often examining hundreds or thousands of studies, to identify interventions that rigorous evaluations have shown to be effective in achieving outcomes such as higher graduation rates and reduced criminal reoffending. Each clearinghouse typically addresses one or two policy areas, such as criminal and juvenile justice, child welfare, mental health, education, and substance abuse.13

Some policymakers mandate that interventions be selected from those listed by the clearinghouses. For example, the Utah Division of Substance Abuse and Mental Health requires that its funding be used to implement evidence-based programs listed by designated national clearinghouses, including Blueprints for Healthy Youth Development, the Office of Juvenile Justice and Delinquency Prevention Model Programs Guide, and the Communities That Care Prevention Strategies Guide.14

Results First Clearinghouse Database

To help policymakers identify evidence-based programs and make data-driven decisions, the Results First Clearinghouse Database* provides centralized access to the evidence ratings compiled by eight national research clearinghouses. This online database and its accompanying user’s guide provide an easy way to find information on the effectiveness of over 1,200 interventions across multiple policy areas.

*For more information, please see: http://www.pewtrusts.org/en/research-and-analysis/issue-briefs/2014/09/resultsfirst-clearinghouse-database.
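
A sketch of how an analyst might shortlist programs from a set of clearinghouse ratings follows; the record structure, rating labels, and example entries are assumptions for illustration, not the database's actual schema.

```python
# Filter a small, locally maintained table of intervention ratings.
# Field names, rating labels, and the example entries are hypothetical,
# not the Results First Clearinghouse Database's actual schema.

interventions = [
    {"name": "Functional Family Therapy", "area": "juvenile justice", "rating": "highest"},
    {"name": "Program A",                 "area": "juvenile justice", "rating": "mixed"},
    {"name": "Program B",                 "area": "substance abuse",  "rating": "highest"},
]

def shortlist(records, policy_area, required_rating="highest"):
    """Return intervention names in a policy area that meet the chosen rating."""
    return [r["name"] for r in records
            if r["area"] == policy_area and r["rating"] == required_rating]

print(shortlist(interventions, "juvenile justice"))  # ['Functional Family Therapy']
```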

Step 2: Create policies and processes that support effective implementation and monitoring.

To scale up evidence-based programs, governments must develop the management infrastructure needed to facilitate effective program implementation. This includes creating standards or guidelines, embedding those standards into contracts, and aligning administrative policies and processes to support effective delivery.

Develop implementation standards. Policymakers can establish common standards or guidelines for program implementation to ensure that providers meet a minimum level of competency in delivering services. Although some requirements will vary depending on the specific evidence-based intervention, other aspects of program implementation are universal and can be embedded into these standards. These may include minimum requirements for hiring and training staff, providing services to the target population specified by the evidence-based program provider, and ensuring that processes are in place to provide effective oversight of service delivery.

For example, leaders in Washington state developed standards to implement evidence-based juvenile justice programs after an evaluation found that sites where programs were not implemented with fidelity had poor results. These standards govern four key elements of quality assurance—program oversight, provider development and evaluation, corrective action, and ongoing outcome evaluation—and include protocols for hiring, staff training and assessment, and management and oversight of service delivery. Providers are required to complete an initial probationary period during which they receive training and feedback, and are then periodically evaluated. These implementation standards helped the state achieve greater reductions in crime and juvenile arrest rates, compared with the national average, and a decrease of more than 50 percent in the number of youth held in state institutions.15

Embed implementation standards or requirements into contracts. Agencies can build these standards into contracts to ensure that providers meet the required baseline levels of proficiency. In 2013, for example, New York state’s Division of Criminal Justice Services (DCJS) issued a request for proposal for alternatives to incarceration that required providers to identify the specific evidence-based interventions they planned to implement, provide detail on their screening and referral systems, and describe how they would adhere to the programs’ treatment protocols. The division now monitors providers’ fidelity to these requirements as part of a comprehensive process through which providers submit case-specific data to DCJS and undergo on-site reviews by third-party monitors contracted by the state. The reviews assess the degree to which programs are implementing principles of effective correctional interventions.16

Some agencies have embedded requirements related to implementation fidelity in their provider guidelines, which often cover a broad range of contracted services. In 2014, New York’s Office of Alcoholism and Substance Abuse Services updated its provider guidelines for prevention services, defining the strategies and activities necessary to reduce underage drinking, alcohol misuse and abuse, illegal drug abuse, medication misuse, and problem gambling.17 The guidelines require providers to implement programs with fidelity to the “core elements” of evidence-based services, including the target population, setting, and curricula content.

Align administrative policies and processes to support effective implementation. Implementing evidence-based programs often requires changes throughout service delivery networks. Existing administrative processes should be aligned with these delivery efforts. Otherwise, agencies and providers can face conflicting mandates or inflexible payment systems that make it difficult to effectively deliver critical services. Policymakers and agency leaders can help by creating feedback loops that enable administrators, providers, and technical assistance staff to regularly share information and solve unanticipated problems.18

For example, the Colorado Department of Corrections recently adopted a new integrated case management system to improve its planning and offender treatment services. In doing so, the department found that certain policies were not in alignment with the research on what works regarding low-risk offenders. Specifically, the research indicated that less contact with low-risk offenders leads to better outcomes. The department resolved the issue by facilitating changes to administrative regulation standards regarding the frequency of contact to better align policies and practices with the literature.

Step 3: Support service providers and staff through training and technical assistance.

Training and technical assistance are critical to implementing new interventions and practices. Program staff need to be trained on specific treatment protocols. Research shows that such training is most effective when delivered in multiple stages: initial learning sessions, followed by expert observation and feedback, and then ongoing in-service training and coaching once the program is up and running.19 Policymakers can support this process by funding and establishing systems that train staff on the delivery of evidence-based programs and practices; agency leaders can choose among several options for delivering this training.

It is particularly important for program administrators to ensure that staff are appropriately trained to use screening and assessment tools designed to help match participants with the appropriate interventions. Even the most widely replicated evidence-based programs are effective only when treating certain populations. Without the appropriate screening and assessment tools, agencies may refer participants to programs they do not need and that are not effective in addressing their problems. “We often hear frustration from agencies who tried evidence-based programs but still didn’t achieve the outcomes they sought because the programs weren’t delivered to the right population,” said Ilene Berman, senior associate with the Annie E. Casey Foundation’s Evidence-Based Practice Group.20

Determine the best vehicle for delivering training and technical assistance. Governments have several options for delivering training on evidence-based programs, such as using in-house personnel with expertise in these programs, contracting with program developers, or partnering with intermediary organizations. Some widely adopted programs, such as Multisystemic Therapy and Nurse-Family Partnership, offer training services to governments that implement them. Such program developers have deep expertise in their interventions and often have detailed training curricula. However, relying exclusively on program developers can limit an organization’s ability to develop its own expertise and may complicate training if agencies are implementing multiple evidence-based programs. Other options include leveraging the expertise of local researchers through a government-university partnership, or developing an evidence-based unit within a government agency.

Partner with a research university

Several states—including Maryland, Pennsylvania, and Washington—have established partnerships with research universities to provide training and technical assistance to staff and providers. These implementation centers can help support community readiness assessments, provide training and technical assistance on evidence-based programs, and oversee monitoring and quality improvement efforts.21 For example, the Institute for Innovation and Implementation at the University of Maryland was established in 2005 and is funded to provide training, implementation support, and evaluation services for select evidence-based programs across multiple policy areas, including juvenile justice and child welfare. The institute also provides technical assistance and project management support to state agencies engaged in statewide initiatives. “It’s important to have multiple state and local agencies on board as well as the provider community. … Collaboration across agencies is important in order to coordinate existing efforts, develop new strategies, and make sure that everyone is getting the same information,” said Jennifer Mettrick, director of implementation services at the institute.22

In Pennsylvania, the Evidence-based Prevention and Intervention Support Center, or EPISCenter, provides technical assistance to communities and service providers to support the implementation of evidence-based prevention and intervention programs. Since 2008, the center—a partnership between the state Commission on Crime and Delinquency and Pennsylvania State University, with funding from the commission and from the state Department of Human Services—has assisted in the establishment of nearly 300 evidence-based program replications in more than 120 communities throughout the state. Experts from the center provide technical assistance to local staff on implementation, evaluation, and sustainability, and help develop the infrastructure to monitor the program for fidelity to its original design.

Develop an evidence-based unit or division

Another option used by some states and localities is to establish specialized units within agencies that are charged with providing training as well as overseeing program implementation. These partnerships and units can help governments develop in-house expertise for a range of programs.

For example, Colorado’s Evidence-Based Practices Implementation for Capacity (EPIC) Resource Center is a collaborative effort of five agencies working in the state’s adult and juvenile justice systems. The center was created by the Colorado Commission on Criminal and Juvenile Justice in 2009 and formalized through legislation in 2013.23 Housed in the Division of Criminal Justice within the Department of Public Safety, the nine-person staff provides assistance to support effective implementation of evidence-based practices.

The center is helping to build the capacity of organizations and to support effective program implementation. “We initially talked to implementation experts when first designing the program, and they told us that you can’t simply train people on evidence-based programs to get a practice integrated; you really need to go deeper,” said Diane Pasini-Hill, the center’s manager. As a result, “we’ve transitioned into being more of a full implementation center as opposed to just coaching and training alone. Through working primarily with line staff and supervisors, we found that there were too many gaps to make this [training] effective on its own. We live by the motto that if you just do an evidence-based program and don’t pay attention to implementation strategies, you’re not going to get the results you want.”24

The Role of Implementation Teams

Regardless of which option is selected, governments should clarify the role of each partner involved in implementing a new program or initiative, including program administrators, service providers, and intermediaries, to help minimize challenges. One common strategy that has proved effective in scaling up evidence-based programs, particularly in K-12 education, is the use of implementation teams, which typically include partners both inside and outside government. These teams play an important role throughout the process, helping to build buy-in for the initiative, create an infrastructure to support implementation, monitor program fidelity, assess outcomes, and solve problems by bridging the divide between policymakers and practitioners.25 

“We act as a neutral facilitator,” said Matthew Billings, project manager with the Providence Children and Youth Cabinet, who leads implementation teams to scale up three evidence-based programs in the Rhode Island capital. “At the community level, there is often a lot of confusion about what evidence-based programs are and what aspects of the program can be tailored to meet the needs of the population we’re serving. We work with providers to gather their feedback on what’s working and what’s not. Then we can take that information to program developers and ask them ... can these changes be made? Sometimes adaptations can be made and sometimes they can’t. But it’s very powerful for providers when they see that their feedback is being taken seriously.”26

Step 4: Create systems to monitor program implementation and improve performance.

The final key step for governments seeking to successfully implement evidence-based programs is to fund and establish systems that regularly monitor providers to make sure they are delivering interventions with fidelity. This monitoring can then create feedback loops that use data to track outcomes and continuously improve performance.

Regularly monitor programs to ensure fidelity. As discussed, research has shown that evidence-based programs in many policy areas, including substance abuse prevention, education, criminal justice, and mental health, must be appropriately implemented in order to achieve their desired outcomes.27 Program managers have several tools for monitoring program fidelity. For example, they can use fidelity checklists and recorded observations to assess the extent to which providers adhere to key elements of evidence-based practices.
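
As a hedged sketch of how a fidelity checklist from a single recorded observation might be scored, the Python below tallies adherence to a handful of elements and reports an overall percentage; the checklist items and the 80 percent benchmark are illustrative assumptions, not any specific program's protocol.

```python
# Score a simple fidelity checklist from one recorded observation.
# Checklist items and the 80% benchmark are illustrative assumptions only.

checklist = {
    "served_target_population": True,
    "delivered_required_sessions": True,
    "used_approved_curriculum": False,
    "staff_met_qualification_requirements": True,
    "supervisor_reviewed_session": True,
}

score = sum(checklist.values()) / len(checklist)
status = "meets benchmark" if score >= 0.80 else "needs corrective action"
print(f"Fidelity score: {score:.0%} ({status})")

for item, observed in checklist.items():
    if not observed:
        print(f"Follow up on: {item}")
```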

Recently, tools have been developed that aim to streamline monitoring efforts by allowing agencies to assess fidelity across multiple programs. In Washington state, the Evidence-Based Practice Institute (EBPI) is developing a standardized process to monitor program implementation and fidelity across four extensively used evidence-based child welfare programs. The institute was established in 2008 to help scale up evidence-based practices available to children and youth served by the state’s mental health, juvenile justice, and child welfare systems. Its monitoring tools were developed in partnership with the Children’s Administration, a division of the state’s Department of Social and Health Services.

“We had an observation that there are a handful of evidence-based programs [being widely used in the state] with fidelity, training, and supervision,” said Eric Bruns, co-director of EBPI. “We ended up focusing on four programs and looking at the different requirements across them and trying to figure out how we can have some uniformity [in implementation], given there were specific program differences.”28 The institute is evaluating the standardized process to determine whether it can be expanded to measure program fidelity across additional programs.

Use monitoring tools to identify and address gaps in organizational capacity. Programs often fail to achieve expected results because the organizations delivering the services lack the capacity to perform critical tasks.29 For example, many evidence-based programs have strict treatment protocols, which include staff qualifications (e.g., the Nurse-Family Partnership program specifies that registered nurses deliver services), service levels and duration, and staff-to-client ratios. Leadership commitment to delivering these programs with fidelity is also important, as are well-functioning administrative processes such as training, monitoring, and data collection protocols.30

Agencies should ensure that providers have demonstrated the ability to meet these requirements; they can use assessment tools to identify gaps in organizational capacity and then target training and assistance to address those gaps.31 For example, many state and local governments use rating tools that assess both service quality and an organization’s capacity to deliver early childhood education services effectively. In some cases, organizations that receive higher scores are eligible for higher rates, based on the assumption that they will be more likely to achieve good outcomes for the children they serve. Similarly, tools such as the Correctional Program Checklist can be used to assess providers’ readiness to deliver criminal justice programs, evaluating both organizational capacity and service quality and considering factors such as leadership, staff qualifications, and quality assurance systems.32

New York state’s Division of Criminal Justice Services (DCJS) is using the Correctional Program Checklist to assess the extent to which service providers are adhering to key principles of evidence-based practice in their corrections and community supervision programs. Designed by researchers at the University of Cincinnati, the tool assesses both an organization’s capacity to deliver effective services, including its leadership and quality of staff, and the content knowledge of staff and management on evidence-based practices. The assessment uses data collected through formal interviews, observation, and document review to help identify strengths as well as areas for improvement. DCJS is using the tool to discover areas where providers may need to make changes or develop additional capacity. The division then provides technical assistance to support them.33

“Our new approach is totally changing what we fund and what we know about programs,” said Terry Salo, deputy commissioner of DCJS. “There is not a week that goes by where something doesn’t surface [through monitoring] where we learn about the program and [are able to use that data to] engage in course corrections. We can’t just tell programs what they’re doing wrong without having resources to help them.”34

Create a feedback loop that supports program improvement. A critical component of effective implementation is a strong feedback loop in which service providers, government agency staff, and program developers regularly share implementation data, identify areas for improvement, and act on information to improve service delivery.35 These feedback loops work in two directions: program providers collect data to measure implementation progress and then share the information with agency managers and policymakers, who in turn use the data to make needed adjustments in policies and administrative practices to better support organizations involved in service delivery. Studies have shown that efforts to scale up and sustain evidence-based programs have been largely successful when these practice-to-policy links are well established, while the opposite is true when these links are weak or nonexistent.36
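
A minimal sketch of the provider-to-agency half of such a loop appears below: quarterly figures submitted by providers are screened against thresholds, and sites falling short are flagged for technical assistance. The site names, metrics, and thresholds are hypothetical assumptions, not any agency's actual reporting specification.

```python
# Flag provider sites for technical assistance from quarterly reports.
# Metric names and thresholds are hypothetical illustrations.

quarterly_reports = [
    {"site": "Site 1", "fidelity_score": 0.92, "completion_rate": 0.85},
    {"site": "Site 2", "fidelity_score": 0.71, "completion_rate": 0.60},
    {"site": "Site 3", "fidelity_score": 0.88, "completion_rate": 0.55},
]

FIDELITY_FLOOR = 0.80    # minimum acceptable adherence to the program model
COMPLETION_FLOOR = 0.65  # minimum acceptable treatment completion rate

for report in quarterly_reports:
    issues = []
    if report["fidelity_score"] < FIDELITY_FLOOR:
        issues.append("fidelity below floor")
    if report["completion_rate"] < COMPLETION_FLOOR:
        issues.append("low treatment completion")
    if issues:
        print(f"{report['site']}: schedule technical assistance ({', '.join(issues)})")
```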

These feedback loops can be supported by intermediary organizations. For example, in Pennsylvania, the EPISCenter serves as a liaison for the providers delivering evidence-based services, the agencies charged with overseeing these services, and researchers and program developers who identify key implementation requirements. The center’s roles include interpreting information on effectiveness for agencies and providers; helping providers identify and collect outcome and implementation data, and report them to oversight agencies; and working with the agencies to help align their policies to resolve problems and facilitate successful program implementation.

“The focus of implementation monitoring needs to be on quality improvement rather than simply contract compliance. … Otherwise, the organizations delivering the programs won’t want to share data or be open about any of the problems they’re experiencing,” said Brian Bumbarger, founding director of the EPISCenter. “If [implementation monitoring] is all driven by the organization doing the contracting [e.g., state or local government], there are incentives for providers to try to minimize or downplay implementation challenges, or just give the funding agency the compliance data they want without really thinking about how it’s helping them improve their services. It has to be a partnership rather than a one-sided transactional relationship.”37

Use monitoring data to adapt interventions to fit local conditions. The need to adapt programs to real-world settings while maintaining program fidelity remains a persistent challenge in scaling up evidence-based interventions. Though a large body of research underscores the importance of program fidelity to achieving intended outcomes, the research also shows that, in order to be sustainable, evidence-based programs may need some adaptations to accommodate issues that arise during implementation, including cultural norms and limitations on the availability of staff time and resources.38 Administrators can work with model program developers to identify which components of an evidence-based program can be modified while still maintaining fidelity, and then provide guidance to service providers and agencies on this issue. When deciding which interventions to implement, policymakers should also weigh whether a program is a good fit for the settings in which it will be delivered.

In 2009, the Oregon Legislature passed a bill to utilize the nationally recognized Wraparound system of care for emotionally disturbed and mentally ill children, with statewide programs in place by 2015. A fundamental part of Oregon Wraparound is fidelity monitoring, overseen by the Oregon Health Authority. The National Wraparound Initiative has provided assessment tools to ensure that programs remain faithful to its 10 basic principles. However, administrators may adapt other, noncritical aspects of the program to fit local conditions and needs, which can vary across the state. “The goal is to meet communities where they are so that this is sustainable. Whatever you’re building needs to be part of the community you’re working with. You [need to] maintain the fidelity of the model but also make sure that it’s tailored to the community,” said William Baney, former director of the Systems of Care Institute at Portland State University’s Center for Improvement of Child and Family Services, which provides training and systems support to Oregon Wraparound.39

Conclusion

To fully realize the benefits of evidence-based programs, governments must invest in the capacity of systems and provider organizations to implement the programs effectively. Policymakers can support these efforts by providing leadership and, when necessary, redirecting resources to support the training, technical assistance, supervision, and oversight necessary to ensure that programs are delivered effectively and with fidelity to their research design.

This brief is one in a series about the five key components of evidence-based policymaking, as identified in “Evidence-Based Policymaking: A Guide for Effective Government.” The other components are program assessment, budget development, outcome monitoring, and targeted evaluation.

Endnotes

  1. Dean L. Fixsen et al., Implementation Research: A Synthesis of the Literature (Tampa: University of South Florida, Louis de la Parte Florida Mental Health Institute, 2005). http://ctndisseminationlibrary.org/PDF/nirnmonograph.pdf.
  2. Ibid.
  3. Ibid.
  4. Joseph A. Durlak and Emily P. DuPre, “Implementation Matters: A Review of Research on the Influence of Implementation on Program Outcomes and the Factors Affecting Implementation,” American Journal of Community Psychology 41 (2008): 327–50.
  5. Pew Center on the States and MacArthur Foundation, “Better Programs, Better Results” (2012), http://www.pewtrusts.org/~/media/assets/2012/07/26/pew_results_first_case_study.pdf.
  6. Ibid.
  7. Carol T. Mowbray et al., “Fidelity Criteria: Development, Measurement, and Validation,” American Journal of Evaluation 24, no. 3 (2003): 315–40. “Fidelity” refers to the extent to which delivery of an intervention adheres to the protocol or program model originally developed.
  8. Dean L. Fixsen et al., “Statewide Implementation of Evidence-Based Programs,” Exceptional Children 79, no. 2 (2013): 213–30.
  9. Julia E. Moore, Brian K. Bumbarger, and Brittany Rhoades Cooper, “Examining Adaptations of Evidence-Based Programs in Natural Contexts,” Journal of Primary Prevention 34 (2013): 147–61.
  10. Fixsen et al., Implementation Research; Allison Metz and Leah Bartley, “Active Implementation Frameworks for Program Success: How to Use Implementation Science to Improve Outcomes for Children,” Zero to Three 32, no. 4 (2012): 11–18. http://www.iod.unh.edu/APEX Trainings/Tier 2 Manual/Additional Reading/4. Implementation%20article Metz.pdf.
  11. Communities That Care is an evidence-based model for delivering prevention services that was developed by J. David Hawkins and Richard F. Catalano. For more information, visit http://www.communitiesthatcare.net.
  12. Brian K. Bumbarger, director, EPISCenter, email message, Nov. 25, 2015.
  13. There are several widely recognized national research clearinghouses, including the U.S. Department of Education’s What Works Clearinghouse, the U.S. Department of Justice’s CrimeSolutions.gov, Blueprints for Healthy Youth Development, the Substance Abuse and Mental Health Services Administration’s National Registry of Evidence-Based Programs and Practices, the California Evidence-Based Clearinghouse for Child Welfare, What Works in Reentry Clearinghouse, and the Coalition for Evidence-Based Policy.
  14. Utah Division of Substance Abuse and Mental Health, Division Directives—Fiscal Year 2014 (March 2013), http://dsamh.utah.gov/pdf/contracts_and_monitoring/Divison Directives _FY2014 FINAL.pdf.
  15. Pew Center on the States and MacArthur Foundation, “Better Programs, Better Results.”
  16. Leigh Bates, principal program research specialist, New York State Division of Criminal Justice Services, interviewed July 13, 2015.
  17. New York State Office of Alcoholism and Substance Abuse Services, Addiction Services for Prevention, Treatment, Recovery, 2014 Prevention Guidelines for OASAS Funded and/or Certified Prevention Services, http://www.oasas.ny.gov/prevention/documents/2014PreventionGuidelines.pdf.
  18. Lauren H. Supplee and Allison Metz, “Opportunities and Challenges in Evidence-Based Social Policy,” Sharing Child and Youth Development Knowledge Social Policy Report 28, no. 4 (2015). http://www.srcd.org/sites/default/files/documents/spr_28_4.pdf.
  19. Justice Research and Statistics Association, Implementing Evidence-Based Practices (2014), http://www.jrsa.org/projects/ebp_briefing_paper2.pdf.
  20. Ilene Berman, senior associate, Annie E. Casey Foundation’s Evidence-Based Practice Group, email message, Jan. 13, 2016.
  21. Jennifer Mettrick et al., “Building Cross-System Implementation Centers: A Roadmap for State and Local Child- and Family-Serving Agencies in Developing Centers of Excellence (COE)” (2015), Institute for Innovation and Implementation, University of Maryland, https://theinstitute.umaryland.edu/newsletter/articles/bcsic.pdf.
  22. Jennifer Mettrick, director of implementation services, Institute for Innovation and Implementation at the University of Maryland, interviewed April 14, 2015.
  23. See Colorado House Bill 13-1129, https://cdpsdocs.state.co.us/epic/EpicWebsite/HomePage/HB13-1129.pdf.
  24. Diane Pasini-Hill, manager, Evidence-Based Practices Implementation for Capacity (EPIC) Resource Center, interviewed Aug. 17, 2015.
  25. Metz and Bartley, “Active Implementation.”
  26. Matthew Billings, project manager, Providence Children and Youth Cabinet, interviewed Dec. 4, 2015.
  27. Linda Dusenbury et al., “A Review of Research on Fidelity of Implementation: Implications for Drug Abuse Prevention in School Settings,” Health Education Research 18, no. 2 (2003); Fixsen et al., “Statewide Implementation.”
  28. Eric Bruns, co-director, Evidence Based Practice Institute (EBPI), University of Washington, interviewed June 23, 2015.
  29. Supplee and Metz, “Opportunities and Challenges”; Michael Hurlburt et al., “Interagency Collaborative Team Model for Capacity Building to Scale Up Evidence-Based Practice,” Children and Youth Services Review 39 (2014): 160–68.
  30. Deborah Daro, Replicating Evidence-Based Home Visiting Models: A Framework for Assessing Fidelity (2010). http://www.mathematica-mpr.com/~/media/publications/PDFs/earlychildhood/EBHV_brief3.pdf.
  31. Ibid.
  32. University of Cincinnati, “Summary of the Evidence-Based Correctional Program Checklist” (2008), https://www.uc.edu/content/dam/uc/corrections/docs/Training Overviews/CPC ASSESSMENT DESCRIPTION.pdf.
  33. Terry Salo, deputy commissioner, New York State Division of Criminal Justice Services, interviewed July 13, 2015.
  34. Ibid.
  35. Supplee and Metz, “Opportunities and Challenges.”
  36. Fixsen et al., “Statewide Implementation.”
  37. Brian K. Bumbarger, director, EPISCenter, interviewed March 19, 2015.
  38. Dusenbury et al., “A Review”; Fixsen et al., “Statewide Implementation.”
  39. William Baney, former director, Systems of Care Institute, Portland State University, interviewed March 17, 2014. 