The Promise of Evidence-Based Practice in Behavioral Health Treatment
In an era characterized by a combination of limited resources and demands for greater accountability, health and human service organizations are increasingly required to demonstrate that their services have positive outcomes. These pressures are particularly strong in the behavioral health arena where, thanks to a substantial body of recent research, there is an increasing amount of evidence about effective treatments for many mental and addictive disorders. However, the incorporation of these "evidence-based" approaches into the routine practice of behavioral health providers has proved challenging. Making this change is complex and involves significant adjustments in the priorities of policymakers, as well as in the clinical practice and organizational infrastructures of providers.
This session was designed to help local agencies understand the drive towards evidence-based practices and some of the implementation challenges that providers may face. There were two presentations. John Morris, Director of Human Service Practice for the Technical Assistance Collaborative, a national non-profit organization that provides leadership and consultation on the design, implementation, and financing of best practices in behavioral health services, described the issues involved in moving science into the "real world" of mental health and substance abuse treatment. Arthur C. Evans, Jr., Director of the Office of Behavioral Health and Mental Retardation Services for the City of Philadelphia's Department of Public Health, then spoke about the ways that evidence-based practices are helping to transform the city's behavioral health system and the implications for local providers. The session concluded with a question-and-answer period.
Making the Transition from Research into Practice
In his presentation, "Science into Practice: The Challenges of Moving Toward Evidence-Based Practices," John Morris said, "We are beginning to develop good science" on effective behavioral health and substance abuse treatments, but "it is taking a very long time to make the translation from science into practice." Why does it take so long to move these treatments into the "real world"? These were among the reasons he discussed:
- There is no direct pipeline from the research world to the practice world. The practice world is a very different environment from academia, where research takes place. One difference is the gap between the language of science and the language of practice, which makes it difficult for researchers and practitioners to communicate effectively and understand one another's points of view. In addition, there are challenges involved in bringing innovations to scale. Something that has been tested in a small demonstration project in one or two locations may not work when implemented widely under a number of different local conditions.
- Four interacting elements complicate the process of bringing evidence-based practices from the laboratory into the real world. They are the realities of the economic environment, the political environment, the scientific environment, and the practice environment. Morris used the concept of "policy pinball" as a framework for describing these dynamics. Someone launches an evidence-based practice into play. How does it score points? It scores economic points if it looks like it will save money. It scores political points if there is general agreement that it deals with a significant public problem, such as over-utilization of high-end services at hospitals, or if a local legislator is a strong backer of the practice. It scores science points if it is clear why the treatment is effective and if the approach is "doable." And it scores practice points if the provider community is frustrated by what is not working and agrees on the need for change. A lot of times, Morris said, the ball is launched and, instead of scoring points, ends up rolling into the gutter.
- Evidence-based practices and outcomes measurement are intertwined, and there are costs for providers in implementing them. Some of the costs can be anticipated--for example, the direct costs involved in tracking outcomes. The expenses associated with collecting and using data can be significant, but the agencies and organizations requiring performance measurement do not always acknowledge the extent of the costs involved in purchasing instruments, contracting with evaluators, and building the technical infrastructure to track performance. Other costs are sometimes not fully anticipated. These include staff time and energy. Clinicians do not necessarily see themselves as data managers, and the extra time they spend on reporting mechanisms, such as filling out forms, can feel intrusive as it takes away from time they could spend providing direct services. In addition, consumers and their families may feel burdened by having to complete data-collection instruments that they do not see as relevant or necessary. Morris described this situation as "the dangers of a zero-sum game"--there are only so many hours in a day, and program time spent on data collection can mean that something else does not get done.
- Definitions of "quality" and how it should be measured tend to change fairly regularly. Compounding providers' challenges in identifying and implementing evidence-based practices is the fluidity of the definition of "quality." Who defines it? At one important level, quality should be defined by the children and families that providers serve. However, translating that into policy and practice is more complex because there are many stakeholders: private insurers, government purchasers, and other funders--all of whom have their own definitions of what constitutes quality. In addition, national accrediting bodies have their own ideas about what quality is and how providers should demonstrate it to them, while licensing organizations and professional associations also have their own defined professional standards. Adding to the challenge is the proliferation of measurement tools for collecting data about performance and quality. Morris called this "the Kudzu Phenomenon," after the vine that was originally introduced in the southern U.S. to help stabilize the land but quickly got out of control--it spreads at the rate of a foot a day and currently covers about two million acres in the South. Kudzu, Morris said, "is a science intervention that went awry, and the same may be said for performance and outcome measurement." Measurement tools, report cards, and indicator sets abound--public and private, proprietary and free, individual-based and population-based, consumer-oriented and provider-oriented, scientifically validated and not. For providers, the result is a lack of clarity about what is available and which instruments are best suited for their purposes.
The Pressure to Implement Evidence-Based Practices
Despite the implementation challenges, evidence-based practices are critical tools for improving the quality of life for people with behavioral health and substance abuse conditions, whose recovery can be enhanced by science working on their behalf. As a result, Morris said, the pressure on providers to implement these practices is significant, and there are adjustments they can make to respond to this imperative. These were among his major points:
- A number of forces are driving the movement towards requiring evidence-based practices. Purchasers of health care--including public funders and private insurance providers--are the major force, along with policymakers such as the Substance Abuse and Mental Health Services Administration (SAMHSA) and state mental health authorities. SAMHSA, in fact, has developed five toolkits based on evidence-based practices in the behavioral health field: on illness management and recovery; assertive community treatment; supported employment; family psychoeducation; and integrated dual disorders treatment. In addition, accrediting organizations such as the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) are becoming more involved in pushing for evidence-based practices, as are private foundations and, to a lesser extent, consumers and their families, who want to know more about the services that providers are offering. For providers, the pressure can feel like what Morris called "the Nike Imperative"--"just do it!"
- Different terms are used to describe the practices, based on the scientific validity of the evidence. Morris described four levels of evidence and emphasized that these terms should not be used interchangeably. "Evidence-based practice" is the gold standard; it refers to practices that have been proven effective through rigorous research using randomized studies, in which people are randomly assigned to receive either the treatment being studied, a different treatment, or no treatment. "Best practice" refers to a close fit between those proven practices and the reality of available resources; a provider might not, for example, be able to replicate the staffing practices exactly as they were done in the studies, but it can essentially implement the major components. A "promising practice" has some scientific evidence and there is a strong consensus among experts or consumers that it works; with time and enough resources, it is likely to become a best practice. An "emerging practice" has broad acceptance, but its value has only been demonstrated anecdotally.
- Providers should cultivate evidence-based thinking. "Learn to love data," Morris said. Actively look for outcomes data. There is a lot that providers do in intervening with families that thus far lacks a strong evidence base but is probably good practice. Examine these interventions for evidence; look for ways to quantify their outcomes. If they do not work, stop them. But if they look promising, allow time for them to mature and develop evidence of their effectiveness. And talk about outcomes and performance openly with colleagues, and especially with consumers and their families--let them know that the goal is to help them get where they want to go.
- There are other steps that providers can take to strengthen their own effectiveness and the field as a whole. First, advocate for pre-professional training for staff that is consistent with the kinds of services that are most effective, and similarly advocate for a stronger system of continuing education, which currently relies heavily on one-shot trainings that have no real impact. In addition, pay attention to system redesign issues that support quality: if providers are not reimbursed at a level that allows them to both deliver services and track quality, they are going to choose to deliver the services, and that is a reality that funders have to face. And, importantly, provide better education for consumers and their families so they have real information about the services that are affecting their lives and the quality of their care.
- There are rewards in implementing evidence-based practices. For providers, successfully using evidence-based practices allows them to demonstrate competency and effectiveness, and increases their credibility with the external community. Everyone in publicly funded services is under scrutiny: implementing proven practices and benchmarking achievement provide a competitive advantage in a challenging fiscal environment. In addition, it can strengthen the client-clinician partnership, as organizations make it clear to clients that there are benefits to the new practices and that they can contribute by helping to track the experience.
Transforming Behavioral Health and Mental Retardation Services in Philadelphia
In his presentation, "System Transformation and the Role of Evidence-Based Practices," Arthur C. Evans applied Morris's overview to the specific changes taking place in Philadelphia. He first described the framework guiding this transformation. These were among his major points:
- An expanding research base is showing how to improve the effectiveness of treatments and supports. "The system cannot be constructed on our philosophies of what we think works," Evans said. "It has to be constructed on the data about what does work." Those data sometimes show that traditional concepts have been wrong and, thus, systems of treatment have been misdirected. For example, most mental health professionals were traditionally trained to believe that people with schizophrenia were not going to improve. But more recent research shows that two-thirds of those people can do better--that is, their symptoms will abate or they will be able to function at a fairly high level despite their symptoms. Those findings make clear that the current service system is not doing what it should because it is a maintenance system that measures success through such indicators as whether people avoid re-hospitalization or imprisonment. Instead, the scientific data indicate that the system should focus on helping people improve and should measure success through such indicators as whether they get jobs or become integrated into the community.
- While the City of Philadelphia has a strong behavioral health system, there are also challenges. Evans, who has been Director of the Office of Behavioral Health and Mental Retardation Services for about a year, noted that he was drawn to the job because the city has one of the best-developed systems in the country, with its focus on children's services, innovative practices, and an integrated approach to service delivery. But there are still challenges in the system. There are, for example, gaps in levels of care. When people leave residential treatment, the next level is usually outpatient treatment, and that leap is too large for a lot of them. In addition, there are disparities among different demographic groups in their access to behavioral health services and the effectiveness of the care they receive.
- Three principles are driving the transformation of the behavioral health system in Philadelphia. To strengthen the system's effectiveness, the city is organizing its work around three principles. For adults receiving behavioral health services, the goal is recovery; and for children, it is resiliency. For people receiving mental retardation services, the goal is increased self-determination. It is important, Evans noted, that the principles do not talk about symptoms; they focus on "people being able to have a life." The key questions then become: What helps people recover? What helps children become more resilient? What helps people with mental retardation become able to have more self-determination?
- The city is using a number of strategies to reframe services around these organizing principles. The strategies focus on utilizing data to evaluate and develop services, implementing evidence-based practices, eliminating racial and ethnic behavioral health disparities, promoting a consumer- and family-driven system, and developing new community-based supports and partnerships, such as using faith-based institutions to reach out to people and help them become part of the community. But there are challenges to successfully implementing the strategies. The system has to identify evidence-based models that promote both life skills and symptom reduction and can be adapted to work effectively with a diverse population of clients. In addition, many of the studies of effective practices have been done in settings where the direct service providers have advanced degrees; but in Philadelphia, staff who spend the most time with clients are often the least trained.
The Role of Evidence-Based Practices in Transforming the System
Evans said that implementing evidence-based practices is an essential strategy in transforming the city's system of care into one that gets results for both providers and the people they serve. What are the major challenges that programs face, and how can evidence-based practices help address them? For example, one common challenge is the high dropout rate for outpatient treatment: people come for one or two sessions, and then they disappear. The result is that they often have another crisis and require acute services. What proven practices might help address this and other challenges? These were among his major points:
- The concept of "evidence" can be thought of broadly. The city's framework, which is similar but not identical to the one Morris outlined, includes four levels of evidence. "Evidence-based practices" are interventions that have a body of rigorous studies showing strong outcomes. For example, a series of studies has demonstrated that supported employment--a practice that helps people find and maintain meaningful jobs--is a highly effective intervention that results in significant gains in employment rates, earned income levels, and employment tenure among individuals with severe behavioral health disorders. However, because very few practices have received this level of rigorous research, the city has adopted an expanded view of "evidence" that includes support for practices in three other categories. "Evidence-supported practices" have demonstrated effectiveness, but through less rigorous evaluations. For example, program evaluations, rather than random-assignment studies, might suggest that a practice works to lower the no-show rate for an outpatient service. "Evidence-informed practices" are evidence-based practices that have been modified to meet the needs of a specific population. An example is an approach that has been proven effective for doing outreach to Southeast Asians and has been modified for outreach to African-American men. "Evidence-suggested practices" do not yet have strong research proving their effectiveness, but they seem to work, based on experience and agreement among experts.
- All programmatic and policy decisions should be explicitly based on some level of evidence. Evans re-emphasized Morris's point that providers have to begin to ask themselves: "What is the evidence that what I am doing is working, that it is having the impact I want it to have?" This approach requires that organizations develop the infrastructure to implement and evaluate evidence-based programs. They have to be able to identify appropriate practices, train staff, and establish indicators of success--such as increases in employment and community integration, and a decrease in homelessness--and develop a quality management process that uses these indicators as the basis for measuring performance, identifying areas of weakness, and modifying their efforts to strengthen outcomes.
- There are considerations in choosing which evidence-based practices to implement and how to implement them. Evans cautioned providers not to choose an evidence-based practice just for the sake of doing it. Instead, ask: Is it targeted to the specific problem the organization is trying to address? Is there evidence the practice works with the specific population the program is serving? What are the policy and fiscal considerations necessary for implementation? Can it be realistically implemented with the staff resources available? How much training is required for staff? How accessible is the training? Is it copyrighted, meaning that it is necessary to go through a particular organization to get the training? He also described what he called "the magnitude issue": How can providers get the "biggest bang for their buck"? For example, programs have made small changes--such as phoning consumers a day in advance to remind them of their scheduled appointment--that have significantly decreased the outpatient dropout rate.
- There are limitations to evidence-based practices. Evans said that while randomized studies have been extraordinarily useful, they can limit what researchers are willing to look at because they tend to examine variables that are easily measured and controlled. One of their problems is that such studies often exclude people who have co-occurring behavioral health and substance abuse conditions, a common challenge in Philadelphia. In addition, highly controlled studies, by their very nature, reduce variability in the treatment setting and among clients, and there can be limitations in how applicable they are across settings and cultural contexts. As a result, there is always a tension between implementing an evidence-based practice with fidelity to the model and having the flexibility to adapt the practice so it works within the specific local conditions where it is being implemented.
- Evidence-based practice is changing Philadelphia's behavioral health system. The city is taking a number of concrete steps to close the gap between knowledge and practice in the field of behavioral health. Evans' department has established an internal working group that is looking at which evidence-based practices to implement in the service system, and it wants to collaborate with providers in identifying those practices and the modifications needed in city policies so the practices can be adopted. In addition, when organizations apply for funding from the Health Choices program reinvestment pool, they will be required to describe evidence demonstrating that their approach will work. Finally, like all city agencies, the department is moving towards performance-based contracts and requiring increased accountability from providers. At the same time, the city's behavioral health office is looking to help providers implement effective practices through a supportive policy environment, resources and reimbursement, and training opportunities for their staff.
Following the presentations, participants had an opportunity to ask questions and raise concerns. These were among the issues they wanted to learn more about:
- The Blue Ribbon Commission on Children's Behavioral Health: Evans said his office established this new commission for two purposes. First, the commission will develop an organizing framework around the behavioral health service system for children. Second, while a number of groups have come out with recommendations for strengthening the system, the commission will make decisions about which ones should be implemented. The commission has been holding hearings around the city and is scheduled to issue its recommendations in the fall.
- The level of evidence required by the city for approving a particular practice: When organizations are applying for funding from the city, what level of evidence is required to demonstrate that an approach will work? Evans said there are some approaches providers are using, like sheltered workshops, that his office knows do not result in positive outcomes. There are no data supporting their continuation, so even if an organization said it had a different method of running them, Evans would say to move to a supported employment model because there is strong evidence that the strategy works. Beyond that situation, however, there is a balance between the city saying, "Here are some strategies or practices that we know work," and providers saying, "We want to try something else." If a provider can offer evidence that its approach works, the city will listen.
- The challenge of identifying what to include in performance indicators: There sometimes seems to be a tension between the performance indicators that organizations are required to measure and the real focus of their programs. For example, when the focus is on recovery, one key indicator would seem to be consumer satisfaction, but that is difficult to measure in an objective way. Morris said one problem is that most of the outcomes that have been well researched thus far are system-level outcomes, such as reductions in hospitalization and use of residential care. To a certain extent, these do track the values that consumers have. But at the same time, one of the issues for the field is how profound a shift the focus on recovery represents--it is a very individually experienced outcome--and designing an approach to system-level data collection around individual recovery and consumers' achieving their own goals has been challenging.
- The gap between policy and evidence-based practice: Several people noted that policy does not always support practices that have been demonstrated to be effective and that programs are sometimes judged by criteria that are not necessarily consistent with good practice. When that happens, and when monitoring visits focus on less important parts of a program and ignore what is innovative, it can create cynicism on the part of program staff. Evans agreed that policy is often made without concern for evidence--what he termed "a lack of evidence-based policymaking." He said his office is trying to "walk the talk" and make policy that is consistent with strong practices, while also making internal adjustments in how they monitor programs and in training their own staff to support the larger shift to an evidence-based approach.