Evaluation at The Pew Charitable Trusts (Fall 2006)

Achieving results is central to the mission of the Trusts.  Evaluation contributes to this aim by enabling us to gauge the return on the Trusts' investments, test the effectiveness of specific strategies, and offer an informed perspective for adapting programs to upcoming opportunities.

In 1988, the Trusts established an internal department called Research and Evaluation.  In the department's early years, evaluations served two main purposes: to ensure grantee accountability to the foundation and to facilitate grant renewal decisions. As the financial oversight of grants shifted to a growing Grants Administration department and grant monitoring duties shifted to the program areas, the Research and Evaluation department began to look at issues of effectiveness and organizational learning.

In 1992, the evaluation department commissioned its first review of an entire cluster of grants, or program (a set of projects pursuing a collective goal).  Evaluators hired to conduct those early cluster reviews suggested that more tightly focused programs would have a greater effect. The Trusts realized that such focus could not be achieved simply through better evaluations, but rather by integrating the lessons learned from evaluations into program planning, sharpening programmatic focus from the start. Taking those lessons to heart, the evaluation unit evolved into what is today Planning and Evaluation.

Over time, this integration of evaluation with planning was explicitly linked to the Trusts' core activity:  developing and implementing programs that lead to social benefits.   This linkage occurs through a process we call the internal strategy cycle (depicted below), which has three major stages.  The first, strategy development, involves creating a coherent and convincing plan, with feasible and measurable objectives, to address a specific problem. The second, implementation, entails turning the plan into action with our partners, carefully monitoring progress, and adjusting the plan as necessary. The final stage begins with a rigorous and independent evaluation of the overall strategy. Program staff then integrates the findings from this evaluation into a revised plan, triggering a new round of the internal strategy cycle. The entire cycle, from strategy development to cluster review, can take three to five years, or longer.

[Chart: The internal strategy cycle]

The internal strategy cycle starts with a desire to seize an opportunity or respond to a problem or issue (such as declining voter turnout among young people). Program staff then designs and presents to our board a strategy outlining a potential role for the Trusts. Two key steps in selecting an issue are understanding the root causes of a problem and then determining whether feasible approaches are available to address them.  Moving too quickly to a solution is perhaps the most common mistake in program design; it usually means that we have not developed a complete sense of the causes or have not fully reviewed the options for tackling them. To help us avoid this error, program staff at the Trusts engages with colleagues, informally in brainstorming sessions and formally in internal peer-review meetings, to examine a problem from as many sides as possible. These teams benefit from the knowledge and counsel of outside experts as well as that of the Trusts' leadership.

If the proposed strategy is approved, program staff works with external partners to develop a coherent portfolio of projects to carry it out. As the strategy is implemented and projects are launched, program and evaluation staff develops a monitoring plan that provides the information needed to make good management decisions. When monitoring raises issues for the program staff, a focused evaluation answers the question “why is this happening?” (Suppose that monitoring reveals an increase in voter turnout among college students. An evaluation could help explain whether turnout is increasing because voter registration campaigns are working, a particular issue is galvanizing the student community, or a close election has increased voter turnout generally.)  Every year, program staff reports its progress and any necessary strategy adjustments to the board. After three to five years, a cluster review looks back at the effectiveness of the strategy and the lessons we learned from the experience, as well as what those lessons and changes in the field might mean for the Trusts' future investments in that area. Knowledge gained from the development, implementation, refinement and evaluation of the strategy then informs the decisions of program staff, the Trusts' management and the board going forward.

As with any approach, the process described above has both costs and benefits. On the cost side, it entails a close collaboration between program staff and evaluation staff at many points—and we do not pretend that negotiating those relationships is easy. The Trusts' approach is also undeniably resource-intensive, demanding both human and financial investment. But among the many benefits, this approach calls us to be accountable for results and for learning from our mistakes as well as our successes. It ensures that we do not ignore initiatives and strategies once they are launched, but continue to question our assumptions about the process of change, and gives us the room to make corrections when we find that we were wrong. It lays out a framework to help us target our resources to the places where they can have a tangible effect.

We do not pretend that this is the only approach to pursuing social benefits (or to evaluation), or that it is the best. We do believe, however, that this strategic approach has yielded stronger and more consistent results from the Trusts' programs.  It has helped us measure our success while improving our ability to develop and manage programs, acknowledge limitations and ultimately become more effective. These practices cannot guarantee success, but we have abundant evidence that they improve the odds.

Lester W. Baxter
Planning and Evaluation, The Pew Charitable Trusts

The Pew Charitable Trusts, an independent nonprofit, serves the public interest by providing information, advancing policy solutions and supporting civic life. Based in Philadelphia, with an office in Washington, D.C., the Trusts will invest $248 million in fiscal year 2007 to provide organizations and citizens with fact-based research and practical solutions for challenging issues.