Framing the Big Picture (Spring/Summer 2002 Trust Magazine article)

Many of the problems that the Trusts address through our grantmaking are sufficiently complex that we approach them not grant-by-grant but through a portfolio of grants designed to tackle an issue from multiple angles. If we are looking to address a problem in the public health system, for example, the portfolio might contain a variety of efforts that include support for research, public education and advocacy.

Our board expects program staff to invest in areas that can produce specific and measurable results within three to five years. The overall problem may not be solved in that time (for instance, protecting old-growth forests and wilderness areas on public lands), but there must be clear progress in reaching the milestones we have set for ourselves.

To help us understand our return on investment, we undertake a “cluster review” of our work at the end of this three-to-five-year period, when the grantmaking portfolio has reached maturity. The cluster review focuses on the effectiveness of the strategy over time, as collectively implemented by all of the relevant grantees, and helps determine if we need to reassess the portfolio's initial intent.

In short, cluster reviews help us understand the progress on a problem or issue the Trusts are addressing, and the Trusts' contribution to that progress. Unlike evaluations of individual grants, which tell us about that effort alone, cluster reviews allow us to assess the effect of all the grants in the entire portfolio. This broader perspective allows us to take stock of our performance as a charitable foundation. How close have we come to our intended goal? Does the problem that inspired our original effort continue to be ripe for investment? How well did we work with grantees and with other funders?

Cluster reviews provide direction for the future: Does the strategy we employed remain the most effective way to get at this issue or problem? Or should we shift direction, withdraw from certain lines of work and expand into others that could be more promising? Are there better ways to accomplish the same ends?

By yielding information on what works and what doesn't in a variety of situations, cluster reviews provide lessons of broader interest to help us become a stronger organization and refine our internal thinking on strategic philanthropy. This information is integrated into the Trusts' ongoing work in the form of learning tools, professional development courses and informal discussions to strengthen the grantmaking process.

We have been conducting these reviews for 10 years, covering all of the grant portfolios in each of our program areas. Last year, the Planning and Evaluation department looked for broader institutional lessons from these reviews. We reread and analyzed past cluster review reports to determine what themes had emerged across programs over time--for instance, did some of the later portfolios reveal mistakes that previous work should have taught us to avoid? Here are some of the lessons we found:

  • It is better to have a measurable influence on a smaller problem than an indiscernible effect on a larger one. Over the years, the findings from these reviews have confirmed that it is important that we set ambitious but realistic goals, keeping in mind the limited resources we can bring to bear. In addition to a rigorous internal review process, we now use external reviews to ensure that our goals are attainable and that the Trusts' support can realistically make a difference.

This external review process provides an opportunity for frank feedback on such questions as: Are we proposing to use our resources as effectively as possible? Is the scale of our funding appropriate to the problem?

If a problem requires more resources than we have ourselves or are able to coordinate with peers, we move on. 

  • Strategy is built on a set of assumptions about how the world works--and about how our investment will help change occur. It is worth reexamining these premises early in a strategy's life cycle, because the ability to make midcourse corrections ultimately saves valuable time and resources.

One lesson that we have learned is to tailor the analysis to the need at hand. Testing important but narrow assumptions of a strategy need not involve large-scale evaluations. We are currently experimenting with smaller-scale analyses that provide insight when it is most needed, not months after the fact.

We want to further refine our techniques for testing key assumptions, even in advance of cluster reviews or other formal evaluations. 

  • Never assume that the world will continue to work in the future much as it works today. Foundations send their programs out into a social environment over which they have little or no control. The review underscored the point that, to achieve success, it is critical to adapt to changing circumstances and even to anticipate them, when possible.

Accordingly, for our strategies to be successful, we must routinely scan the horizon for critical shifts in the policy landscape and be nimble enough to respond in a timely fashion. For that reason, program staff will continue to seek feedback from the field on the continued viability of particular strategies. We must always be mindful that changes in the world may thwart a well-designed strategy, necessitating appropriate midcourse adjustments.

  • Keep your eyes on the prize. On particularly resistant issues, it is easy to mistake a means for an end, especially over time. Our grantmaking tends to focus on near-term progress: If we see that the lack of information or the absence of policy is holding back progress on an issue, our approach is to fill that void with objective, fact-based information or policy analysis in order to advance public debate. But we do not consider ourselves successful simply because a public debate helps make policymakers or the public aware of an issue. Until our work contributes to meaningful solutions to the problem--for instance, campaign finance reform--we do not consider the strategy a success.

Overall, this exercise of examining past cluster reviews demonstrated the value of “evaluating our evaluations.” It taught us ways to refine the cluster review process, so that we may sharpen our analyses of returns on investment, more clearly identify cross-programmatic lessons and more broadly extend internal dissemination of the key findings.

Page Snow was chief officer for institutional planning when this article was written. Les Baxter is chief officer, evaluation, at the Trusts. To understand the cluster review in the context of the Trusts' total planning and evaluation work, see Returning Results: Planning and Evaluation at The Pew Charitable Trusts.