Pew’s programmatic work begins with a deep understanding of the problem and a focused strategy to reach solutions, and it concludes with a rigorous analysis of results. The Planning and Evaluation department supports the organization’s efforts by providing thoughtful guidance and critique on initial program design, objective measurement of progress against benchmarks, identification of lessons that can be applied to Pew’s work, and assessment of our ultimate return on investment.
The bronze front doors of the Thomas Jefferson Building of the Library of Congress contain bas-reliefs in the Beaux-Arts style. One door, celebrating knowledge, depicts allegorical figures of Truth and Research. Installed in 1896, it encapsulates the thinking of that era, grounded in principles that still ring true today.
The turn of the 20th century opened up new possibilities for gaining insight into the world through observation and experimentation, and for applying the knowledge gained to solve theoretical and practical problems. We are familiar with the advances that ensued in the basic sciences of physics, chemistry and biology, and in industry, where applied knowledge spurred changes with far-reaching technological and social consequences.
The fields of planning and evaluation are another legacy of that era, even though they were not formalized or professionalized for many decades afterward. They involve science in the sense of approaching problems or issues systematically to encourage insight. They rely on observation and study, require discipline and rigor, and thrive on clarity of thought, including a firm awareness of what is not known or can only be assumed. Yet there is also room for exercising judgment and intuition in the practice of both—planning and evaluation are part science and part craft.
One misconception about planning is that creative solutions to complex problems can reliably emerge from a structured planning process. Creativity cannot be planned, except perhaps in the sense of giving people the freedom to experiment, innovate and learn from their mistakes and successes. Planning, however, can contribute to problem solving by testing, refining and clarifying ideas; bringing to bear the most relevant prior experience; and providing opportunities to adapt ideas to changing conditions.
As an example, the Planning and Evaluation unit recently engaged with the Pew Environment Group in its efforts to define several key crisis areas in ocean conservation and propose approaches to addressing them. The Environment staff, along with their partners, brought deep field and campaign expertise to bear in diagnosing these problems. Planning and Evaluation contributed a different set of skills.
We urged the Pew Environment Group to examine the root causes of the issues to better ensure that the solutions proposed were relevant and feasible. We promoted discipline in setting and articulating objectives so that these would be as unambiguous as possible. We encouraged clear statements of strategy that outlined how each objective would be pursued. We helped our colleagues identify intermediate milestones so that, in the course of each individual project, we could all gauge progress toward the appropriate objectives.
With a large and complex program like ocean conservation, not every aspect of the strategy was fully envisioned. Recognizing this, we asked staff to determine the steps needed to bring the embryonic aspects of the strategy to maturity. Finally, we shared the insights gained from Pew’s previous experience, including earlier evaluations, concerning the effectiveness of various strategies. In the end, the planning effort contributed to an ambitious and thoughtful strategy for Pew’s marine program for the coming decade.
Eventually, of course, we will evaluate the performance of Pew’s ocean conservation work, applying the same combination of science and craft that went into the ocean program’s planning. In essence, the planning effort provides the framework we will use later to evaluate the program. Having clarity on a project’s objectives, strategy and milestones allows us to frame thoughtful and incisive evaluation questions.
At the evaluation stage, we team with external consultants to apply multiple methods, drawing from several data sources to search for evidence that responds to our evaluation questions. For each specific program objective, the evaluators collect data on baseline conditions—the state of the issue before the project’s initiation; document any changes observed over time toward our objectives; determine the actions of Pew’s projects and their consequences; and, finally, assess whether these actions can be linked to the observed changes.
There is no one best way to evaluate all programs or strategies, nor do we seek definitive answers to our questions. Thus, evaluation results are typically nuanced (“the strategy performed well under these conditions and faced challenges under others”) rather than unequivocal (“the strategy worked”). In addition to assessing a project’s performance, we also seek to learn from our successes and mistakes, so that the institution as a whole benefits from each initiative.
As an organization driven by the power of knowledge and committed to achieving concrete results, Pew has a threefold goal for planning and evaluation: to strengthen the design and implementation of the program initiatives; to inform critical institutional and programmatic decision making; and to advance our understanding of how we can be effective in our work. While the nature of the issues that Pew is addressing may change over time, we believe the rigor with which we approach them must remain constant.
Lester W. Baxter
Director, Planning and Evaluation