Report Cards as a Grantmaking Tool (Fall 2006 Trust Magazine article)

We all can remember the report cards we received when we were in school. We relished the good individual letter grades and took exception to the bad ones; in either case, they prompted many of us to do better in class, especially if they were accompanied by guidance for improvement.

The Trusts uses report cards for much the same reasons and in much the same way but on a larger scale, grading organizations in a particular sector or even states on certain performance measures. During the past decade, Trusts-supported projects have applied this tool in a range of fields, including education, environment, health and human services, public policy and journalism.

Always within the context of a broader strategy, report cards assess performance on specified measures by using an easy-to-understand grading system of A, B, C, D or F. The underlying premise of a report card is that collecting and widely disseminating information on performance will provide incentives for better operations, reform or restructuring. Over time, the Trusts-supported report cards have had various purposes and impacts and provided program staff with lessons learned along the way.

Purpose

Report cards may be issued at different stages of an issue's life cycle, ranging from issue identification to implementation of an agreed-upon course of action. The goals prompting the use of report cards reflect this diversity; report cards can:

  • Draw attention to an issue. When combined with effective communications efforts, report cards can generate interest in a particular issue and spur action among the organizations being graded (e.g., public education systems), regulators and funders (e.g., state agencies), consumers (e.g., college students) and the media.  
  • Serve as an objective, credible source for evaluating performance. Through a technically rigorous and broad consultation process, report card developers create performance standards that organizations—and their customers—can use to assess performance and track progress toward improvement. In this way, report cards also encourage accountability in the organizations being graded.  
  • Highlight models of success. Organizations receiving good grades can represent models of successful practices for others to draw upon. In this way, many report cards help mark the road toward improvement for low scorers by drawing attention to best practices.  
  • Provide motivation for desired improvements. Report card grades allow for direct comparisons among organizations and play to competitiveness among organizations within a particular sector. Their public nature can motivate poorly performing organizations to improve.

Impact

Report cards can have many kinds of impact. For example, if the goal is to draw attention to an issue, then one might expect to see that the report card is picked up by the press and that the report card's target audiences are engaging in new conversations. If the goal is to spur action, then one might look for broader use of the report card's findings by advocacy groups, public officials and other agents and, eventually, a change in the entities being graded. Among the effects that the Trusts' report cards have achieved are:

  • Media attention. In 2005, the Government Performance Project, which publishes report cards on state management capacity by evaluating state performance in the areas of finances, human capital, infrastructure and information, was covered by approximately 300 newspaper and broadcast outlets in 45 states.

    The 10th annual edition of Quality Counts, the state report card for K-12 education, published in Education Week by Editorial Projects in Education in early 2006, generated more than 700 media stories.

  • New conversations. Every two years, the Trusts-supported National Center for Public Policy and Higher Education publishes Measuring Up, which grades each of the 50 states on how well their higher education systems perform in areas such as preparation, affordability, accessibility and degree completion. After the release of Measuring Up in 2004, the center's staff provided assistance to seven states that sought guidance in interpreting their particular results and assessing implications for state policy. The center expects to undertake the same kind of outreach following the edition released this fall.

    Similarly, after the Government Performance Project released its report card last year, its staff worked with officials in 10 states, providing detailed presentations to help them understand their grades and learn about best practices working elsewhere.

  • Fostering change. Advocacy groups, public officials and other change agents sometimes reference report cards to further their cause. The Trusts-supported National Institute for Early Education Research, based at Rutgers University, publishes an annual yearbook of state prekindergarten policy that assesses how well states are serving their three- and four-year-olds. In 2005, the institute's assessment of Arkansas's preschool program as one of the best in the country helped motivate the state to support a substantial expansion.

    In late 1999, the now-closed Pew Environmental Health Commission released a report, “Healthy from the Start: Why America Needs a Better System to Track and Understand Birth Defects and the Environment,” which included a report card on state birth-defects surveillance programs. More than 80 national and state health and environmental organizations referred to “Healthy from the Start” on their Web sites and in publications. In addition, the Congressional Prevention Coalition distributed the report to members of the U.S. House and Senate, along with a “Dear Colleague” letter, in an effort to raise congressional awareness of public health trends and the need for nationwide tracking of birth defects.

    Prompted in part by the commission's final recommendations, the Centers for Disease Control and Prevention issued an implementation plan for nationwide disease tracking, and Congress, in turn, appropriated funds to begin implementing the CDC's plan. Furthermore, 17 states passed legislation for increased funding to track birth defects.

  • Improved performance. Between the first and second iterations of the Government Performance Project's report card, half of the states improved their grades. While it is not possible to determine whether these improvements can be attributed directly to the project, many state agencies reported using the report card's criteria, which they view as credible and relevant, to assess their own progress. In an evaluation that the Trusts commissioned in 2004, 55 percent of state officials interviewed credited the project with motivating their states to take an interest in improving government.

Lessons Learned

While the responsibility of learning from a grade may belong to the evaluated organization, the Trusts has, over the years, identified several factors critical to a report card's success:

  • Rigorous methodology. The importance of a meticulous methodology that is scrutinized by experts in the field cannot be overstated. An “airtight” methodology ensures that the results are valid and makes it more difficult for target audiences, including organizations being graded, to critique the results based on methodological concerns.

    In developing a report card, considerations may include: working with an independent, nonpartisan organization to design the report card; providing for input during the development phase from subjects being graded so that they understand the methodology and can voice concerns early in the process; using reliable and valid quantitative measures whenever possible so as to avoid subjective judgments; and conducting a pilot test of the data collection so that challenges may be addressed before rolling out the larger effort.

  • Realistic time frame. A year may be needed to develop the indicators and methodology—a process that may include vetting the approach among well-respected researchers in the field—and an additional year may be required for data collection.

    Naturally, an assessment of a single dimension, such as the Pew Environmental Health Commission's measure of states' tracking systems for birth defects, takes less time to develop than report cards that look across multiple dimensions, like those of the Government Performance Project and Measuring Up.

    A second iteration of a report card, building on the methodology in place, can usually be issued more quickly.

  • Effective communications. Report cards are more likely to produce their desired impact if they include a well-developed communications strategy that reaches the project's primary audience.

    A communications strategy that is an integral component of the project, rather than an add-on, provides communications staff with a deep understanding of the project and enables them to produce materials most likely to be effective with targeted audiences—for instance, press releases, editorials and op-eds that explore complex issues raised by report cards; versions of the report card customized for particular audiences (e.g., national or state); and targeted packaging (e.g., full descriptions of the methodology for researchers and practitioners and a streamlined version for journalists, advocates, policy makers and the general public).

  • Complementary outreach. Although many report cards seek change, releasing and broadly disseminating a report card will rarely, by itself, bring about the desired change. A low-graded organization often benefits from a fuller understanding of both what went into its grade and how high-graded organizations achieved their success.

    When the Trusts' board renewed the Government Performance Project earlier this year, the project's outreach efforts and tailored assistance to states were expanded. Activities will include meetings of officials across states and a network of peer advisors from high-graded states who will visit other states to offer information and guidance.

    Similarly, to supplement the 2002 iteration of Measuring Up, the 50-state higher education report card, the Trusts supported the National Collaborative for Postsecondary Education, a joint project of the National Center for Public Policy and Higher Education (which publishes the report card), the National Center for Higher Education Management Systems and the Education Commission of the States. The collaborative worked intensively in Louisiana, Rhode Island, Virginia, Washington and West Virginia, developing and presenting policy options that addressed each state's unique concerns and providing policy-implementation support.

Conclusion

Report cards can be valuable tools for eliciting change, but as with other grantmaking tactics, their effectiveness is not a foregone conclusion. Because report cards can be used for a variety of purposes, program staff interested in this tool should begin with a clear understanding of what they hope it will accomplish.

Once a report card project is undertaken, attention must be dedicated to ensuring, through a sound methodology and unassailable data, that the grades are viewed as credible. Additional investments in communications and outreach will further increase the likelihood that a report card affects the graded entities much the same way report cards affected us as students: by drawing attention to important issues, highlighting best practices and promoting improved performance.

Nichole Rowles is an officer in Planning and Evaluation at the Trusts.