Trust Magazine

The History of Evaluation at Pew

How Pew uses evaluation to inform and improve its work

Fall 2020
The Pew Charitable Trusts

The Pew Charitable Trusts commissioned its first evaluation in 1985, hiring external experts to examine its work and highlight successes and failures. Since then, evaluation has been an integral part of Pew’s approach to philanthropy, helping the organization understand its progress and make decisions about the future direction of its work based on sound, independent analysis. A look back shows how evaluation at Pew has evolved over time.

The early days

After decades of largely anonymous grantmaking, Pew began to take a more active role in identifying and partnering with grantees to develop programs. By the 1990s, Pew had embraced an approach known as “strategic philanthropy,” which sought to derive greater benefit from every investment of capital, time, and talent, much as venture capitalists do. In Pew’s case, however, the return on investment was measured not in profits, but in long-lasting, positive, and powerful benefits to the public.

But how could philanthropic organizations like Pew best measure the public benefit resulting from their investments? Program evaluation, which emerged in the 1960s to assess the effects of publicly funded social programs, held promise. Pew began commissioning evaluations—carried out by independent consultants and overseen by internal staff—to examine the performance of individual grants and the collective results of groups of projects.

By 2000, evaluation had become recognized as a valuable tool at Pew, providing information that measured success and informed decisions about a program’s direction. Leaders at Pew, and at other philanthropies in the U.S., began to identify ways to better link evaluation with their future planning. Evaluators noted that programs with clear plans of action, informed by systematic analysis, were more likely to see measurable benefits. As a result, evaluation staff were increasingly called upon to provide objective input as strategies were being developed and to help staff articulate clear and measurable goals. The strong link between evaluation and planning remains critical to Pew’s work today.

Shift to public charity

After 2003, when Pew changed its status from a private foundation to a public charity, the evaluation unit adjusted its approach to assess work that Pew operated directly rather than initiatives run by grantees. Evaluations broadened to include Pew’s role in implementing program strategy: these reviews assess not only whether a program’s goals were achieved, but also what Pew contributed to any progress observed. We also identified criteria to guide discussions about which initiatives would benefit from evaluation, such as prospects for broader learning; opportunities to inform institutional or program decisions; questions about progress from the board of directors, management, or a donor; investments that were substantial or highly visible; and program or project readiness for evaluation.

Over time, our team has implemented new kinds of evaluations. In addition to reviewing projects that have been underway for several years or more, we have increasingly overseen evaluations that examine the design, implementation, or early progress of a project to inform its ongoing management and interim decision-making. A recent example is our team’s 2019 assessment of Pew’s Evaluation Capacity Building Initiative (ECBI), which helps Philadelphia-based health and human services organizations gather and use data more effectively to strengthen their work. The ECBI evaluation, conducted during the initiative’s first two years, helped us understand how it was working and led to several improvements, including changes in the selection and orientation of participating grantees and more training and oversight of those working with grantees.

Evaluation Through the Years

1985: Pew’s first evaluation, of the Pew Health Policy Program, which provided health policy fellowships through the funding of four university programs, found that the fellowships had a positive influence on participants’ training and career paths. For instance, former fellows were filling leadership positions in health policy and reported that the program had a substantial influence on the quality and nature of their job performance.

1998: An evaluation of Pew’s community development grantmaking in Philadelphia, which had invested $39 million through 71 grants since 1991, found that Pew’s support had helped strengthen the community development infrastructure and also revealed small improvements in indicators such as real estate prices and home mortgage approval rates in areas of the city. However, it also found that the strategy was unable to stimulate broad-scale change due to the depth and breadth of challenges in the city as well as grantees’ inability to access other sources of funding. Because of this, Pew scaled back its support for community development infrastructure and concluded the program in 2004.

2010: Pew evaluated its U.S. public lands strategy, which launched in 1999 and was initially funded through grants to partner organizations, but evolved to include projects directly operated by Pew. The evaluation found that Pew had made decisive contributions to new administrative and legislative wilderness protections and increased momentum for further reforms. The assessment also noted that Pew was able to effectively recalibrate its strategy in response to challenges in the external environment, and credited the talent and expertise of Pew staff as drivers of success.

Learning from evaluations

Pew has long been committed to learning from its evaluations. As Susan K. Urahn—then the director of planning and evaluation and now Pew’s president and CEO—explained in 1998, “Besides helping us understand how well we are doing, evaluation gives us the chance to learn from our work. … We have the obligation to learn from our efforts regardless of the outcome. There is no other way to get better at what we do.”

In recent years, philanthropic and nonprofit organizations have paid increasing attention to how evaluation staff can help organizations apply lessons from evaluation activities to inform and improve their program strategies. A 2019 survey of evaluation departments at philanthropies found that responsibility for organizational learning generally falls to evaluation staff, and nearly three-fourths of respondents said their departments place a high priority on supporting learning. This renewed attention to evaluation’s learning potential has offered insights and opportunities to improve our efforts to strengthen Pew’s initiatives.

Our team supports staff learning in a variety of ways. For example, we bring together program staff from projects that share similar characteristics to discuss common themes from across our evaluations. After a recent evaluation of our public sector retirement systems work, we brought together leaders from other Pew initiatives aimed at improving government performance to share applicable lessons. One, for instance, was the importance of impartial research and analysis as a tool for Pew to gain credibility and traction in its state work.

Other approaches we use to support learning include consulting with staff to help them decide how they will monitor and learn from the implementation of their work, and developing summaries of insights from past evaluations and external literature to inform practices within Pew. Our team has also helped Pew’s research initiatives learn about effective ways to strengthen connections between research and policy by sharing findings from prior evaluations with staff and by facilitating discussions about how to apply these findings to their work. We also brought in external experts and supported staff conversations about how to track and measure the progress of research. When possible, we also share relevant findings externally, with Pew’s partners and the philanthropy field.

Looking ahead

As Pew continues to evolve, so too will the role of evaluation. For instance, as Pew continues to address challenges that disproportionately affect people of color, including incarceration, juvenile justice, household debt, and access to credit, our unit is examining how our evaluations might be best positioned to support this important work. This means considering, for example, the diversity of our evaluation teams, the cultural appropriateness of our methods, the ability of evaluations to answer questions about the effect of a strategy on historic drivers of inequity, and, when relevant, the inclusion and participation of communities affected by our work.

As Urahn stated some 20 years ago, we have an obligation to learn from our work and adapt based on what we find. The evaluation unit at Pew is committed to helping the institution learn and adapt, and by doing so we contribute to the long-lasting, positive, and powerful benefits to the public that Pew seeks to achieve.

Nicole Trentacoste is The Pew Charitable Trusts’ director of evaluation and program learning. 
