Tuesday, August 12, 2014
Nonprofit evaluation is a key component of an efficient charitable market. Without a way to measure social impact, neither nonprofits nor donors can tell whether an investment is being put to its most productive use. (Social impact may be thought of as what a charity accomplishes with a donation.) As Stephen Goldberg notes in his book Billions of Drops in Millions of Buckets, one reason the charitable market is inefficient is that funders currently cannot differentiate between effective and ineffective charities. The Stanford Social Innovation Review recently examined the need for a shift in nonprofit evaluation and the discourse surrounding it in “Measuring Social Impact: Lost in Translation,” and the ideas expressed hold valuable insights for the sector.
Most importantly, the authors contend that nonprofits need to set the evaluation agenda and should use a qualitative approach in addition to a quantitative one. They point out that if nonprofits do not shape the evaluation conversation, funders will do it for them. They identify five specific things that nonprofits should “talk more about” in evaluation.

First, nonprofits should focus more on their purpose and their strategy for achieving it. As the authors advise, “[A]ll nonprofits should have a clearly defined theory for how they will create change that connects their strategies and programs to the results that they anticipate.”

Second, nonprofits should spend more time discussing people. Funders often want nonprofit assessment to rely on quantitative measures, such as the number of people indirectly affected. Too much emphasis on quantitative analysis, however, reduces a nonprofit’s impact to a series of numbers. The authors promote a more balanced approach that includes qualitative assessments as well: “Qualitative assessments that draw on conversations with people are often more consistent with how nonprofits operate, and they are also a methodologically valid form of evaluation.”

Third, nonprofits would benefit from drawing attention to the big picture. In other words, evaluation should consider how a given nonprofit’s work fits within the collective transformation of an area.

Fourth, nonprofits should not shy away from discussing their challenges. Their failures and lessons learned contribute to collective learning, so the authors urge nonprofits to treat transparency, and not merely monitoring, as a goal of evaluation.

Finally, nonprofits should encourage more learning. Currently, funders (who focus more on monitoring than learning) have a much louder voice in evaluation than the beneficiaries and nonprofit workers who are directly involved and best positioned to facilitate learning.
In terms of the discourse surrounding nonprofit evaluation, the authors caution that business, managerial, and scientific language is drowning out the nonprofit voice, which underscores the need for nonprofits to take charge of shaping evaluation. Too often, terms such as “investment,” “returns,” “output,” and “outcomes” are used to discuss social impact, with little regard for the five areas identified above. The Stanford team’s study of 400 individuals and organizations in the nonprofit sector revealed that the vocabulary of nonprofit evaluation typically falls within three cultural domains: (1) managerial, (2) scientific, and (3) associational, with managerial terms dominating the discourse. All of these domains hold valuable insights for the nonprofit sector; nevertheless, nonprofits themselves should be the ones to shape their evaluation and the discourse surrounding it.