Monday, December 22, 2008
The Wall Street Journal has two interesting pieces (one here and one here) on efforts to rate the effectiveness of charities. Charity Navigator is probably the best known of these efforts, but as the first of the WSJ articles points out, charity ratings can vary widely by the methodology used. And like the U.S. News rankings of law schools, the mere existence of the rankings causes some charities to behave in ways targeted solely at improving their ranking under the criteria used, rather than at their mission. The story also reports that, just like law schools and U.S. News, some charities apparently engage in
"accounting chicanery aimed at achieving a higher ranking. For example, groups report lower fund-raising costs, and lower costs per dollar raised, by reassigning fund-raising costs to program expenses. This can be accomplished by including some sort of advocacy message in fund-raising materials: A veterans group sending out fund-raising letters might encourage the flying of American flags in the same missive."
Frankly, I don't know what to think of rating systems. My own view is that the U.S. News ranking of law schools has done far more harm than good; aside from the wacky incentives created by the rankings, they often make consumers (potential law students) lazy: instead of researching and thinking through the law school application and selection process, too many of the students I meet today base their selection on a single number in U.S. News. The WSJ articles on charity rankings indicate that we're not close to that situation -- yet -- with donors and charities, and certainly some of the information is extremely useful (the overall ratio of administrative overhead to program expenditures, for example). But I wonder . . .