Wednesday, July 17, 2013

Charter School Study of Student Achievement Draws Criticism from All Sides



Two weeks ago, I posted on the Stanford Center for Research on Education Outcomes' (CREDO) new charter school study, which indicated that, on the whole, charter schools have shown improvement since 2009.  The prior 2009 CREDO study, in contrast, had reached less than flattering findings regarding charters and had been a key source of evidence for charter opponents.  I point this out because it meant that the new, marginally positive results were not coming from a charter school "cheerleader."  On that basis, I gave the new findings special attention and the benefit of the doubt.

Those far more expert than I in statistical methods, however, have dug into the report and begun to raise serious questions.  In fact, the report is now drawing criticism from all sides.  Some charter school advocates will still charge that the report does not give charters enough credit and understates the gains they are making.  In other words, the report may be positive news for charters, but not positive enough.  Other charter advocates take a slightly different route and wildly exaggerate the study's findings.  The National Alliance for Public Charter Schools posted this news blurb:

Stanford University Study Finds Public Charters Better Serve Disadvantaged Student Populations
A study released by the Center for Research on Education Outcomes (CREDO) at Stanford University found that public charter school students in 27 states are outperforming their traditional public school peers in reading while making significant gains in math.

Sorry, but the study does not exactly say that.  It says charter schools in these states have shown more gain than traditional public schools, but charters were starting from a lower point.  They have not, however, surpassed traditional public schools in achievement.  The new CREDO study finds that, on the whole, only 25% of charters outperform public schools in reading and only 29% outperform public schools in math.

One leading charter school proponent is neither overstating nor applauding the report.  Instead, it is calling the study into question in a way that undermines the entire study and deprives charters of any positive spin they might put on it.  Jeanne Allen, director of the pro-charter Center for Education Reform, says that "[t]he way that CREDO has manipulated data and made conclusions about policy based on that data is absolutely 'uncredible.' "  A news release on the Center's website adds:

The new CREDO report, an update of one previously issued in June 2009, is again extremely weak in its methodology and alarming in its conclusions. . . No matter how well-intentioned, the CREDO research is not charter school performance gospel . . . Similar to its failed 2009 effort, this CREDO study is based on stacking mounds of state education department data into an analytical process that is decidedly lacking in rigor.

This criticism from inside the charter school community is causing significant internal dissension, as reported by NPR.  

The National Education Policy Center, a non-partisan academic research center at the University of Colorado, has raised even more pointed and serious questions, which suggest the reported gains may not exist at all.  In a release from yesterday, Andrew Maul and Abby McClelland offered this overall review:

The study finds a small positive effect of being in a charter school on reading scores and no impact on math scores; it presents these results as showing a relative improvement in average charter school quality since CREDO’s 2009 study. However, there are significant reasons for caution in interpreting the results. Some concerns are technical: the statistical technique used to compare charter students with “virtual twins” in traditional public schools remains insufficiently justified, and may not adequately control for “selection effects” (i.e., families selecting a charter school may be very different from those who do not). The estimation of “growth” (expressed in “days of learning”) is also insufficiently justified, and the regression models fail to correct for two important violations of statistical assumptions. However, even setting aside all concerns with the analytic methods, the study overall shows that less than one hundredth of one percent of the variation in test performance is explainable by charter school enrollment. With a very large sample size, nearly any effect will be statistically significant, but in practical terms these effects are so small as to be regarded, without hyperbole, as trivial.
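That last point, that a very large sample will make even a trivial effect register as statistically significant, is easy to demonstrate.  The Python sketch below is my own toy simulation with invented numbers, not CREDO's data or model: it generates a million synthetic test scores in which charter enrollment explains only about one hundredth of one percent of the variance, yet a conventional significance test still returns a vanishingly small p-value.

```python
# Toy simulation (hypothetical data, not CREDO's): a practically trivial
# "charter effect" becomes highly statistically significant once the sample
# is large enough.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 1_000_000                          # a CREDO-scale sample of student records
charter = rng.integers(0, 2, size=n)   # 0 = traditional public, 1 = charter

# Assumed true effect of 0.02 standard deviations, chosen so that charter
# status explains roughly 0.01% of the variance in scores.
effect = 0.02
scores = effect * charter + rng.normal(0, 1, size=n)

r, p = stats.pearsonr(charter, scores)
print(f"variance explained (R^2): {r**2:.6f}")   # on the order of 0.0001
print(f"p-value: {p:.2e}")                       # effectively zero
```

Statistical significance here says only that the tiny difference is unlikely to be pure chance; it says nothing about whether the difference matters educationally.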

More specifically, they point out that the study excluded 15% of charter school students because it could not produce a "virtual twin" match for them in the regular public schools.  These excluded students, however, had scores that were 0.43 standard deviations below those of the other charter school students.  In other words, many of the weakest charter school students were not counted at all.
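A rough back-of-the-envelope calculation, using only the two figures reported above and nothing from the study itself, suggests why this exclusion matters: dropping 15% of students who sit 0.43 standard deviations below the rest mechanically raises the remaining group's average by roughly 0.06 standard deviations.

```python
# Back-of-the-envelope arithmetic (my own, built only from the two figures
# cited above): how much does excluding the unmatched 15% of charter students,
# who score 0.43 SD lower, raise the charter group's average?
excluded_share = 0.15    # share of charter students with no "virtual twin"
score_gap_sd = 0.43      # how far below the matched students they score (in SD)

# Mean of the full charter population, measured relative to the matched students.
full_mean = (1 - excluded_share) * 0.0 + excluded_share * (-score_gap_sd)

# Dropping the excluded students moves the group mean from full_mean up to 0.
upward_shift = 0.0 - full_mean
print(f"apparent upward shift from exclusion: {upward_shift:.3f} SD")  # about 0.065 SD
```

Whether that shift actually biases the growth estimates depends on how the excluded students would have grown, but it is not a trivial amount relative to the very small effects at issue.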

Second, (if I understand it correctly) the study's statistical model compared individual students in charter schools to individual students in public schools.  Maul and McClelland seriously question this model, however, because it does not account for classroom-level variables.  For instance, what if the charter school classroom had a higher average socioeconomic status than the public school classroom?  If that were the case, any increased learning in the charter could easily be a result of the positive peer effects of the classroom demographics rather than of the charter school's instructional method or structure.
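To make that concern concrete, here is a toy simulation of my own, with made-up parameters rather than anything from the CREDO data: test scores are driven only by a student's own socioeconomic status and by classmates' average status, with no true charter effect at all, yet a student-level comparison that controls only for the student's own characteristics still attributes a positive effect to charter enrollment.

```python
# Toy simulation (my own illustration, not CREDO's model): an unmeasured
# classroom-level peer effect can masquerade as a "charter effect" when the
# comparison is made purely at the individual-student level.
import numpy as np

rng = np.random.default_rng(1)
n_classrooms, class_size = 4_000, 25
n_students = n_classrooms * class_size

charter = rng.integers(0, 2, size=n_classrooms)        # is the classroom in a charter?
# Assumption for illustration: charter classrooms draw somewhat higher-SES peers.
class_mean_ses = rng.normal(0.5 * charter, 1.0)

peer_ses = class_mean_ses.repeat(class_size)           # what the student-level model ignores
student_ses = peer_ses + rng.normal(0, 1, n_students)  # each student's own SES

# True data-generating process: scores depend on own SES and on classmates'
# SES, but NOT on charter status itself.
scores = 0.5 * student_ses + 0.3 * peer_ses + rng.normal(0, 1, n_students)

charter_student = charter.repeat(class_size)

# Naive student-level regression controlling only for the student's own SES,
# roughly analogous to matching each student to an individual "virtual twin".
X = np.column_stack([np.ones(n_students), charter_student, student_ses])
beta, *_ = np.linalg.lstsq(X, scores, rcond=None)
print(f"estimated 'charter effect' on scores: {beta[1]:+.3f}")  # clearly positive, though the true effect is zero
```

Under these assumptions the apparent charter advantage is entirely a peer effect; nothing about the schools themselves differs.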

Third, they point out that the CREDO study's "virtual twin" methodology does not account for measurement error in students' standardized test scores.  In other words, students with the same standardized test scores are not always similarly situated, and statistical modeling is necessary to adjust for that.  CREDO did not.  Maul and McClelland's full review is available on the National Education Policy Center's website.
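Here, too, a toy simulation of my own (with assumed numbers, not anything estimated from the CREDO data) shows the mechanism behind this third criticism: when students are matched only on a noisy baseline score, and the two sectors' populations differ in average true ability, regression to the mean alone produces a "growth" gap at follow-up even though true growth is identical in both groups.

```python
# Toy illustration (my own, not the CREDO model): matching on a noisy observed
# score does not equate students on true ability, and the mismatch shows up
# later as a spurious difference in "growth".
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
noise_sd = 0.5                                   # measurement error in the test

# Assumption for illustration: charter entrants start 0.2 SD below the
# traditional-public population in true ability.
true_charter = rng.normal(-0.2, 1, n)
true_public = rng.normal(0.0, 1, n)

obs_charter = true_charter + rng.normal(0, noise_sd, n)
obs_public = true_public + rng.normal(0, noise_sd, n)

# "Match" on the observed baseline: keep students from each sector whose
# observed scores fall in the same narrow band.
low, high = -0.55, -0.45
m_c = (obs_charter > low) & (obs_charter < high)
m_p = (obs_public > low) & (obs_public < high)

# Follow-up test: no true growth for anyone, just a fresh noisy measurement
# of the same underlying ability.
follow_c = true_charter[m_c] + rng.normal(0, noise_sd, m_c.sum())
follow_p = true_public[m_p] + rng.normal(0, noise_sd, m_p.sum())

print(f"baseline gap (charter - public):  {obs_charter[m_c].mean() - obs_public[m_p].mean():+.3f}")
# The follow-up gap is nonzero even though true growth is identical in both groups.
print(f"follow-up gap (charter - public): {follow_c.mean() - follow_p.mean():+.3f}")
```

The direction and size of that artifact depend on the assumed population gap and the amount of measurement error; the reviewers' point is that the matching approach does not model the error at all.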

Reports of this scale and importance will always generate criticism, but these criticisms seem to strike hard at the core of the report.  If these criticisms are valid, one must wonder why CREDO made these leaps.  Did it feel compelled to reach more favorable findings than in 2009?  If so, why? Or was this just poor research design?  Either way, this new study may be destined to live under a cloud of doubt, rather than become a definitive study like its 2009 counterpart.

    --db

http://lawprofessors.typepad.com/education_law/2013/07/charter-school-study-of-student-achievement-draws-criticism-from-all-sides.html
