Tuesday, April 15, 2014
Last week, the American Statistical Association released a report on "Value-Added Models" (VAMs) that attempt to assess the effectiveness of teachers. The report reads as a word of caution about current policies that rely heavily on students' standardized test scores to evaluate teachers. Rather than misstate the report, I offer its own bullet-point summary:
• The ASA endorses wise use of data, statistical models, and designed experiments for improving the quality of education.
• VAMs are complex statistical models, and high-level statistical expertise is needed to develop the models and interpret their results.
• Estimates from VAMs should always be accompanied by measures of precision and a discussion of the assumptions and possible limitations of the model. These limitations are particularly relevant if VAMs are used for high-stakes purposes.
  o VAMs are generally based on standardized test scores, and do not directly measure potential teacher contributions toward other student outcomes.
  o VAMs typically measure correlation, not causation: Effects – positive or negative – attributed to a teacher may actually be caused by other factors that are not captured in the model.
  o Under some conditions, VAM scores and rankings can change substantially when a different model or test is used, and a thorough analysis should be undertaken to evaluate the sensitivity of estimates to different models.
• VAMs should be viewed within the context of quality improvement, which distinguishes aspects of quality that can be attributed to the system from those that can be attributed to individual teachers, teacher preparation programs, or schools. Most VAM studies find that teachers account for about 1% to 14% of the variability in test scores, and that the majority of opportunities for quality improvement are found in the system-level conditions. Ranking teachers by their VAM scores can have unintended consequences that reduce quality.
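To put a concrete picture on that last point, here is a minimal sketch (mine, not the ASA's) of what "teachers account for about 1% to 14% of the variability in test scores" can mean: a one-way random-effects variance decomposition on simulated data. The number of teachers, class size, and the size of the teacher effect are all made-up illustrative assumptions.

```python
# Illustrative variance decomposition: how much of the spread in test scores
# is "between teachers" versus "within classrooms"? All settings are invented.
import numpy as np

rng = np.random.default_rng(0)

n_teachers = 200          # hypothetical number of teachers
students_per_class = 25   # hypothetical class size

# Simulate scores: a modest teacher effect plus much larger student-level noise.
teacher_effect = rng.normal(0.0, 0.3, size=n_teachers)          # sd 0.3
scores = (teacher_effect[:, None]
          + rng.normal(0.0, 1.0, size=(n_teachers, students_per_class)))

# One-way random-effects ANOVA variance components.
grand_mean = scores.mean()
class_means = scores.mean(axis=1)
n = students_per_class

ms_between = n * np.sum((class_means - grand_mean) ** 2) / (n_teachers - 1)
ms_within = np.sum((scores - class_means[:, None]) ** 2) / (n_teachers * (n - 1))

var_teacher = max((ms_between - ms_within) / n, 0.0)  # between-teacher component
var_student = ms_within                               # within-class component

share = var_teacher / (var_teacher + var_student)
print(f"Estimated share of score variance attributable to teachers: {share:.1%}")
```

With these made-up settings the true teacher share is roughly 8%, comfortably inside the range the ASA cites. The only point of the exercise is that "share of variability" is a modeled quantity, sensitive to the assumed variance components and data structure, not a direct measurement of any one teacher's contribution.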