Sunday, May 19, 2013

Are Student Evaluations of Faculty Reliable?

No, says Dean Nora Demleitner: "At this point, evaluations of teaching tend to be largely restricted to student and peer assessments. Both, as we know, are easily manipulable. Therefore, faculty achievements and successes are difficult to measure and compare."  (here at 608)

Demleitner cites a comprehensive article on law school course evaluations:

Daniel E. Ho & Timothy H. Shapiro, Evaluating Course Evaluations: An Empirical Analysis of a Quasi-Experiment at the Stanford Law School, 2000-2007, 58 J. LEGAL EDUC. 388 (2008).

Some excerpts:

"Despite widespread use, consensus on their validity remains elusive, with scholars highlighting interpretation difficulties, noncorrespondence between evaluations and student performance, and lack of comparability, validity, or reliability."

"We document dramatic effects of wording and timing changes on evaluations, well-known in the survey literature. Although superficially similar, subtle differences in question wording may systematically affect evaluations, threatening comparability of evaluations across institutions or time."

"At Stanford, the new evaluations shifted both the mean and variance on the 5-point rating scale. The inadvertent result can be dramatic when considering a "cutoff" rule: for the same course, an instructor has a 35% probability of falling below 4.5 using the old evaluations, but this probability jumps to 59% with the new evaluations."
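The cutoff effect Ho and Shapiro describe can be illustrated with a quick back-of-the-envelope calculation. The sketch below (in Python) treats an instructor's rating as approximately normally distributed and shows how a modest shift in mean and variance changes the probability of falling below a 4.5 cutoff; the specific means and standard deviations are hypothetical, chosen only to reproduce the rough magnitudes quoted above, and are not taken from the article.

```python
# Illustrative only: how a shift in the mean and variance of ratings
# changes the probability of landing below a fixed cutoff.
# The distribution parameters below are assumptions, not figures from
# Ho & Shapiro (2008).
from statistics import NormalDist

CUTOFF = 4.5

# Hypothetical rating distributions under the old and new evaluation forms
old_evals = NormalDist(mu=4.60, sigma=0.27)  # assumed
new_evals = NormalDist(mu=4.45, sigma=0.22)  # assumed

p_below_old = old_evals.cdf(CUTOFF)  # roughly 0.35 under these assumptions
p_below_new = new_evals.cdf(CUTOFF)  # roughly 0.59 under these assumptions

print(f"P(rating < {CUTOFF}) under old form: {p_below_old:.2f}")
print(f"P(rating < {CUTOFF}) under new form: {p_below_new:.2f}")
```

The point is not the particular numbers but that a cutoff rule converts a small distributional shift into a large change in who falls below the line.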

"[W]e demonstrate a primary pitfall to adopting an online response system. While online evaluations have many potential advantages, including increased survey flexibility, shorter response times, and reduced costs, they suffer from generally high and nonrandom nonresponse, threatening the validity of any summary results. Our analysis confirms such bias in the law school context."

"Our examination reveals considerable temporal trends in survey responses, suggesting nonuniform nonresponse bias across terms and instructors, and a strong upward trend in mean evaluations. We demonstrate how to account for such trends to consistently and effectively learn from evaluations."

(Scott Fruehwald)
