Tuesday, November 22, 2011
Even the authors of the study, Student Consensus on RateMyProfessors.com, don't suggest that the anonymous student comments reflect any indication of overall teacher effectiveness beyond those two limited measures, helpfulness and clarity. And even then, the site may serve only as a "rough" guide if the professor has accumulated at least 10 feedback ratings. As reported by the Chronicle of Higher Ed:
The Web site RateMyProfessors evokes skepticism among faculty members. Some view the anonymous evaluation site as a haven for rants and odd remarks ("He will crush you like an academic ninja!"), or a place where students go to grade instructors based on easiness or attractiveness (a chili-pepper icon distinguishes professors that are "hot" over those that are "not").
But new research out of the University of Wisconsin at Eau Claire suggests the popular service is a more useful barometer of instructor quality than you might think, at least in the aggregate. And the study, the latest of several indicating RateMyProfessors should not be dismissed, raises questions about how universities should deal with a site whose ratings have been factored into Forbes magazine's college rankings and apparently even into some universities' personnel evaluations.
. . . .
In their study, Ms. Bleske-Rechek and Ms. Fritsch probed the reliability of the site's ratings by focusing on the level of consensus among students for 366 instructors at their state university, each of whom had at least 10 evaluations.
The idea is that, if students rate professors based on idiosyncratic personal reactions—to a rude comment made in class, say—then it should take a lot of posts to reach a consensus. By contrast, if students are consistent in their ratings, then a consensus should emerge with a small number of evaluations.
. . . .
Ms. Bleske-Rechek found that professors with 10 evaluations displayed "the same degree of consensus in their quality ratings" as those with 50.
"Degree of student consensus about an instructor occurs very early on, in terms of how many raters there are," she said in an interview. "This is similar to what you see on traditional student evaluations of instruction. In other words, it seems like students are homing in on the same experiences in the classroom, because otherwise they wouldn't be showing consensus."
That suggests faculty members with at least 10 ratings "may be able to extract crude judgments" of how students perceive their "clarity and helpfulness," Ms. Bleske-Rechek and Ms. Fritsch write in their paper.
. . . .
Ms. Bleske-Rechek isn't the first researcher to mine this territory. An earlier paper from New York's Marist College, "'He Will Crush You Like an Academic Ninja!': Exploring Teacher Ratings on RateMyProfessors.com," concluded that the site's evaluations "closely matched students' real-life concerns about the quality of instruction in the classroom."
The paper added, "While issues such as personality and appearance did enter into the postings, these were secondary motivators compared to more salient issues such as competence, knowledge, clarity, and helpfulness."
And another study, conducted by researchers at the University of Maine, found strong correlations between ratings on RateMyProfessors and formal in-class evaluations.
Other researchers have blasted RateMyProfessors, however. In a 2009 paper, Elizabeth Davison and Jammie Price, of Appalachian State University, in North Carolina, faulted the site's category system for fostering an "anti-intellectual tone that manifests itself in comments about instructors' personality, easiness of workload and entertainment value rather than knowledge attained."
"My biggest validity issue with the site is that Overall Score is being perceived as 'teaching effectiveness' and yet is only based on perceptions of helpfulness and clarity," Ms. Davison says in an e-mail. "I believe teaching effectiveness is more complex and should include more-robust measures such as how much did a student learn, preparedness of the instructor, or the challenging nature of the material."
You can continue reading here.