Adjunct Law Prof Blog

Editor: Mitchell H. Rubinstein
New York Law School

A Member of the Law Professor Blogs Network

Tuesday, February 28, 2012

The Problem With Student Evaluations

Adjunct Professor Tim Edwards of the University of Wisconsin Law School sent in an excellent commentary on student evaluations that is equally applicable to full-time faculty. I could not agree more with the statements below. It is a bit long, but stay with it, as it is well worth the read:

____________________ 

Student Evaluations

Timothy Edwards

Axley Brynelson, LLP 

I write to share my thoughts about the use of student evaluations to evaluate instructor performance at my Law School.  I have taught here, as an adjunct, for over ten years.  During that time, I have taught Legal Writing, Advanced Legal Writing, Civil Procedure I, Civil Procedure II, Pre-Trial Advocacy and Professional Responsibility.  The purpose of this document is to inspire discussion, not to offend.  

As an adjunct, I am removed from the day-to-day discussions within the Law School, including those pertaining to student evaluations. When I started, I was not provided with any training. I received no feedback regarding my teaching from any of the faculty members at the Law School. I often invited members of the faculty to sit in and evaluate my teaching, but it never happened. From what I understand, this is common at most law schools that rely on adjuncts, both to teach and to keep institutional budgets in check. I am not suggesting that this approach is wrong, only that it has consequences.

Absent such an evaluative process, the only feedback that I have received comes from student evaluations.  Most of the time my evaluations are quite good.  More recently (and for reasons that I will explain), my evaluations have suffered, due in some measure to my own actions.  Unfortunately, it appears that these evaluations are the only tool that the Law School relies on in measuring the performance of its adjunct lecturers.  To the extent another metric is being used, I have not been told about it, nor have I seen it in my classroom. 

My thesis, which is hardly novel, is very simple: Absent some corroborating tool to evaluate instructor performance, student evaluations are an inherently unreliable and misleading source of information for measuring the effectiveness of an instructor. While student evaluations can provide objective information (e.g., whether the instructor is on time, intoxicated, treats the students appropriately or appears to be organized), law students are not equipped to objectively evaluate the value of their own learning experience, or the skills of the instructor, when they complete their evaluations. Their evaluations should not be used for this purpose.

From what I understand, one central objective for the Law School is for its instructors to teach the students how to analyze legal problems and prepare them to practice law. I believe that this requires, among other things, instruction in the analytical and practical skills that the students will actually use when they become lawyers. This emphasis has been confirmed by recent studies and consistent commentary criticizing the significant gap between theory and practice that pervades our law schools. I have observed this gap, and its impact on young lawyers, who are often unprepared for the practice of law when they graduate. Many students who graduate from the UW Law School do not even know how to cite a case or prepare a basic pleading (as I teach pre-trial advocacy, the blame for some of this should rest squarely on my shoulders). We have seen this over and over at our firm, to the point that some of my partners are reluctant to hire from law schools that do not have a comprehensive legal writing program.

As an adjunct who litigates, full time, in his “real life,” one of my primary goals is to impart some practical knowledge and skills to my students. Students need to understand that the law, as written, is often applied quite differently in practice. Students need to understand (and acclimate to the fact) that the practice of law is demanding and, in many ways, unforgiving. Problems do not have easy answers, and they don’t always have “right” answers. Deadlines are critically important, as is timing. Confusion is common, as clients, judges, senior partners and opposing counsel often make it difficult to solve problems involving competing interests and to effectively represent a client. This is a very difficult job with tough challenges that cannot always be resolved by reading a book or looking up a statute. The students need to know what they are signing up for, and to the extent possible, they should be prepared to follow through. Of course, this should be done at the appropriate time in their education.

Some basic thoughts: 

  • A law school student (especially in her first year) typically has a very narrow set of objectives. Generally speaking, she wants to get a good grade. She wants to know what will be on the test, or what I am looking for in a given writing assignment. She wants to figure out the easiest possible way to get a good grade by doing well on that task, and she wants immediate, detailed feedback on any work she does because she is scared. As a general matter, these students believe that grades are everything, and they are rarely interested in whether they are learning how to be a good lawyer unless it helps them get a better grade. In the meantime, they resist confusion, perceived inconsistency or anything else that distracts them from the most efficient path to a good grade. While this description is somewhat exaggerated, it is, for the most part, accurate. The pressure to perform well and secure a good grade defines their objectives in many critical ways.
  • As a law school instructor, my objectives are much different.  While I want everyone to succeed, I am less concerned about whether the students are confused or struggling to address a problem.  I tell them how litigation works.  We apply the rules to different situations and I often ask them questions that do not have an easy answer—questions that require the application of judgment, not just knowledge.  I require the students to meet deadlines, and I require them to rewrite assignments that are done poorly.  I don’t accept a lot of excuses and I expect a lot from them.  At the risk of being truly unpopular, I now ban laptops unless used for note-taking purposes.  In addition, I no longer buy them pizza.     
  • I also focus on problem solving.  Setting aside the first few weeks, I do not “spoonfeed” information from the book or hold the students’ hands through every single issue in the reading material.  As a result of this, the students become frustrated, but their learning experience is much different.  It seems likely that my evaluations dropped because I am doing a better job of teaching and the students are, in fact, learning more.  In any case, the evaluations tell me nothing about whether I actually did my job. 
  • In years past, I have often received very favorable evaluations. In every single one of those situations, I tried to align my teaching style with the students’ perceived expectations and needs. I “taught to the test” (or, in legal writing, spoonfed what I expected on the writing assignment) and did everything I could to placate their needs and expectations (a “consumer” model, if you will). In retrospect, I view this approach as ineffective, and I view the evaluations as somewhat useless because they appear to reflect the students’ comfort level more than anything else.
  • Last spring, I taught Evidence. Unfortunately, my work commitments distracted me from the class, and I was frequently absent. The evaluations were low, and deservedly so. The students complained about the absences and the resultant disorganization. This is a perfect example of how student evaluations can be used, in limited instances, to identify objectively verifiable problems with instructor performance. I deserved the criticism.

This should not be a popularity contest. Moreover, the Law School should not rely on student evaluations to determine whether the students are learning basic analytical and practical skills. While students may have general, verifiable information to share, they are not presently qualified to assess our teaching skills or, for that matter, whether they actually learned anything in our classrooms. I am not basing this conclusion on a fancy empirical assessment of student evaluations but, rather, on common sense, years of teaching experience, and many years of reviewing inconsistent and misguided student evaluations that have done little to assist me as I search for new and more effective ways to teach.

In addition to the fact that student evaluations cannot provide meaningful information regarding teaching skills or learning, they are also inherently unreliable. Consider them through the lens of the Federal Rules of Evidence, which are designed, at their core, to exclude unreliable information offered to prove a given assertion. Setting aside the fact that evaluations may not be probative of teaching skills or learning, many are insulting, false and otherwise prejudicial. More importantly, student evaluations constitute inadmissible hearsay, and their unreliability is compounded by the fact that the out-of-court declarant is completely anonymous. Finally, no court would ever consider such random aspersions from an unknown declarant as competent character evidence. Understanding that this comparison is limited because the Law School is not a courtroom, the application of these rules does reinforce a basic point regarding the inherent unreliability of student evaluations: they would never see the light of day in a courtroom.

I am not pretending that I have all of the answers, and I write this short paper only to make a simple point: it is not fair or wise to judge adjuncts solely through student evaluations. The Law School should put other measures in place (peer mentoring, etc.) and provide continued training to all of its adjuncts. The Law School should not tolerate an environment where students can surf the internet in class (without reading the assigned material) and then anonymously criticize their instructor for not being “engaging” or “organized.” To bridge the gap between theory and practice, students should be appropriately confronted with the realities of the practice of law, not placated when they are properly challenged. While this may lead to lower evaluations, it will certainly lead to better lawyers.

* * * * *

http://lawprofessors.typepad.com/adjunctprofs/2012/02/the-problem-with-student-evaluations.html

Adjunct Information in General, Law Professors, Law Schools | Permalink

Comments

Study: highly rated professors are... overrated

How does a university rate the quality of a professor? In K-12 education, you have standardized tests, and those scores have never been more widely used in evaluating the value added by a teacher.

But there's no equivalent at the college level. College administrators tend to rely on student evaluations. If students say a professor is doing a good job, perhaps that's enough.

Or maybe not. A new study reaches the opposite conclusion: professors who rate highly among students tend to teach students less. Professors who teach students more tend to get bad ratings from their students -- who, presumably, would just as soon get high grades for minimal effort.

The study finds that a professor's rank, experience and stature are far more predictive of how much students will learn. But those professors generally get bad ratings from students, who are effectively punishing their professors for attempting to push them toward deeper learning.

The study is called "Does Professor Quality Matter? Evidence from Random Assignment of Students to Professors." It was written by Scott E. Carrell of the University of California, Davis, and the National Bureau of Economic Research, and by James E. West of the U.S. Air Force Academy.

It uses as a laboratory the Air Force Academy, where students are randomly assigned to courses such as Calculus, each taught using an identical syllabus. All students are required to take specific follow-up courses. So, the researchers were able to study how each professor fared in producing results for his or her students, and how the same students did the next semester, and so on.

The findings are, to say the least, counterintuitive. Professors rated highly by their students tended to yield better results for students in their own classes, but the same students did worse in subsequent classes. The implication: highly rated professors actually taught students less, on average, than less popular profs.

Meanwhile, professors with higher academic rank, teaching experience and educational experience -- what you might call "input measures" for performance -- showed the reverse trend. Their students tended to do worse in that professor's course, but better in subsequent courses. Presumably, they were learning more.

That conclusion invites another: students are, in essence, rewarding professors who award higher grades by giving them high ratings, and punishing professors who attempt to teach material in more depth by rating them poorly.

"Since many U.S. colleges and universities use student evaluations as a measurement of teaching quality for academic promotion and tenure decisions," the authors write, "this latter finding draws into question the value and accuracy of this practice."

Posted by: Tim Edwards | Mar 2, 2012 9:01:29 AM
