Wednesday, August 9, 2017

Federal Court Finds Texas Teacher Evaluation System Is a "House of Cards," Issuing Ruling That Helps It Fall

The federal district court in Houston Federation of Teachers v. Houston Independent School District handed the “war on teachers” a huge loss this summer, acknowledging the major flaws in the district’s teacher evaluation system.  Like many other states, Texas operates a value-added teacher assessment system.  Under Houston’s implementation policy:

student growth will whenever possible be calculated by a value-added statistical model called the Educational Value–Added Assessment System (EVAAS), developed by private software company SAS and licensed for use by [the district]. The EVAAS system measures teacher effectiveness by attempting to track the teacher's impact on student test scores over time. The details are more complicated, but in general a teacher's EVAAS score is based on comparing the average test score growth of students taught by the teacher compared to the statewide average for students in that grade or course. The raw EVAAS score is generated by SAS's proprietary software and is then converted to a test statistic referred to as the “Teacher Gain Index” (TGI), based on the ratio of the EVAAS score to its standard error. The TGI is sorted into one of five “value-added” effectiveness ratings.

The district then uses those ratings to make employment decisions for teachers, including termination.
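For readers who want the mechanics in concrete terms, the pipeline the court describes reduces to two steps: divide the raw growth estimate by its standard error to produce the TGI, then bin the TGI into one of five ratings.  The following Python sketch illustrates those two steps; the cut-off values are hypothetical stand-ins, since the actual EVAAS model and its thresholds are proprietary to SAS.

    def teacher_gain_index(evaas_score: float, standard_error: float) -> float:
        # TGI is the ratio of the raw EVAAS growth estimate to its standard error.
        return evaas_score / standard_error

    def effectiveness_rating(tgi: float) -> int:
        # Bin the TGI into one of five value-added ratings (1 = lowest, 5 = highest).
        # These boundaries are hypothetical; the real ones are set by SAS and the district.
        cutoffs = [-2.0, -1.0, 1.0, 2.0]
        rating = 1
        for bound in cutoffs:
            if tgi > bound:
                rating += 1
        return rating

    # Example: growth of 3.0 with a standard error of 2.0 yields a TGI of 1.5,
    # which falls in the fourth of the five rating bands under these cut-offs.
    print(effectiveness_rating(teacher_gain_index(3.0, 2.0)))  # prints 4

Note how much turns on the standard error: the same raw growth estimate lands in a different rating band depending on how precisely it was measured, which is one way small samples and noisy tests feed directly into the rating a teacher receives.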

Some may recall the lawsuit grabbing headlines when it was first filed.  Of particular note, the district had recognized one of its teachers as award-winning just one year before ranking him as low-performing based on his value-added score.

As I detail in The Constitutional Challenge to Teacher Tenure, 104 Cal. L. Rev. 75 (2016), these value-added systems, along with their close cousins (student growth percentile models), are riddled with fundamental flaws: tests that do not match the curriculum, failure to account for demographic variables, instability in ratings from year to year, arbitrary cut-off scores between the effectiveness ratings, and conflation of correlation with causation.

All of these substantive problems translate into serious constitutional concerns, most notably procedural due process.  The Constitution entitles teachers to notice and an opportunity to respond when their jobs are placed in jeopardy.  Yet these systems do not give any notice of a particular problem with a teacher’s instruction, so teachers are in no position to know how to respond, whether by improving their teaching or by refuting the statistical evaluation.  These are classic examples of due process violations.

One of the biggest jokes was in Florida, where some teachers are rated on the test scores students receive in other classes.  To be crystal clear, their evaluation score is based on how students perform in someone else’s class.

Reluctant to stand in the way of reforms sweeping the nation and mandated by the federal government, the Eleventh Circuit Court of Appeals was willing to paper over the problems and reason that Florida’s attempt to improve teaching overall was sufficient to justify the program.  (I debunk the outcome in that case here.)

The federal district court in Texas made no such excuses for Houston’s teacher evaluation system, concluding that “cost considerations trump accuracy in teacher evaluation.”  In other words, the district knew the system was flawed but did not want to invest the resources to improve it.  As a result, the entire system was a “house-of-cards”:

[T]he wrong score of a single teacher could alter the scores of every other teacher in the district. This interconnectivity means that the accuracy of one score hinges upon the accuracy of all. Thus, without access to data supporting all teacher scores, any teacher facing discharge for a low value-added score will necessarily be unable to verify that her own score is error-free.

. . .

The EVAAS score might be erroneously calculated for any number of reasons, ranging from data-entry mistakes to glitches in the computer code itself. Algorithms are human creations, and subject to error like any other human endeavor. HISD has acknowledged that mistakes can occur in calculating a teacher's EVAAS score; moreover, even when a mistake is found in a particular teacher's score, it will not be promptly corrected. As HISD candidly explained in response to a frequently asked question, “Why can't my value-added analysis be recalculated?”

Once completed, any re-analysis can only occur at the system level. What this means is that if we change information for one teacher, we would have to re-run the analysis for the entire district, which has two effects: one, this would be very costly for the district, as the analysis itself would have to be paid for again; and two, this re-analysis has the potential to change all other teachers' reports.
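The court’s interconnectivity point is easy to demonstrate with a toy model.  In any system where each teacher’s score is measured relative to an average computed over everyone, correcting one teacher’s data moves that average, and with it every other teacher’s score.  The Python sketch below is a deliberately simplified stand-in for illustration only, not SAS’s proprietary EVAAS model.

    import statistics

    def relative_scores(growth_by_teacher):
        # Toy value-added-style score: each teacher's average student growth,
        # measured relative to the mean across all teachers in the pool.
        mean_growth = statistics.mean(growth_by_teacher.values())
        return {t: g - mean_growth for t, g in growth_by_teacher.items()}

    # Suppose teacher C's growth figure contains a data-entry error.
    data = {"A": 4.0, "B": 5.0, "C": 1.0, "D": 6.0}
    before = relative_scores(data)   # A: 0.0, B: 1.0, C: -3.0, D: 2.0

    # Correcting C's single erroneous entry...
    data["C"] = 7.0
    after = relative_scores(data)    # A: -1.5, B: -0.5, C: 1.5, D: 0.5

    # ...shifts every teacher's score, not just C's, because the reference
    # average itself moved.

However the real model is specified, the structural point stands: a score defined relative to an aggregate of all scores cannot be corrected in isolation, which is exactly why HISD conceded that a re-analysis “has the potential to change all other teachers’ reports.”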

For these reasons, the court denied the district's motion for summary judgment on the teachers' procedural due process claim.

 

http://lawprofessors.typepad.com/education_law/2017/08/federal-court-finds-texas-teacher-evaluation-system-is-a-house-of-cards-issuing-ruling-that-helps-it.html
