This oft-cited New York Times article points out the small impact that new educator evaluation systems have made on teacher effectiveness ratings:
In Florida, 97 percent of teachers were deemed effective or highly effective in the most recent evaluations. In Tennessee, 98 percent of teachers were judged to be “at expectations.”
If new evaluation systems do not find a wide distribution of effectiveness, districts are unable to identify the teachers needing support or the educators who should not be working with children at all.
One explanation given for the high percentage of teachers rated effective in these states is that “principals . . . can be loath to give teachers low marks.” If we are going to improve instruction here in Massachusetts, we’ve got to move beyond this culture of nice and provide teachers with honest, accurate appraisals of the quality of their instruction.
The first step in doing so is aided by the specific nature of the descriptors in the Massachusetts Teacher Performance Rubric. Even here, though, there is room for interpretation, and districts benefit from learning walk-throughs, or instructional rounds, that lead groups of teachers and administrators to reach consensus about the meaning of certain terms in the rubric, such as “rigor.” CES leads work in this area through its Advanced Training for Administrators.