Craig Waterman, ESE’s expert on measuring student growth, will hold Office Hours at CES in Northampton from 10am to 12pm on Wednesday, May 6. He’ll share information about the new flexibility for determining Student Impact Ratings, and answer all kinds of questions related to District-Determined Measures and Educator Evaluation.
Craig’s 2014 Office Hours were a huge success, giving ESE a human face and giving educators from all over Western Mass invaluable information. Please join us!
I asked a group of curriculum directors what is most challenging about measuring student growth, and one responded, “We don’t know what aspects of DDMs we should be addressing.”
I infer from this comment that, while districts have invested the time and effort to implement DDMs, they have not found the time to reflect on the impact of the process.
To support the reflection process, we’re producing a brief presentation that walks people through the DDM process and the opportunities for training and growth at each juncture. For example, after the first administration, teachers who score student work together can compare definitions of what it takes to meet the standard (curriculum, assessment). While analyzing the results of this first administration, teachers may identify areas of weakness and revise instructional strategies to provide the support students need (instruction).
It’s our hope that this presentation will help district teams identify which aspects of measuring student growth to address in order to improve teacher practice and student growth.
Here’s a terrific, brief video about the value of using student feedback (surveys) as another tool for improving instruction.
It’s only mid-year, and middle school music teacher Ben (not his real name) learns that three-quarters of his students have surpassed his year-end DDM goals for them. In just half a year they have made the progress with sight-reading he thought would take the whole year. This is the kind of improvement every educator and evaluator wants to see, and it is the result of a focus on curriculum and effective instruction.
The DDM process has led to improved instruction and student outcomes in certain classrooms. But DDMs aren’t the only aspect of the educator evaluation system designed to make a difference in teaching and learning. The system provides many mechanisms that can move the proverbial needle, including:
- Goal-setting and tracking
- Classroom observations and related conversations
- Gathering artifacts
- Measuring student growth
- Student feedback (surveys)
Question: Which one of these components is moving the needle in your district?
As teachers begin gathering data from performance assessments and pre-tests that are components of growth measures (DDMs), district leaders are asking timely questions about managing the new data.
It’s a challenge because the data come in many shapes and sizes: from rubric-scored performances to tallies of laps run around the gym to zero-to-one hundred test scores.
Craig Waterman, ESE’s project lead for growth measures, suggests narrowing what districts collect to the percentage of students with high, moderate, or low growth in any given class. Organizing the data in this way would allow educators to look for answers to questions like:
- Which teachers contributed to particularly high student growth?
- Which grade level or department contributed to high student growth?
- Which instructional practices in the high growth classrooms and grade levels ought to be more widely shared?
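For districts that track this data in a spreadsheet or database, the roll-up Craig describes is simple to automate. Here is a minimal sketch, assuming each student’s DDM result has already been translated into a high/moderate/low growth label (the function name, labels, and sample data are illustrative, not part of ESE’s guidance):

```python
from collections import Counter

def growth_summary(growth_labels):
    """Summarize a class's DDM results as the percentage of students
    showing high, moderate, or low growth.

    growth_labels: one "high"/"moderate"/"low" label per student,
    however the underlying measure was scored (rubric, tally, test).
    """
    counts = Counter(growth_labels)
    total = len(growth_labels)
    return {level: round(100 * counts[level] / total, 1)
            for level in ("high", "moderate", "low")}

# A hypothetical class of eight students
labels = ["high", "high", "moderate", "low",
          "high", "moderate", "high", "moderate"]
print(growth_summary(labels))  # {'high': 50.0, 'moderate': 37.5, 'low': 12.5}
```

Because every class is reduced to the same three percentages, results from rubric-scored performances, gym-lap tallies, and hundred-point tests can sit side by side in one report.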
How is your district planning to manage DDM data? How do you intend to use the data?