Here’s a terrific, brief video about the value of using student feedback, or surveys, as another tool for improving instruction.
It’s only mid-year, and middle school music teacher Ben (not his real name) learns that three-quarters of his students have surpassed his year-end DDM (District-Determined Measure) goals for them. In just half a year they have made the progress with sight-reading he thought would take the whole year. This is the kind of improvement every educator and evaluator wants to see, and it is the result of a focus on curriculum and effective instruction.
The DDM process has led to improved instruction and student outcomes in certain classrooms. But DDMs aren’t the only aspect of the educator evaluation system designed to make a difference in teaching and learning. The system provides many mechanisms that can move the proverbial needle, including:
- Goal-setting and tracking
- Classroom observations and related conversations
- Gathering artifacts
- Measuring student growth
- Student feedback (surveys)
Question: Which one of these components is moving the needle in your district?
As teachers begin gathering data from performance assessments and pre-tests that are components of growth measures (DDMs), district leaders are asking timely questions about managing the new data.
It’s a challenge because the data come in many shapes and sizes: from rubric-scored performances, to tallies of laps run around the gym, to zero-to-one-hundred test scores.
Craig Waterman, the ESE project lead for growth measures, suggests narrowing what districts collect to the percentage of students with high, moderate, or low growth in any given class. Organizing the data in this way would allow educators to look for answers to questions like:
- Which teachers contributed to particularly high student growth?
- Which grade level or department contributed to high student growth?
- Which instructional practices in the high growth classrooms and grade levels ought to be more widely shared?
How is your district planning to manage DDM data? How do you intend to use the data?
Looking for creative and inspiring measures of growth in art and design? Check out the Burlington Public Schools website.
Included are DDMs for every grade level as well as tools that measure Creativity, Habits of Mind, and Elements of Art. The rubric in the Creativity DDM measures skills like originality, risk-taking, and flexibility. It’s well worth a look! (Thanks to Gwynne Morrissey for locating this treasure!)
Do you have any resources to recommend?
The similarities between student learning goals and measures of student growth (DDMs) have led to confusion in some districts, and to an opportunity for streamlining in others.
Before exploring the pros and cons of using the same set of data to assess progress toward student learning goals and student growth, it’s important to review the place of both in the educator evaluation system. A teacher’s progress with student learning goals contributes to Performance Rating (Exemplary, Proficient, Needs Improvement, Unsatisfactory), while measures of student growth are used to arrive at Impact Rating (High, Moderate, or Low).
Pros: By collecting a single data set that both tracks progress on student learning goals and measures student growth (a DDM), teachers should be able to decrease the time allocated to assessment and the time needed to score student work.
Cons: The objectives of the student learning goal may be very different from those of the DDM. The first may be short-term, whereas a DDM ought to be long-term, usually a school year or semester. And the grain size may be different; the student learning goal can address a narrower set of learning standards than a DDM. Finally, there is the possibility that one set of data might ‘count against’ a teacher twice. If that single set of data is disappointing, it can drag down both a teacher’s Performance Rating and Impact Rating.
What do you think? How is your district approaching this issue?