DESE’s collection of implementation briefs grows with the release of guidance for measuring the growth of English Language Learners (ELL Implementation Brief) and students with identified needs (Special Education Implementation Brief).
Waterman Gives DDMs a Human Face
Craig Waterman brought a great deal of expertise to Monday’s Office Hours as well as a healthy dose of humor and flexibility. Here are some impressions of the event contributed by Louise Law (Frontier Regional Schools), Gwynne Morrissey (CES), and Diana Roy (Ludlow Public Schools):
- DDMs are still a work in progress across the state, and DESE wants to see what districts have in place by June. Districts will have flexibility to revise those DDMs as they see fit.
- Teachers shouldn’t be afraid of a few students earning a “low” rating on a DDM. A low impact rating for a teacher, however, means that many students showed low growth across multiple measures and multiple years, which is, in fact, worth having a conversation about.
- There are creative ways to measure student growth – like using holistic growth rubrics.
- DDMs have to give all students an equal chance to demonstrate growth, which has particular implications for special education students and English Language Learners.
DDM Road Show to Visit Northampton
Craig Waterman, DESE’s expert on measuring student growth, will hold Office Hours at CES in Northampton from 2 to 4 p.m. on April 28. He’ll answer all your questions about District-Determined Measures. If you’re interested, please register by April 23 with Jill Robinson: jrobinson@collaborative.org.
How Do Specialized Personnel Measure Student Growth?
Sophia, a speech and language pathologist, wrote seeking help creating a measure of student growth. I forwarded her DESE’s latest implementation brief (the SISP Implementation Brief), designed to support specialized instructional support personnel in building DDMs.
When Sophia responded, she questioned how she could follow the Implementation Brief’s advice to conduct a peer review or administer checklists to measure the quality of collaboration. She wrote, “When would [I] have the time to collect the data outside the workday? [I am] not given time in the schedule for this.” Note: Sophia has a caseload of 39 students.
As an alternative, I sent Sophia this plan to build a DDM based on assessments and data she is already collecting.
1) Assemble a group of students working on the same sort of challenge, say, articulation. Use your most populous service area to create as large a cohort as possible.
2) Identify a screening tool that measures how these students are functioning (preferably one you are already using).
3) Select a time period (one school year, a half-year, or whatever best aligns with your work). Administer the screen at the beginning and end of that time period (as you are probably already doing).
4) Determine one year’s expected growth for a student as measured by this screen (you may already define this in the IEP). Students who achieve that expected growth show moderate growth, those who achieve significantly less show low growth, and those who achieve significantly more show high growth.
Even though you already expect different amounts of growth from each of your students (based on the nature of their identified disabilities), track growth for each and label it low, moderate, or high based on your individual expectations for them. Essentially, you’d be taking a moment at the end of the year to step back from your work, look at a cohort of your students, and make a general statement about their rates of growth, as the sketch below illustrates. Sticking with data you already gather helps avoid an undue extra burden.
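To make step 4 concrete, here is a minimal sketch, in Python, of that year-end tally. The field names (pre_score, post_score, expected_growth) and the 25% tolerance band are my assumptions for illustration, not anything DESE prescribes; the only idea being modeled is that each student’s actual growth is compared against that student’s own expected growth.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class StudentRecord:
    name: str
    pre_score: float        # screen score at the start of the period
    post_score: float       # screen score at the end of the period
    expected_growth: float  # one year's expected growth, e.g. from the IEP

def rate_growth(record: StudentRecord, tolerance: float = 0.25) -> str:
    """Label a student's growth low/moderate/high relative to that
    student's own expected growth, within a tolerance band (assumed 25%)."""
    actual = record.post_score - record.pre_score
    low_cutoff = record.expected_growth * (1 - tolerance)
    high_cutoff = record.expected_growth * (1 + tolerance)
    if actual < low_cutoff:
        return "low"
    if actual > high_cutoff:
        return "high"
    return "moderate"

# Step back at the end of the year and summarize the cohort
# (hypothetical students and scores, for illustration only).
cohort = [
    StudentRecord("A", pre_score=40, post_score=52, expected_growth=10),
    StudentRecord("B", pre_score=35, post_score=38, expected_growth=12),
    StudentRecord("C", pre_score=50, post_score=61, expected_growth=10),
]

summary = Counter(rate_growth(s) for s in cohort)
print(summary)  # Counter({'moderate': 2, 'low': 1})
```

Because each rating is anchored to the individual student’s expectation, a cohort of students with very different identified needs can still be summarized as a single statement about rates of growth.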
Are DDMs a Potential Minefield?
Justin, a middle school math teacher, shared his concerns about DDMs with me this week. I asked why he was supporting his district’s purchase of an off-the-shelf product to measure student growth.
Me: Why not create a DDM to measure what you care about, like the math practice standards from the Common Core?
Justin: I worry about the consequences of creating a DDM without a high level of validity and reliability.
Me: If your students’ growth ends up being low due to a flawed measure, won’t you be able to explain that to your evaluator?
Justin: Yes, with my current evaluator, but what if that person goes elsewhere? How can I be sure that his replacement will be equally understanding and trustworthy? And if I am given a low impact rating, might it become difficult to get a job in another district?
And therein lies the minefield. Yes, the formal consequences of a low impact rating are modest (moving from a two-year self-directed educator plan to a one-year plan), but what hard-working, caring teacher wants to be unfairly tagged with a low rating? And the long-range impact on a teacher’s ability to find a new job may be immense.
What’s to be done? In my next post, I’ll propose some of the long-range benefits to be gained from implementing DDMs.