Educator Evaluation Beacon


The Second Key to Success: Providing Guidance

February 18, 2013 by Damon Douglas

In an earlier post I mentioned how important administrative guidance is to successful implementation of the new educator evaluation system. This guidance comes in many forms: limiting the number of elements educators and evaluators focus on, setting clear expectations for the submission of artifacts, and coordinating efforts to draft team goals whenever possible.

Both Hadley and Amherst Public Schools encourage teachers to focus their efforts on a specific subset of elements within the performance rubric. Amherst selected ten elements (Amherst Priority Elements–Teachers), while Hadley identified nine (HadleyTeacher Rubric- 9 Elements v2).

As I point out in the Q&A section of this blog, the model system requires teachers to submit artifacts that demonstrate their progress on all sixteen indicators in the performance rubric. To streamline this process, administrators lead staff in identifying existing artifacts (such as model curriculum units) that address many indicators. Setting limits on the number and size of artifacts lightens workloads for teachers and principals while also increasing the odds that useful information will emerge from collaborative review of the artifacts.

Similarly, the creation of team goals for either the professional practice or the student learning goal can streamline the process for both the educator and evaluator. A Granby public school principal reports that this approach leads to greater buy-in to the goal by teachers and, one would think, a greater chance of success in achieving the goal as well.

Let me know what’s working in your district.

Filed Under: Evaluation, Examples Tagged With: artifacts, Educator evaluation system, elements, indicators, professional practice goal, student learning goal

Roadside Assistance: Answers to Your Questions

December 10, 2012 by Damon Douglas

Here we share answers to questions that have been raised during our Educator Evaluation System trainings. DESE's project leads, Claire Abbott and Samantha Warburton, have been extremely responsive to our questions; they usually provide answers within 24 hours, and sometimes within an hour!

What Does Proficiency Look Like?

Q:  What does proficient performance “look like”?  What exactly would you expect a teacher to be doing?  To what extent will DESE define what things might “look like”?

A:  DESE will provide no additional guidance around using the rubrics to define proficiency. That work will have to be done between evaluator and educator.

Is there a Limit to the Percentage of Teachers Rated Exemplary?

Q:  Is there a percentage-based limit on the number of teachers who can be rated Exemplary (or Needs Improvement, for that matter)? Is there any truth to this?

A:  No—there is no percentage-based limit associated with any performance category.  The regulations place no numerical targets or requirements on the number of educators in each rating category.

Performance Ratings at the Mid-cycle Conference?

Q: A local superintendent asks, “If the mid-cycle conference between evaluator and educator is to be truly formative, no performance ratings should be conferred. Are evaluators required to arrive at ratings at the mid-cycle meeting?”
A: Your local superintendent asks an important question. The mid-cycle conference is indeed “formative”: it is intended to be a point where evaluators and educators touch base, check progress on goals, and make any mid-course adjustments to a plan if necessary. That said, there’s an important technical distinction between the formative assessment and the formative evaluation that relates to ratings:
-A formative assessment occurs midway through the cycle for educators on plans that are one year or less in length. No ratings are required for a formative assessment.
-A formative evaluation occurs midway through the cycle for educators on two-year plans, so presumably this would take place in May or June. Ratings are required for a formative evaluation, but they default to the educator’s prior Summative Rating unless there is evidence suggesting a significant change in practice by the educator (in which case an evaluator could issue new performance ratings and change the educator’s plan). The default rating is designed to alleviate the burden on evaluators of having to derive a rating for every educator on a yearly basis. (The reason behind the rating requirement for formative evaluations is that in order to meet the federal RTTT parameters, states had to commit to yearly educator evaluations.) Let me know if you have any more questions about this distinction between the formative assessment and formative evaluation.
-Claire Abbott, DESE

How do IPDPs Mesh with the new Evaluation System?

Q:  How does the Individual Professional Development Plan (IPDP) process relate to the new educator evaluation system?

A:  Regarding IPDPs, the revised licensure regulations allow educators to use activities in their educator plans to contribute to their IPDPs, and vice versa. Educators and evaluators are encouraged to align the two when possible. That said, an evaluation in no way affects an educator’s ability to renew their license.

How much Evidence must Educators Submit?

Q:  Will teachers need to collect evidence about their performance for each of the 33 elements included in the teacher evaluation rubric?
A: There is no requirement for the number of Elements on which educators OR evaluators are required to gather evidence. We suggest that educators and evaluators share the responsibility of gathering evidence and ensure that there is some evidence for each Indicator, although the preponderance of evidence is likely to be in the areas of most focus. Finally, evaluators will need to have gathered, or have access to, sufficient evidence to meaningfully inform their professional judgment in determining a rating for each Standard.
-Samantha Warburton, DESE
Q:  How much evidence is needed for a single element?
A:  Module 5 has information about this.  In general, evidence should be provided for each indicator (not for each element), and a product like a model curriculum unit may be sufficient evidence to demonstrate proficiency in multiple indicators.

Filed Under: Evaluation Tagged With: elements, exemplary, FAQ, indicators, IPDP, proficient, standards

Welcome to the Educator Evaluation Blog

Learning to use the Educator Evaluation System is a journey. We can help ease your travels by providing answers to your questions, alerts about changes, and tips about how to navigate the process.

Categories

  • Counselors (1)
  • Evaluation (71)
    • District Determined Measures (37)
    • Staff feedback (5)
    • Student Surveys (9)
    • teacher rubric (2)
  • Events (4)
  • Examples (8)
  • Health (1)
  • History (1)
  • Physical Education (1)
  • School nurses (2)
  • Science (1)
  • SISP (2)
  • Social Studies (1)
  • Student feedback (1)

Copyright 2023 Collaborative for Educational Services · All Rights Reserved