Educator Evaluation Beacon


Meet the Educator Evaluation Beacon!

February 6, 2013 by Damon Douglas

Learning to use the Educator Evaluation System is a journey.

We can help ease your travels by providing:

Roadside Assistance: We answer your questions about the Educator Evaluation System and post frequently asked questions.

Travel Alerts: We keep you up-to-date about new DESE resources and expectations.

Itinerary Planning: We post tips about how to streamline the process and pitfalls to avoid. We let you know about the evaluation system trainings we’re offering.

Travel Stories: We share fruitful experiences, practices, and ideas for implementing the Evaluation System.

Filed Under: Evaluation Tagged With: DESE, Educator evaluation system, ESE, evaluation system trainings, implementing

Travel Stories: How are folks making Ed Eval work?

December 20, 2012 by Damon Douglas

To make the new system of evaluating educators work, that is, to improve instructional practices in classrooms, three ingredients are needed. These same ingredients (time, trust, and guidance) are required elements in most change initiatives. Let's look at how they apply to implementing Massachusetts' new system of educator evaluation.

Time is needed to learn anything, and the complexity of this new system demands a lot of attention. Local districts have chosen to provide extra time to teachers in several ways. Pioneer Valley Regional Schools, with support from their teachers' union, extended the state's recommended due dates for submitting specific forms. The extra time made it more likely that teachers would complete a realistic assessment of their strengths and also created opportunities for teachers to align their own SMART goals with team, school, and district goals. Another district allotted a portion of common planning time for drafting team goals together; a third devoted its extended days to crafting educator plans. Providing extra time increases the likelihood that implementation will be thoughtful and effective, and, perhaps more importantly, reduces anxiety.

In all our evaluation system trainings for administrators, we address the time issue head-on, providing models for how admin teams can build work time into their action plans.

Coming soon: The importance of trust and guidance

 

Filed Under: Evaluation Tagged With: Educator evaluation system, evaluation system trainings, instruction, trust

SMART Goal Examples

December 18, 2012 by Damon Douglas

View examples of high-quality SMART goals we’ve collected in the course of our trainings here: SMART Goal Examples 2

Please feel free to contribute to the list by replying below.

Many thanks to Paul Bocko and Beth Graham for collecting these examples.

Filed Under: Evaluation, Examples Tagged With: Educator evaluation system, professional practice goal, SMART goals, student learning goal

Travel Alert: New Measures Take Effect in Fall 2013

December 15, 2012 by Damon Douglas

If you’re following the roll-out of the Massachusetts Educator Evaluation System, you already know that, in its current form, the system requires two forms of evidence: data gathered through observation and educator-submitted artifacts. In 2013-14, the Evaluation System requires that districts begin gathering student survey and student growth data. (Note: the use of these data in educator evaluation will require a return to the bargaining table.)

Student Growth Data

State regulations require the use of two measures of student growth for all teachers. Educators currently teaching an MCAS- or ACCESS-measured course are required to use one of those state assessments. Meeting this expectation will be especially challenging in untested subjects like art, music, and physical education; CES will lead a region-wide approach to building measures of student growth in these subjects. More guidance from the state about these new expectations can be found in Part VII of the state’s model system.

Student Surveys

These surveys may be electronically administered and apply to students in grades K-12. The regulations can be viewed here. Little guidance is available about how the survey results will be incorporated into the evaluation process; Kim Marshall proposes a reasonable strategy for doing so in a recent Educational Leadership article. (Note: full article access requires ASCD membership.)

 

Filed Under: Evaluation Tagged With: artifacts, Educational Leadership, evidence, Kim Marshall, student growth data, student surveys

Roadside Assistance: Answers to Your Questions

December 10, 2012 by Damon Douglas

Here we share answers to questions that have been raised during our Educator Evaluation System trainings. DESE’s project leads, Claire Abbott and Samantha Warburton, have been extremely responsive to our questions; they usually provide answers within 24 hours, and sometimes within an hour!

What Does Proficiency Look Like?

Q: What does proficient performance “look like”? What exactly would you expect a teacher to be doing? To what extent will DESE define what things might “look like”?

A:  DESE will provide no additional guidance around using the rubrics to define proficiency. That work will have to be done between evaluator and educator.

Is there a Limit to the Percentage of Teachers Rated Exemplary?

Q: Is there a percentage-based limit on the number of teachers that can be rated exemplary (or needs improvement, for that matter)? Is there any truth to this?

A: No. There is no percentage-based limit associated with any performance category. The regulations place no numerical targets or requirements on the number of educators in each rating category.

Performance Ratings at the Mid-cycle Conference?

Q: A local superintendent asks, “If the mid-cycle conference between evaluator and educator is to be truly formative, no performance ratings should be conferred. Are evaluators required to arrive at ratings at the mid-cycle meeting?”

A: Your local superintendent asks an important question. Yes, the mid-cycle conference is “formative” and intended to be a point where evaluators and educators touch base, check progress on goals, and make any mid-course adjustments to a plan if necessary. That said, there’s an important technical distinction between the formative assessment and the formative evaluation that relates to ratings:
-A formative assessment occurs mid-way through the cycle for educators on plans that are one year or less in length. No ratings are required for a formative assessment.
-A formative evaluation occurs mid-way through the cycle for educators on 2-year plans, so presumably, this would take place in May or June. Ratings are required for a formative evaluation, but they default to the educator’s prior Summative Rating unless there is evidence suggesting a significant change in practice by the educator (in which case an evaluator could actually issue new performance ratings and change the educator’s plan). The default rating is designed to alleviate the burden on evaluators of having to derive a rating for every educator on a yearly basis. (The reason behind the rating requirement for formative evaluations is that in order to meet the federal RTTT parameters, states had to commit to yearly educator evaluations.) Let me know if you have any more questions about this distinction between the formative assessment and formative evaluation.
-Claire Abbott, DESE

How do IPDPs Mesh with the new Evaluation System?

Q: How does the Individual Professional Development Plan (IPDP) process relate to the new educator evaluation system?

A: Regarding IPDPs, the revised licensure regulations allow educators to use activities in their educator plans to contribute to their IPDPs, and vice versa. Educators and evaluators are encouraged to align the two when possible. That said, an evaluation in no way affects an educator’s ability to renew a license.

How much Evidence must Educators Submit?

Q: Will teachers need to collect evidence about their performance for each of the 33 elements included in the teacher evaluation rubric?

A: There is no requirement for the number of Elements on which educators OR evaluators are required to gather evidence. We suggest that educators and evaluators share the responsibility of gathering evidence and ensure that there is some evidence for each Indicator, although the preponderance of evidence is likely to be in the areas of most focus. Finally, evaluators will need to have gathered, or have access to, sufficient evidence to meaningfully inform their professional judgment in determining a rating for each Standard.

-Samantha Warburton, DESE
Q: How much evidence is needed for a single element?

A: Module 5 has information about this. In general, evidence should be provided for each indicator (not for each element), and a product like a model curriculum unit may be sufficient evidence to demonstrate proficiency in multiple indicators.

Filed Under: Evaluation Tagged With: elements, exemplary, FAQ, indicators, IPDP, proficient, standards




Copyright 2023 Collaborative for Educational Services · All Rights Reserved