In Monday’s post, I went through the DC teacher evaluation system, IMPACT, which weights value-added improvements in student scores at 50 percent of a teacher’s evaluation, with the remaining half covering 22 areas (grouped into 9 categories). Five classroom observations are conducted: three by a building administrator and two by an outside "master evaluator" who is a subject-matter expert and does not report to the building administrator.
Teachers in tested subjects are evaluated by standards different from those used for paraprofessionals, counselors, special education teachers and others in the system, with teachers in non-tested subjects having only 10 percent of their evaluation based on student scores.
IMPACT also was developed without the buy-in of teachers unions. How does IMPACT stack up against the just-developed Massachusetts teacher evaluation system? For Massachusetts’ Education Secretary Paul Reville, the Commonwealth has developed a “bold, pioneering teacher-evaluation system,” but is it?
The fact that the Massachusetts Teachers Association is supportive of the state’s new teacher evaluation system is not in and of itself a criticism—but when it is hard to get any grasp whatsoever of the percentage of the evaluation to be based on improvements in students’ academic achievement, well, that should give us pause. And the fact that the National Education Association, the MTA’s parent organization, has maintained its opposition to the use of student standardized tests for the purpose of evaluating teachers should at least raise one's suspicions.
While the new rules are positive, they leave it up to individual school committees to decide just how much emphasis will be placed on the tests when it comes to judging a teacher's performance. This serves to make teacher evaluations an open question from district to district. We suspect there will be some foot-dragging, especially where teachers' unions remain a powerful force in electing school-board members and affecting policy…
We would hope that all school districts would adopt the new rules in full, giving MCAS test scores the major weight they deserve in the evaluation process.
The T&G goes further into the weeds of the regulations, and finds that the “new rules for educators seem made to disorder”:
The state’s new regulations for the evaluation of educators… establish that MCAS test results will play some role in teacher evaluations; they state that student and teacher feedback are to be included in the evaluation process, eventually; and they allow for the inclusion of existing measures of progress at individual schools or in districts.
But those points don’t arrive until three-quarters of the way through a 20-page thicket of definitions, standards and indicators, most of which are painfully obvious, vaguely phrased, repetitive, or offer little specific guidance to educators. And the regulations never state exactly how much weight MCAS will have, exactly how teacher and student feedback will be factored into evaluations, and who is to decide whether a district or school’s existing evaluation process is good enough.
In fact, the regulations lay out 16 “indicators” for teacher standards in the areas of Curriculum and Planning, Teaching All Students, Family and Community Engagement, and Professional Culture. There are 20 such “indicators” for administrators, reaching into every conceivable area of day-to-day school management.
From their performance on these many indicators, educators are to be classified and graded, assigned one of four overall ratings — Exemplary, Proficient, Needs Improvement, or Unsatisfactory — and then placed on an action plan, or, in the case of competent educators, largely left alone.
It isn’t clear to us how any of this will help districts rid themselves of bad teachers any more quickly, ensure such teachers aren’t passed around within or between systems, or, on the positive side, facilitate the recruitment, promotion and rewarding of excellent teachers.
We were hoping for a far more succinct, specific and clear set of expectations that would promote accountability and excellence. Instead, by virtue of their length, complexity and open-ended language, these new educator evaluation regulations strike us as an excellent way to create more work and worry for administrators and teachers, while ensuring plenty of new grist for the wheels of bureaucracy that revolve at the state Department of Education.
If it were up to us, we’d declare these new regulations “unsatisfactory,” take an eraser to the whole blackboard, and start over.
I think the T&G nailed this. Fact: The 1993 Massachusetts Education Reform Act clearly required evaluations, meaning they should have been occurring for the past 18 years. Fact: The great majority of school districts have never done this in a consistent way. The Boston Foundation released a report last year saying what we all know: in Boston, regular teacher evaluations just don’t happen. Reports from the now nearly defunct office of educational accountability show that Boston was not an outlier in failing to perform regular evaluations. That's unfair to teachers, and one hope I have for the new regulations is that they will at least kick-start the process of giving teachers feedback on their performance. Not performing evaluations is a long-term dereliction of public responsibility that lands squarely in the laps of the department of education, superintendents, principals, and the school committees.
A few additional takeaways:
- These hazy regulations will have all the teeth of a nonagenarian’s bite. Furthermore, the tie-in to MCAS data will not occur until three years from now, which matters because the state’s department of education has already promised the federal government to do away with the MCAS by 2014, the year by which it will adopt national tests (still under development). The new national tests are to be based on the new, weaker national standards. Important for researchers: the changeover to new tests will break the continuity of data we have had via the MCAS. No more apples-to-apples comparisons of how we are doing versus previous years.
- The Massachusetts Tests for Educator Licensure (MTEL) will prove more important than these evaluations in raising teacher quality over the long term. (Also, in a state that frequently has to battle to keep the MTEL in place, these evaluations will not add up to very much.)
- I think the state is starting from the wrong place. Evaluations should first be applied to managers—to principals, central office personnel, and superintendents—before the system is applied to teachers. After all, how many superintendents of large urban districts have shown steady improvement in student achievement? Moreover, after years of the 1993 law’s call to evaluate teachers going unfulfilled, why should teachers trust that these new evaluations will be done consistently and with objectivity? However tough the IMPACT evaluation system may be (with 5 percent of the DC teacher corps losing their jobs), at least that system provides a framework for consistent and objective use.
Expect the School Committees and superintendents to vary greatly in how they use the new Massachusetts evaluation system. Expect the same kinds of professional development to continue, without aggressive targeting of resources to make up for teacher weaknesses. Expect little impact on student outcomes.
The author is solely responsible for the content.