Diving into the Bowels of Junk Science
Let’s take a dive into the bowels of the junk science used to evaluate teachers in New York State.
Fifteen to twenty percent of a New York State teacher’s evaluation is based on a mutually agreed-upon local assessment chosen from a list of ‘non-local’ assessment tools pre-approved by the state. My district chose the STAR Reading and Math assessments from Renaissance Learning. The teachers in my district had 24 hours to decide on this plan and voted overwhelmingly for it, on blind faith that it was the best we could get. Now that we are at the mid-year point and our students are taking the mid-year tests used to evaluate us, many teachers still have no clue how these STAR assessments will be used to determine their fate.
I was able to obtain a copy of the STAR user manual, and after reading it, I’ve determined that it should be thrown onto the pile of other teacher evaluation tools based on junk science. Let me present the evidence.
My first clue that something was amiss came in September, when my class took their opening assessments. The results were all over the place: my weakest readers ranked near the top of the class, and some students’ reading levels came back far too high or far too low. The data just didn’t match what I saw in my classroom. Watching the students take the test, I noticed that the multiple-choice questions had only three choices, which increases the odds of guessing a correct answer.
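The arithmetic behind that complaint is easy to check. Here is a small sketch comparing pure guessing on three-choice items with a conventional four-choice format (the five-question scenario is my own illustration, not anything from the STAR manual):

```python
from math import comb

def p_at_least(k, n, p):
    """Probability of at least k correct answers out of n pure guesses,
    where each guess is right with probability p (binomial tail sum)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# Chance of guessing at least 3 of 5 questions right:
print(round(p_at_least(3, 5, 1/3), 3))  # three choices -> 0.21
print(round(p_at_least(3, 5, 1/4), 3))  # four choices  -> 0.104
```

Under these assumptions, a student blindly guessing gets a majority of questions right about twice as often on a three-choice test as on a four-choice one.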
I questioned the results, and at the time my district administrators really had no answers. I also questioned whether or not a 5th-grade student who scored at a 10th-grade reading level would show growth if I didn’t teach 10th-grade reading strategies. We all took a wait-and-see approach: let’s see what happens when they take the mid-year tests.
So that’s what we did, and that’s where we are today. My class took the mid-year tests; most showed growth and a few didn’t. My weakest reader (he receives intervention services) is now ranked even higher than he was in September (3rd in the class) and, according to STAR, is reading three grades above his original level. Amazing, huh?
Still concerned, I now asked: how much growth will prove I’m effective as a teacher? Where is the finish line in this race to nowhere? For that information, we need to go to my district’s APPR plan.
Did you notice that an SGP (Student Growth Percentile) score is generated for each student and compared with those of their peers, whoever they may be? In order for me to be deemed effective, my students must on average beat 40% of their peers, who are also showing growth. I guess that’s where the race aspect of education deform comes in.
I couldn’t believe it, so I went to the STAR manual and found this paragraph under the description of SGP.
Again, more evidence that it’s a race. Even though this student grew more than 35% of his peers, it’s not enough in this race. Regardless of his individual circumstances: no mention of a student with special needs, an ELL, or any other special circumstance. So what’s the magic number that shows a teacher is highly effective?
STAR sets 40 as the default benchmark for growth. That means a student must score in the top 60% of his peers in order to be counted as showing growth. Keep in mind that the entire group has grown, yet only the top 60% of those who have grown are credited with demonstrating growth! Seems absurd, doesn’t it?
Looking back at my district’s APPR plan, the average SGP of a teacher’s students must be over 60 for that teacher to be rated highly effective. That means the teacher’s students must grow at a rate that beats the growth of 60% of their peers. More racing here!
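To see what those cut-offs do in practice, here is a sketch of the averaging rule as I read it in my district’s plan. The student SGP values are invented for illustration, and I’ve collapsed the rating scale to just the two thresholds discussed above (a real APPR plan has more bands than this):

```python
def rate_teacher(sgps):
    """Average a class's Student Growth Percentiles and apply the
    40 (effective) and 60 (highly effective) cut-offs described above."""
    avg = sum(sgps) / len(sgps)
    if avg > 60:
        return avg, "highly effective"
    if avg > 40:
        return avg, "effective"
    return avg, "ineffective"

# A class where every single student out-grew 35-55% of their peers:
class_sgps = [35, 40, 45, 50, 55]
avg, rating = rate_teacher(class_sgps)
print(avg, rating)  # 45.0 effective
```

Notice that in this example every student grew relative to peers, yet the class average of 45 only just clears the bar, and a class averaging 55 would still fall short of “highly effective.”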
Where’s the science here? Where is the evidence that 60 or 40 should be the benchmarks? Are 5th-grade teachers now expected to teach 11th-grade literature to students who initially test high?
Looking at my students, I am baffled how a 5th grader could have results placing her at a grade equivalent of 8.5 in September, and then the same test has her dropping to a grade equivalent of 7.0 a mere three months later. Did I cause her to lose ground? If so, how does that explain a similar student in my class who went up by the same amount, considering they received the same instruction?
I’ve come to the conclusion that the more we look into the bowels of the junk science now being used to evaluate teachers under the guise of education reform, the more we see nothing other than BS.
After all, isn’t this reform movement nothing more than bull?