The recent run of posts on the "Math B" exam in New York on JD2718 (one starting point) gives some good perspective on how this stuff often looks to a perceptive teacher on the ground. A lot of the time the curriculum you're given sucks, the tests suck, and/or they don't actually align with each other. Other times, you get a good test. Back in the day, I thought the New Standards Reference Exam was pretty good, but I think it was the SAT 9 that the school department started giving out in the "off years" from state testing. Man, that was awful. They should have just handed out crossword puzzles.
Data-minded reformers who are halfway paying attention know this by now, and want to spend a lot of money on better tests, standards, and data systems for tracking the results, so that they can layer performance pay, etc. on top of all that, and finally, as a result, student achievement will go up. I don't see where the increased capacity to write and score better tests is going to come from, though. This is an honest question. Why do we think we can write more and better tests? If the terms of the argument include "if we simply focus on something and spend more money on it, it will get better," then can't we focus on something more direct than tests? Why can we improve testing but not teacher preservice education, for example?
Thanks for picking this up.
In New York State, the State Education Department used to employ a Math Bureau (maybe just 3 or 4 people) who took care of this. Professionals.
Now we have an associate (or two?) who do not write the exams, and do not seem conversant with the content of the courses we teach.
Test writing is broken into about five to eight small pieces, each handled by a team of teachers brought in just for that task, and just once. No one with both content and testing knowledge accompanies the instrument from beginning to end.
And they are covered. They can say that teachers wrote, edited, amended, etc. And no one is responsible for the final product.