Here's a story about a school I worked in many years ago. Their test scores were very high. One year, all the 3rd graders got a perfect score in reading, but the numbers for writing were weaker. In the following years, the district used this information to launch a required school-wide focus on writing. We gave the kids the message that they were weak in writing and needed to improve.

We also paid for a few released test items ($7 for one student's response to one item) to analyze these weaknesses. We found that the students had all scored proficient and advanced on the actual writing samples. It was the multiple-choice questions, where they had to choose an answer with no context, that cost them points.

What did this tell us about the data we had used to "drive" our instruction for several years? The kids knew how to apply the rules to actual writing; their actual "weakness" was choosing between tricky multiple-choice answers on isolated rules for writing. Should we spend less time teaching them to be writers in order to work on test-taking strategies for multiple choice? Should we drill them on memorizing the rules? What would this do for them, in their lives as writers? What effect would this kind of teaching have on their skills and motivation for real writing?
Tuesday, March 31, 2009
Something For The Next Time Someone Asks You What's Wrong With "Teaching To The Test"
Posted by Tom Hoffman at 9:53 AM
Well, the example doesn't really show what's wrong with teaching to the test -- it shows what's wrong with stupid tests.
A test of writing should look like students writing.
The idea of putting multiple-choice questions on such a test is stupid. I trust teachers and administrators to evaluate writing; I don't at all trust them to accurately break writing down into component skills or rules that can be evaluated in isolation. (The number of fundamental errors contained in everything from the elementary-school definitions of the parts of speech to the style manuals pushed on high school students is staggering.)
So the question is: who made the test in question, and how did it ever get through the state system? Why didn't their heads perk up when they noticed the lack of correlation between the multiple-choice questions and the open-response questions, given that the multiple-choice questions were presumably supposed to evaluate the component skills that go into the higher-order task evaluated in the open response?
But all of this is a question of how to make a good test, not whether to teach to it. My guess is the multiple-choice questions are there because they are relatively cheap to evaluate. But if, in some cases, it turns out that effectively testing the skills we want to ensure we're teaching is prohibitively expensive, then that's a logistical problem rather than a philosophical one.
Still, I'd rather make a compelling argument for effective assessments than argue against assessment altogether.
Good assessments necessarily change teaching in good ways: they make teachers teach real skills and cut out the little bits of faking that can inflate the grades of students who do what they're told but aren't actually learning core skills. Bad tests change teaching in bad ways, encouraging more faking (i.e., more teaching of skills that apply only to the artificial test and have nothing to do with the applicable skills and knowledge).
The problem isn't teaching to the test, it's having the wrong one.
I'm not arguing against assessment altogether.
But the high-stakes tests are often bad (and the lower-stakes diagnostic tests are sometimes even worse), and it isn't like we don't have experience making them, and it isn't like we aren't spending money on them, and it isn't like we don't regard them as the foundation of all other work done in schools. Yet they still often stink.
Lots of things work in theory. We don't need to judge whether or not the tests are good in theory; we can look at the practice. If we can do better, how will we do it?
Why aren't we training an army of psychometricians? Do you want to become a psychometrician? Didn't think so. Nobody does, which is one big reason our tests aren't that great.
Yeah, maybe that's the issue. Who are the psychometricians?
Isn't it at least possible that knowing how to better design tests would also create better teachers?
But I guess really my point is that the accountability push got us lots of standardized tests. It seems to me the smart response is to say: yes, assessment determines teaching. And guess what, your assessments are creating a lot of crappy teaching, so we need to change them.
That seems a more winnable case to me than trying to stop the tests, which seems like a losing battle.
Maybe the real problem is that we've tried to believe that we can push standardized testing on all schools without pushing (at least to a degree) a standard program.
I guess I think we necessarily do push a standard program when we push a standard assessment, because some set of people will necessarily start teaching to the test... which means we should all start to know a whole lot more about how these tests get made...
I don't know. It just all starts to feel like a plan for making sure that collective farming works in the next five year plan. There is no reason it can't work in theory...