Dan DiMaggio's article in the Monthly Review, The Loneliness of the Long-Distance Test Scorer, is good, but more importantly it's a reminder that I never wrote a review of Todd Farley's book Making the Grades: My Misadventures in the Standardized Testing Industry.
Unfortunately, writing decent book reviews takes too much time for this blog, but this is a good read. It is breezy and particularly funny if you've actually done your time doing low-end office temp work yourself and know the array of freaks, losers and weirdos who inhabit that world. I also scored essays as part of Rhode Island's old writing assessment program, and spent a bit of time with data and assessment wonks in state and local government, so I've got direct experience in the milieu, and everything Farley says rings true.
It isn't very sticky in terms of policy rhetoric because the comeback is "and that's why we're spending $350 million to design new computer-scored tests." The one clear takeaway is that human-scored constructed response questions are not inherently better than multiple choice. They just introduce a different, more opaque set of problems.
As it turned out, I read Making the Grades just as one of the first Common Core Standards drafts came out, and it really shaped my response to them. In every case, the Common Core ELA standards were idiosyncratically different from the international standards they were supposedly benchmarked to. In every case, the Common Core version was simpler, narrower, and easier to explain to the addle-headed temps inhabiting Farley's book or to a computer than the international comparisons. Given that the process was driven by the testing companies, I don't think it is complicated or secret enough to bother calling it a conspiracy.