I have to wonder whether it is an intentional strategy to keep public discussion of the Common Core ELA standards off the standards themselves. First everyone was debating whether there would be a required reading list, even though standards documents never include one. Now it is all about the "percentage of fiction vs. non-fiction." That, too, is not determined by the standards.
In particular, if there is an over-emphasis on fiction, I've seen no evidence that it is directly attributable to the current ELA standards, especially at the secondary level. If kids aren't reading enough non-fiction in history class, the idea that the current ELA standards are to blame, and that the new ones will fix it, is a joke.
And the main reason that writing assessments over-emphasize nebulous personal essay topics is not that the testing companies are run by hippies. It is that you can't assess a student's writing if they don't know the answer to the question posed by the prompt. This is not a philosophical problem; it is a practical limit of standardized testing.
1 comment:
Also worth noting: if we're going to have writing assessments scored by humans, there's the issue of who those humans are. How qualified are they, how motivated, how interested in assessment? The insider stories about scorers at the big testing companies should give everyone pause about the quality of those assessments. The people themselves may be fine and dandy, but the accounts of score manipulation and "re-calibrations" raise serious questions. And if you want writing assessments that actually require content-area knowledge, the scorers must have that expertise as well. It can be done (AP, IB), but it's gonna cost more.