Monday, August 20, 2012

The Text Complexity Question Is Not About Me

Susan Ohanian notes that in a new supplement to Common Core Appendix A, the National Governors Association and the Council of Chief State School Officers more or less conclude that all the half-dozen or so popular quantitative measures of text complexity are "close enough for government work," and provide a handy equivalence scale.

In particular, she notes that this seems to give an implicit (or explicit) stamp of approval to the Accelerated Reader software's system of assigning texts to readers based on text complexity. Also, Renaissance Learning (creator of Accelerated Reader) is an Endorsing Partner of the Common Core State Standards.

This is all getting a little too Inside Elementary Literacy for me, but I'm confused about whether Common Core advocates like Kathleen Porter-Magee and Susan Pimentel are as concerned about the assignment of "just-right" texts by computers running corporate software as they are about the soft-hearted hippies teaching balanced literacy.

I would tend to prefer the latter to the former, but overall the whole text complexity question strikes me as a classic false dichotomy crossed with technocratic overreach.

I suspect that most reformers are ok with computers (but not teachers) assigning texts to individual students based on complexity, but that there is probably also a subgroup that thinks a few recommendations in the appendices will keep the software companies in check. If so, they're dreaming. They might be able to get the hippies fired, though.

The thing that gets me worked up is the idea that the Common Core Standards themselves have anything to say on the matter. They clearly don't.


Jason said...

I tend to agree with you on this whole text complexity question (if I understand your point well enough).

1) The tests are grade-level tests, so it's pretty hard to believe that anyone thinks they are going to demonstrate results/success by spending all of their time assigning reading at a lower level.
2) The goal is grade-level texts, and this has obviously always been clear. I mean, what the hell does "grade-level text" mean if not "this is where we think a student should be to maintain pace with what is expected going forward"?
3) It's probably true that some methods are better at assuring grade-level performance than others. That may mean that you're better off exposing kids to far more difficult texts straight through; it may mean ramping them up using texts that are "just-right". I don't know the answer, but this seems like an empirical question that we can just wait and see the results of. Even once (or if) the question is answered, it's not clear to me that standards should address this (as opposed to curriculum, education, and PD, which is where specific pedagogy may be more appropriate).
4) Isn't grade-level just a typical-performance-at-typical-rates-for-a-typical-student kind of thing? Expectations are really just the prediction for the most typical case (at least in my mind). So to me, it's all nonsense anyway. With a good CAT that's not limited by a stupid restriction about grade-level content, you can be more precise about keeping track of kids on their individual trajectories, which may be faster or slower at different points of literacy attainment. The red flag shouldn't be "atypical"; it should be "near the point of being so atypical that catching up is too rare".

Tom Hoffman said...

Yeah, I know that wasn't the clearest post.

I'm not an expert on the voluminous reading research, but it seems pretty unlikely that anything as simple as "just make the kids read grade level texts all the time" is going to turn out to be the answer. Or, for that matter, the opposite.

The whole thing seems like an intra-literacy business knife fight to me.