Wednesday, November 07, 2012

Nate Silver, Bayes, Teacher Evaluation, VAM, etc.

The whole Race to the Top-driven system of teacher and school evaluation is a tall, steaming pile of shit: the wrong premise, wrong theory of change, wrong ideas about human motivation, management, measurement, philosophy of education, the meaning of life, etc. There are a few layers of clean straw in there maybe (yes, student surveys provide useful data; that's why RI has done them annually for many years), but on the whole, it is stinky on top of stinky.

From my outside perspective, however, the most annoying layer was the top one, where some formula turns a bunch of data into a score for the school or teacher. Even if you accept everything leading up to that point, and you accept that it is useful to reduce all that stuff to one number or letter or rating, the systems themselves look like they were pulled out of the rectum of some jackass from The New Teacher Project.

Other, more statistically adept bloggers like Bruce Baker and Matt DiCarlo have written about these things in more detail. What's screamingly obvious to me is the lack of serious literature about how this kind of analysis should be done.

Meanwhile, I've been reading Nate Silver's new book, The Signal and the Noise, which centers on the role of Bayesian analysis in his work and its usefulness in general. I'd note that since this is not a book about education, he apparently does not feel the need to try to convince you that he or his friends invented it. A refreshing change after, say, Paul Tough.

Anyhow, basically, Bayesian probability is good for turning an ongoing stream of noisy data into a high-probability hypothesis. Like, for example, turning a sequence of close, high-margin-of-error polls into a forecast with 90% confidence. It should work pretty well for teachers and schools too.
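If you want to see how little machinery this takes, here's a minimal sketch of the updating arithmetic. This is not from Silver's book and the numbers are made up: I'm just assuming that a poll showing candidate A ahead is somewhat more likely if A really is ahead (60%) than if A is actually behind (40%), and watching a string of close polls push the posterior up.

```python
# A minimal sketch of Bayesian updating on a stream of noisy data.
# Hypothetical numbers, not from Silver's book.

def bayes_update(prior, p_data_if_true, p_data_if_false):
    """Posterior probability of the hypothesis after one observation."""
    numerator = prior * p_data_if_true
    denominator = numerator + (1 - prior) * p_data_if_false
    return numerator / denominator

# Start agnostic: 50/50 that candidate A is actually ahead.
prior = 0.5
for poll in range(10):  # ten close, noisy polls, each showing A narrowly up
    prior = bayes_update(prior, p_data_if_true=0.6, p_data_if_false=0.4)
    print(f"after poll {poll + 1}: P(A is ahead) = {prior:.2f}")
```

No single poll is worth much on its own, but ten of them pointing the same way push the posterior past 95%, which is roughly the shape of Silver's election forecasts.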

For example, say you have a teacher you've got a high level of confidence in. Then you get a VAM report with a high margin of error that says she's a Bad Teacher. A Bayesian analysis would conclude that after this single unreliable data point there is still a high probability she's a Good Teacher. Whereas what we're doing now is just throwing that dubious number in with a few other dubious numbers collected this year and hoping that the errors average each other out. You could also do things like weight the probability of a good teacher getting an anomalously bad observation higher than the probability of a bad teacher getting an anomalously good one (a likely hypothesis to me, at least).
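Running the same update on the teacher example, again with numbers I'm inventing purely for illustration (a 90% prior that she's good, and a noisy VAM report that's assumed only somewhat more likely to flag a genuinely bad teacher than to misfire on a good one):

```python
# Made-up numbers to illustrate the single-bad-VAM-report scenario.
p_good = 0.9                 # prior: P(good teacher), based on everything else we know
p_bad_rating_if_good = 0.3   # assumed P(VAM says "bad" | good teacher): high error rate
p_bad_rating_if_bad = 0.6    # assumed P(VAM says "bad" | bad teacher)

posterior = (p_good * p_bad_rating_if_good) / (
    p_good * p_bad_rating_if_good + (1 - p_good) * p_bad_rating_if_bad
)
print(f"P(good teacher | one bad VAM report) = {posterior:.2f}")  # roughly 0.82
```

Under those assumptions, one bad report only drags her from 90% down to about 82%: still very probably a Good Teacher, which is exactly the point about not letting a single noisy number swamp everything else you know.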

The thing is, this change is not going to happen, because it would generally emphasize the unreliability of the data and specifically make it harder to get rid of good experienced teachers. That's much more important to the privatizers than accuracy.

Sherman Dorn also has a good post today on Silver's book.
