Wednesday, March 23, 2011

The (Lack of) Correlation Between Observation and Test Data

Bill Turque:

Teachers with high “value-added” scores--meaning that their students met or exceeded predicted growth targets on the DC CAS--didn’t necessarily do well on the Teaching and Learning Framework (TLF), the nine-part test of classroom skill that is at the heart of IMPACT. “At this early stage in the use of value-added analysis nationally, the hope is that there is a strong correlation between a teacher’s score on an instructional rubric and his or her value-added score,” Curtis wrote. “This would validate the instructional rubric by showing that doing well in instruction produces better student outcomes.”

But that isn’t quite the case. A DCPS analysis showed only a “modest correlation” between the two ratings (3.4).
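(By “correlation” DCPS presumably means something like Pearson’s r computed over each teacher’s pair of scores. Here’s a quick sketch of that computation in Python, with entirely made-up numbers rather than DCPS data:)

    from math import sqrt

    def pearson_r(xs, ys):
        """Pearson correlation coefficient between two equal-length lists."""
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
        sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
        return cov / (sd_x * sd_y)

    # Hypothetical scores for ten teachers (illustrative only).
    value_added = [48, 55, 62, 41, 58, 50, 67, 45, 53, 60]            # value-added scores
    tlf_rubric  = [3.1, 2.8, 3.5, 2.6, 3.9, 3.0, 3.2, 2.9, 3.6, 3.3]  # TLF ratings (1-4 scale)

    print(f"r = {pearson_r(value_added, tlf_rubric):.2f}")

An r near 1 would mean the two measures largely agree; a “modest” r means they only loosely track each other, which is the whole problem.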

I guess I shouldn't be surprised that technocrats would expect to come up with an observation system that correlates highly with test scores, but every teacher knows that whatever edict comes down from the central office is as likely to hinder learning in your classroom as to help it.

Indeed, one premise of this whole effort is that eventually we'll be able to use value-added analysis to test the efficacy of all kinds of things, like curricula. Which means, of course, that we're assuming teachers will sometimes be ordered to do things that are ineffective and lower their scores.
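(To make that concrete: “testing a curriculum” with value-added data would amount to something like comparing the score distributions of teachers assigned to each curriculum. Another sketch, again with invented numbers and assuming scipy is available:)

    from scipy.stats import ttest_ind

    # Hypothetical value-added scores for teachers under two curricula.
    curriculum_a = [52, 48, 61, 55, 50, 58, 47, 53]
    curriculum_b = [45, 50, 43, 49, 46, 51, 44, 48]

    # Two-sample t-test: do the two groups differ in mean value-added score?
    t_stat, p_value = ttest_ind(curriculum_a, curriculum_b)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

    # If the mandated curriculum were the one producing lower scores,
    # teachers following orders would take the hit on their ratings.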

I know that's obvious, but apparently it needs to be pointed out.

1 comment:

Jason said...

Worth reading: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.172.8045&rep=rep1&type=pdf

The paper has since been published, but the final peer-reviewed version is behind a paywall.