RIDE's Office of Instruction, Assessment, & Curriculum confirmed to me that there is no technical reason they cannot include historical NECAP data in the new Rhode Island Growth Model Visualization Tool. Basically, preparing the data takes some time, and they had other priorities.
I can't imagine it takes that much data munging, since it seems like it should all be derived from NECAP data sets, and one would hope that data would be relatively consistent over the years.
And while it seems like an arcane, geeky point, it is something that both the unions and all people interested in the new teacher evaluation system not being a bogus fiasco should care about.
For large sample sizes, the Growth Model numbers make sense and seem quite consistent when disaggregated by grade level or compared between math and English. Smaller sample sizes are increasingly puzzling (as you would expect). Look at Little Compton. I think this is one classroom. Maybe two. In grade 3 reading, they're at the 63rd median growth percentile; in grade 3 math, the 38th. That's presumably the same teacher in both subjects. I don't know what the facts on the ground are, but with a few more years' worth of data, it would be a lot clearer. Maybe the third grade teacher in Little Compton is just bad at math? What's up with 7th grade in Little Compton having an 84 median growth percentile in reading and a 41 in math? Is that consistent over time? A new teacher? Noise?
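You can get a feel for how much of this is plausibly just noise with a quick back-of-the-envelope simulation. This is a sketch under an assumption of mine, not anything from RIDE's model: suppose a teacher has no effect at all, so each student's growth percentile is an independent uniform draw from 1–99, and see how much the classroom median bounces around at different class sizes.

```python
import random
import statistics

random.seed(0)

def simulated_median_sgp(n_students, n_trials=10000):
    """Simulate classroom median growth percentiles under a
    no-effect null: each student's SGP is an independent uniform
    draw from 1-99 (my simplifying assumption, not RIDE's model)."""
    return [
        statistics.median(random.choices(range(1, 100), k=n_students))
        for _ in range(n_trials)
    ]

for n in (15, 60, 500):
    medians = sorted(simulated_median_sgp(n))
    # Middle 90% of the pure-noise classroom medians
    lo = medians[int(0.05 * len(medians))]
    hi = medians[int(0.95 * len(medians))]
    print(f"n = {n:3d} students: 90% of pure-noise medians "
          f"fall between {lo} and {hi}")
```

With a single classroom of around 15 students, the pure-noise medians sprawl across a wide band around 50, so a 63-versus-38 split between subjects is well within what chance alone can produce in one year; at district-sized samples the band tightens considerably. That's exactly why multiple years of data matter for the small districts.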
If we had six years' worth of historical data expressed in student growth percentiles in this graphical tool, we'd all have a pretty good idea of how stable and meaningful the numbers are.
If this is a good method of evaluating teacher performance, RIDE should prove it to us by including the historical data.