Most of the disaggregated data in RIDE's new growth data browser is not particularly useful or illuminating, because you're mostly looking at segregated schools and districts. One exception is disaggregation by grade level, which gives you cross sections of students within each school that should (generally) share the same demographic characteristics.
Play with this. You can hover over or click on the districts and grade levels on the left. Also toggle between reading and math.
Let me just cut to the chase here: what I find peculiar is that there are lots of small districts (3-5 schools listed) where there are 20 to 30 point differences in growth level between different grade levels. And, when you flip between reading and math, the overall span tends to remain the same, but the dots completely rearrange themselves (that is, 3rd grade math might be at 65% growth and 5th grade math at 40% in Smallville Schools, and the opposite in reading).
On the level of an individual small school, you could surmise that this actually reflects a difference in teacher quality -- or random error. The circles really could use some error halos. Once you get up to the larger districts, the differences in growth between grades smooth out.
I can't come up with a hypothesis (other than random noise) for why there would be so much variation among several classes worth of students at different grade levels, within the same small districts and mostly within the same schools, mostly using the same curricula, personnel policies, etc., especially if it is not consistent across reading and math.
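To get a feel for how much noise alone can produce, here's a crude simulation: assume every student's growth percentile is an independent uniform draw from 1 to 99 (a deliberately oversimplified null model -- real growth percentiles are correlated within classrooms, and the cohort sizes here are my guesses, not RIDE's actual counts) and look at how much the cohort median bounces around.

```python
import random
import statistics

random.seed(0)

def median_spread(n_students, n_trials=2000):
    """Simulate the median growth percentile for a cohort of n_students,
    where each student's percentile is an independent uniform draw from
    1-99 (pure noise, no real teacher or school effect). Return the
    5th-95th percentile range of the cohort median across n_trials."""
    medians = []
    for _ in range(n_trials):
        cohort = [random.randint(1, 99) for _ in range(n_students)]
        medians.append(statistics.median(cohort))
    medians.sort()
    return medians[int(0.05 * n_trials)], medians[int(0.95 * n_trials)]

# Roughly: one grade in a small school, a mid-size school, a large district.
for n in (25, 100, 500):
    lo, hi = median_spread(n)
    print(f"cohort of {n:3d}: 90% of medians land between {lo:.0f} and {hi:.0f}")
```

Under this null model, a single classroom-sized cohort of 25 can easily swing 20-plus points in either direction with no real difference at all, while a district-sized cohort of 500 barely moves -- which matches the pattern of small-district dots scattering and large-district dots clustering.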
I'd also note that the growth model should account for variations in the difficulty of tests from one grade level to the next, and indeed there doesn't seem to be a consistent pattern of, say, low growth in fourth grade math across all schools.
More raw data would help, in particular multiple years. Considering the importance of this system to teacher evaluations, it would be a good thing for the union to raise a stink about. I don't see any reason we don't have five years' worth of data in here, and that would really show us how stable the growth numbers are.