RIDE's student growth percentile (SGP) browser has been updated with this year's test score data. This matters because SGPs are the basis of all the new data-driven ratings of schools and teachers. Having two years of data allows one to get a sense of the overall volatility. Of course, RIDE has the data to extend this back five or six years, which would make it a lot easier to judge the validity of these scores against actual experience, but they've decided not to load that data into the system.
It is hard to make sense of the data in aggregate, and the details seem fairly idiosyncratic. Quite a few whole schools are jumping around by 10-20%, which seems pretty volatile.
The most useful and interesting thing you can do is break down elementary or middle schools (no high schools) that you are familiar with by grade level and see if the patterns make any sense in terms of individual teachers or cadres of students. Do the big swings in growth scores correspond to actual changes on the ground, or is it the same teachers doing the same things with (supposedly) different results?
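If you pull the grade-level numbers out of the browser yourself, the comparison is simple enough to script. Here is a minimal sketch of that kind of check; the school, grade labels, and median SGP values below are entirely hypothetical, just to show the shape of the calculation:

```python
# Median SGP by grade for one hypothetical school, two years of data
# (all numbers invented for illustration).
sgp_2011 = {"grade 3": 62, "grade 4": 48, "grade 5": 55}
sgp_2012 = {"grade 3": 44, "grade 4": 51, "grade 5": 71}

# Year-over-year change in median SGP for each grade.
swings = {g: sgp_2012[g] - sgp_2011[g] for g in sgp_2011}

# Flag grades whose median moved more than 10 points either way --
# the sort of swing worth checking against what actually changed
# in the building.
volatile = {g: d for g, d in swings.items() if abs(d) > 10}

for grade, delta in sorted(volatile.items()):
    print(f"{grade}: {delta:+d} points")
```

A grade that swings 15-plus points with the same teachers and a similar cohort is exactly the kind of case the question above is asking about.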