There really is a striking progression of data loss in this analysis. Most generally, as discussed above, IFF uses proficiency rates (how many students score above or below the line), which ignore underlying variation in actual scores. In addition, they're using cross-sectional grade- and school-level data, which masks differences between students in any given year and over time. Then they use the rates (and projected rates) to calculate rankings, which ignore the size of the differences between schools. And, finally, the rankings are averaged and schools are sorted into quartiles (performance "tiers"), losing even more data – for example, schools at the "top" of "Tier 2" may have essentially the same scores as schools at the "bottom" of "Tier 1." At each "step," a significant chunk of the variation between schools in their students' testing performance is forfeited.
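The last two steps – ranking and tier-sorting – are easy to see in a toy example. The sketch below uses invented proficiency rates (none of these numbers come from the IFF report) to show how two schools separated by a tenth of a point can land in different tiers:

```python
# Invented proficiency rates (percent of students at or above the cut
# score) for eight hypothetical schools -- illustration only.
rates = {"A": 81.0, "B": 74.3, "C": 74.2, "D": 73.9,
         "E": 71.5, "F": 66.0, "G": 58.3, "H": 41.7}

# Step 1: convert rates to rankings (rank 1 = highest rate).
# The size of the gaps between schools is discarded here.
ranked = sorted(rates, key=rates.get, reverse=True)

# Step 2: sort the ranked schools into quartile "tiers"
# (Tier 1 = top quartile). More variation is discarded here.
n = len(ranked)
tiers = {school: (i * 4) // n + 1 for i, school in enumerate(ranked)}

# Schools B and C differ by only 0.1 percentage points, yet B is at
# the bottom of Tier 1 and C is at the top of Tier 2.
print(tiers["B"], tiers["C"])  # -> 1 2
print(rates["B"] - rates["C"])
```

A school's tier label, in other words, can hinge on a difference far smaller than the differences the label hides within each tier.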
Monday, February 06, 2012
Don't Pretty Much All the Systems for Determining "Persistently Low Performing" Schools Have this Problem?