RI SIG SEA Application (FINAL):
Element 1: School-wide Performance in Reading and Mathematics
Element 1 is based on school-wide student performance (all students) in mathematics and reading for the 2008-09 school year. Element 1 identifies those schools with reading and math proficiency rates significantly below the respective state-wide average performance. This element uses one and two standard deviation units below the state average to determine each school's score points as follows:
- 8 points were assigned when overall school performance was more than two standard deviations below the state average. Schools more than two standard deviations below in math had between 0% and 6.6% proficient students and in reading between 0% and 34.4% proficient.
- 4 points were assigned when overall school performance was between one and two standard deviations below the state average. These schools' proficiency rates in math ranged between 6.6% and 29.5% and in reading between 34.4% and 51.2% proficient.
- 0 points were assigned when overall school performance was less than one standard deviation below the state average.
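For concreteness, here is a minimal sketch of the Element 1 rule using the cutoffs quoted above. The application does not spell out how math and reading are combined into a single "overall" score, so this sketch scores one subject at a time; the function and variable names are mine, not RIDE's.

```python
# Rough reconstruction of the Element 1 scoring rule, not RIDE's actual code.
# Cutoffs are the two-SD and one-SD points quoted above for 2008-09 results.
ELEMENT1_CUTOFFS = {"math": (6.6, 29.5), "reading": (34.4, 51.2)}

def element1_points(percent_proficient, subject):
    """Return 8, 4, or 0 points based on SDs below the state average."""
    two_sd_below, one_sd_below = ELEMENT1_CUTOFFS[subject]
    if percent_proficient < two_sd_below:
        return 8
    if percent_proficient < one_sd_below:
        return 4
    return 0

# Example: a school at 5% proficient in math is more than two SDs below.
assert element1_points(5.0, "math") == 8
```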
Element 2: NCLB Classification
Element 2 identifies schools based on 2008-09 AYP classifications. Schools were assigned score points as follows:
- 2 points were assigned to schools under restructuring
- 1 point was assigned when schools failed to meet AYP for two or more consecutive years
- 0 points were assigned when schools either met AYP or failed to meet AYP for less than two consecutive years
Element 3: Student Growth or Graduation
Element 3 is based on a Student Growth Percentile to measure individual student progress for elementary and middle schools. For high schools, graduation rates were used in lieu of student growth percentiles because growth measures were not possible. Student growth and graduation rates are based on data from the 2007-08 and 2008-09 school years for all students. This element identifies those schools whose median growth percentile is typical or lower relative to the state average. Elementary and middle schools were assigned the following score points in reading and math:
- 2 points were assigned when median growth was below the 40th percentile.
- 1 point was assigned when median growth was between the 40th and 60th percentiles.
- 0 points were assigned when median growth was above the 60th percentile or when the school proficiency rates for math or reading were above state averages of 52% and 68% respectively.
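A similar sketch of the growth rule for elementary and middle schools, again per subject. The 52% math and 68% reading exemption averages are quoted above; everything else is my own naming.

```python
# Rough reconstruction of the Element 3 growth rule for elementary/middle
# schools, per subject; not RIDE's actual code.
STATE_PROFICIENCY = {"math": 52.0, "reading": 68.0}

def element3_growth_points(median_growth_percentile, percent_proficient, subject):
    """Return 2, 1, or 0 points based on the school's median student growth."""
    # Schools already above the state proficiency average get 0 regardless.
    if percent_proficient > STATE_PROFICIENCY[subject]:
        return 0
    if median_growth_percentile < 40:
        return 2
    if median_growth_percentile <= 60:
        return 1
    return 0
```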
Rhode Island was able to use its two most recent years of graduation results for this element because it moved to the NGA cohort formula and was first able to calculate this rate with the graduating class of 2007. Rhode Island has no Title I-eligible high school with a graduation rate below 60%. High schools were assigned the following score points based on 2007-08 graduation rates:
- 2 points were assigned when the school's graduation rate was more than one standard deviation below the overall state average of 73.9%. Schools more than one standard deviation below the state average had graduation rates that ranged from 0% to 57.4%.
- 1 point was assigned when the school's graduation rate was between one standard deviation below the overall state average and the state average itself.
- 0 points were assigned when the school's graduation rate was higher than the overall state average or when the school proficiency rates for math or reading were above state averages of 52% and 68% respectively.
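And a sketch of the graduation-rate version for high schools. The 73.9% state mean and 57.4% one-SD cutoff are quoted above; the rest is my reconstruction, not RIDE's code.

```python
# Rough reconstruction of the Element 3 rule for high schools, based on
# 2007-08 graduation rates; not RIDE's actual code.
GRAD_STATE_MEAN = 73.9
GRAD_ONE_SD_BELOW = 57.4  # one-SD cutoff quoted above

def element3_grad_points(grad_rate, math_proficient, reading_proficient):
    """Return 2, 1, or 0 points based on the school's graduation rate."""
    if math_proficient > 52.0 or reading_proficient > 68.0:
        return 0  # exempt if proficiency is already above the state averages
    if grad_rate < GRAD_ONE_SD_BELOW:
        return 2
    if grad_rate < GRAD_STATE_MEAN:
        return 1
    return 0
```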
Element 4: School-wide Improvement in Reading and Mathematics
Element 4 is based on differences in school-wide student performance for all students in mathematics and reading between the 2005-06 and 2008-09 school years. (Test results for high schools were not available for the 2005-06 school year; for high schools, therefore, results from 2007-08 were used in lieu of the 2005-06 results.) Element 4 identifies those schools with improvement in reading and math proficiency rates significantly below the respective state-wide average improvement. This element uses one and two standard deviation units below the state average improvement (determined by calculating the difference between 2005-06 and 2008-09 school-wide percent proficient in math and reading; Math = 6.6, Reading = 8.6) to determine each school's score points as follows:
- 2 points were assigned when the difference in school performance from 2005-06 to 2008-09 was more than two standard deviations below the state average. Schools more than two standard deviations below in math had a decrease in performance greater than 8.7 percentage points and in reading greater than 8.1 percentage points.
- 1 point was assigned when the difference in school performance from 2005-06 to 2008-09 was between one and two standard deviations below the state average. These schools' decrease in performance in math ranged between 1.1 and 8.7 percentage points and in reading between 0 and 8.1 percentage points.
- 0 points were assigned when the difference in school performance from 2005-06 to 2008-09 was less than one standard deviation below the state average or when the school proficiency rates for math or reading were above state averages of 52% and 68% respectively.
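Finally, a sketch of Element 4 and of how the four elements appear to add up to the 14-point maximum discussed in the comments below (8 + 2 + 2 + 2). The cutoffs come straight from the text; the AYP status labels and function names are my assumptions.

```python
# Rough reconstruction of Element 4 and the overall score; not RIDE's code.
# Cutoffs quoted above: math decreases of more than 8.7 points (two SDs below
# the average improvement of 6.6) or of 1.1-8.7 points (one to two SDs below);
# reading decreases of more than 8.1 points, or of 0-8.1 points, respectively.
# (The exemption for schools already above the 52%/68% state proficiency
# averages is omitted here for brevity.)
IMPROVEMENT_CUTOFFS = {"math": (-8.7, -1.1), "reading": (-8.1, 0.0)}
ELEMENT2_POINTS = {"restructuring": 2, "missed_ayp_2plus_years": 1, "other": 0}

def element4_points(change_in_percent_proficient, subject):
    """Return 2, 1, or 0 points for the 2005-06 to 2008-09 change."""
    two_sd_below, one_sd_below = IMPROVEMENT_CUTOFFS[subject]
    if change_in_percent_proficient < two_sd_below:
        return 2
    if change_in_percent_proficient < one_sd_below:
        return 1
    return 0

def total_score(e1_points, ayp_status, e3_points, e4_points):
    # Maximum possible score: 8 + 2 + 2 + 2 = 14 points.
    return e1_points + ELEMENT2_POINTS[ayp_status] + e3_points + e4_points
```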
I've avoided opening a new obsession in trying to reproduce the calculations used to designate our "lowest-achieving schools," as it would be an easy way to lose a week of my life, raise my blood pressure, and achieve nothing else. Nonetheless, having tracked down at least a description of the process in RI's SIG grant application a few weeks ago, I might as well make it a little easier to find.
I could try to nit-pick this, but it isn't really detailed enough. I would say that while it does a reasonable job of figuring out what the low-performing schools are generally (hint: their students are poor), it doesn't seem to be designed to pull out fine-grained distinctions about which of the low-income schools would benefit most from restructuring. And it is weighted toward old data.
And really, from my perspective, all you really need to know is that we were one of the first states, if not the first, to pick our "lowest," on January 11, 2010, which was only three weeks before we had "new" data from tests given in October 2009, primarily reflecting teaching and learning from the 2008-2009 school year and before.
To me, that just reflects a fundamental disinterest in accurate, timely data.
And by the way, if you look at the SIG grant proposal, they do use the data from the "2009 - 2010" school year as the baseline for improvement, so if, like Central Falls, you went up in the latest data, that didn't help your rating as "lowest-achieving," but you're stuck with a higher baseline. On the other hand, if you went down last year, like Cooley High School, that makes it easier for you going forward.
10 comments:
"It doesn't seem to be designed to pull out fine-grained distinctions about which of the low-income schools would benefit most from restructuring."
Can you elaborate on this point? I'm not sure I understand what you're saying should have been done.
Aside from just looking at more data -- writing scores, five-year graduation rates, college enrollment, SALT data, etc. -- the scoring is very chunky.
For example, 8 points if you're just below 2 SD's under average, 4 points if you're just above. Or for that matter, 4 points if you're just below 1 SD below average. 1 SD is a big range.
That's out of a total of 14 possible points.
Is there any kind of theory or rationale for that kind of scoring?
While those are general critiques, and I appreciate that clarification, I still don't understand what data you're saying should be used to determine, "which of the low-income schools would benefit most from restructuring."
The "rationale" and "theory" question isn't too challenging, IMO. They weighted the issues they were most interested in and used a system which would distinguish the absolute poorest performers in the state and nothing else. There goal wasn't an accurate rank order of all schools so much as it was to clearly separate out the schools that were the poorest performing based on their priorities. While there are always MANY problems at the margins when using a sharp cutoff like that (and one hopes that they double-checked the end result at the margins), when the goal is simply to identify folks that clearly fall on the bottom based on your priorities, a finer grading system is unnecessary.
The right amount of information for the question you're asking, yah? A bit crude with numbers but they're only looking to make a crude, rough distinction.
That's a crock and you know it Jason. They weighted the issues that they were required to by law.
No person actually believes the priorities expressed in those criteria. When the new AYP statuses came out, RIDE downplayed them. Nobody really believes that in ELA reading counts for everything and writing counts for nothing. Nobody really believes that being 2.1 SD's under average is that different than being 1.9 SD's under. Nobody really believes that an inner city kid graduating in five years is no better off than one dropping out. RIDE seems bound to get rid of NECAP as quickly as possible anyhow.
This is just crude, but if that's good enough, why are we going to be putting millions and billions into new data systems and assessments?
"That's a crock and you know it Jason. They weighted the issues that they were required to by law."
And the law/regulations were written by folks that made the decision that those were the priorities.
"No person actually believes the priorities expressed in those criteria."
Someone wrote it for a reason.
" When the new AYP statuses came out, RIDE downplayed them. Nobody really believes that in ELA reading counts for everything and writing counts for nothing. Nobody really believes that being 2.1 SD's under average is that different than being 1.9 SD's under. Nobody really believes that an inner city kid graduating in five years is no better off than one dropping out."
Of course not, but you have to draw a crude line in the sand somewhere. So you're basically saying use writing scores as well and identify schools that are within the 95% confidence interval, because any time a line is drawn the folks who are close to that line are very similar? Makes it hard to identify a set of schools, no? Sometimes we're lucky and the data has natural breaks, but sometimes you have to make a choice at the margin. Just like you said "no one believes a school that's 1.9 SD is different from a 2.1 SD," I'd have to ask what makes a teacher in the bottom 5% of VAM different from one that's in the bottom 7%. You have an inconsistent standard for these things.
"RIDE seems bound to get rid of NECAP as quickly as possible anyhow.
This is just crude, but if that's good enough, why are we going to be putting millions and billions into new data systems and assessments?"
Because we can't go too much deeper than this with NECAP. So we can use some crude measures to make some crude decisions at the margins. But if we want to gather more, better information with greater nuance to make more complex decisions we need better info.
No, we have all the data I referred to above. We have five year grad rates, writing scores, precise reading and math scores. We already have much better data than was used. We just decided not to use it.
FOR NO REASON.
This is a mindless bureaucracy stumbling forward.
You know, we should end this exchange because you agree with the point I've been making, which is essentially that, "it does a reasonable job of figuring out what the low-performing schools are generally."
You just haven't answered, and aren't going to answer, how to describe a school that "would benefit most from restructuring," or what to do with those schools which are low-performing and won't benefit from restructuring.
But "generally" is only good enough when you're flying over at 50,000 feet. It isn't good enough when you live here.
Look at high schools in my neighborhood.
We had one school that essentially executed a full turnaround before being named a tier 1 school.
We had another that's been on a long slow slide that was named a tier 1 turnaround. Score! They're probably a good candidate for a reboot.
A third has over 95% students in poverty, some of the best teachers in the district and pretty good scores, considering. They're tier 2 and apparently their reward will be to be merged with school #2 above, which has a much weaker culture. They could use some support, or just a less segregated student body, but drastic measures or just mixing them with a much weaker school might make things worse.
A fourth is a school which was assembled as an overflow school a few years ago out of a faculty and student body that nobody else wanted and a pretty good principal. For various political reasons (obscure to me) it was made a permanent school. Not surprisingly, it's had horrible graduation rates (46.7% this year! Below Central Falls!). Considering the grab-bag of shit they started with as a faculty, there could be no better candidate for firing everyone and starting over. But they're tier III.
I could go on. We've got a big high school on the West End that has been pushed under the rug for a decade aside from absorbing the teachers pushed out of other turnarounds. Everyone knows they need a kick in the pants of some sort. I guess they'll get it since they're tier II, but really, they're worse off than some of the tier I schools. Central Falls would be a better than average Providence high school.
Of course, there's math. But RI has a great diversity of high schools (from The Met to Mount Pleasant) serving high populations of low-income students, with a wide range of scores in ELA, grad rates, etc., but they're all uniformly stuck at the bottom in math. OK, Times2 is a math/science focused charter with 35 11th graders and their proficiency rate is a stunning 26% (iirc). Thus, while there is a math problem, there's no evidence that a whole-school solution is needed to fix math, or even likely to work.
But "generally" is only good enough when you're flying over at 50,000 feet. It isn't good enough when you live here.
Look at high schools in my neighborhood.
We had one school that essentially executed a full turnaround before being named a tier 1 school.
We had another that's been on a long slow slide that was named a tier 1 turnaround. Score! They're probably a good candidate for a reboot.
A third has over 95% students in poverty, some of the best teachers in the district and pretty good scores, considering. They're tier 2 and apparently their reward will be to be merged with school #2 above, which has a much weaker culture. They could use some support, or just a less segregated student body, but drastic measures or just mixing them with a much weaker school might make things worse.
A fourth is a school which was assembled as an overflow school a few years ago out of a faculty and student body that nobody else wanted and a pretty good principal. For various political reasons (obscure to me) it was made a permanent school. Not surprisingly its had horrible graduation rates (46.7% this year! Below Central Falls!). Considering the grab-bag of shit they started with as a faculty, there could be no better candidate for firing everyone and starting over. But they're tier III.
I could go on. We've got a big high school on the West End that has been pushed under the rug for a decade aside from absorbing the teachers pushed out of other turnarounds. Everyone knows they need a kick in the pants of some sort. I guess they'll get it since they're tier II but really, they're worse off than some off the tier I schools. Central Falls would be a better than average Providence high school.
Of course, there's math. But RI has a great diversity of high schools (from The Met to Mount Pleasant) serving high populations of low-income students, with a wide range of scores in ELA, grad rates, etc., but they're all uniformly stuck at the bottom in math. OK, Times2 is a math/science focused charter with 35 11th graders and their proficiency rate is a stunning 26% (iirc)). Thus, while there is a math problem there's no evidence that a whole school solution is needed to fix math, or even likely to work.