So I went to the public meeting in Boston for feedback and expert commentary on the Race to the Top assessment program. This is the "$350 million to get people to shut up about the fact that our assessments aren't currently good enough to serve as the foundation for the other $4 billion we're putting into RttT" part of the program.
Writing my comment was difficult, because you don't want to sound like a random obsessive compulsive crank, but obviously only an obsessive compulsive crank would show up at one of these things of his or her own free will. So I took a hyper-bureaucratic and hyper-technical point of view, and actually ended up with something that I really like. It doesn't directly address my overall anger and dissatisfaction with the whole endeavor, but ultimately it served as some kind of emotionally satisfying, if oblique, performance art. To me, at least.
Anyhow, when I got there I immediately ran into David Niguidula, and a few minutes later the two of us were having breakfast with Linda Darling-Hammond, with David and LDH chattering away about digital portfolios and performance assessment. So immediately my expectations for the utility of the trip were greatly exceeded.
Subsequently, I sat through the long expert panel on Technology and Innovation in Assessment. It was a miasma of boredom. Tom Vander Ark should have been forced to sit through it. I certainly didn't leave feeling like we're on the cusp of a revolution.
One thing that was talked about quite a bit was the problem of "comparability." What if your online assessments consistently produce different scores than your paper ones? How do you know if it is a bug or a feature? I'm happy to allow this to be someone else's problem.
Eva Baker from CRESST did talk about "ontologies" in assessment, which I anticipated after looking at some of her other presentations, so that helped me make my pitch in five minutes without getting hung up on trying to define the term.
For the public input section, there were about 10 of us with reserved five minute slots: David, Larry Burger from Wireless Generation, someone from some other vendor, some guy from Connecticut who made an open source/Moodle/Elgg pitch, some grad student from the Education Department at Brown who made me think they must have added a new program in being an arrogant asshole, with the balance advocating for specific kinds of accessibility and accommodations.
Larry's talk was interesting; he talked about the need to accommodate more agile development strategies, finding the balance between openness and proprietary solutions, and the problems created by complex procurement laws. I managed to get my talk in exactly on time. Larry said "nicely done," and Dr. Baker was pleased that someone other than her was talking about ontologies for once, and didn't seem to mind that I'd tweaked CRESST a bit.
I stuck around for LDH's afternoon presentation on international perspectives on high school assessment. Her line of argument strikes me as airtight and devastating, aimed right at the heart of the whole "competitiveness" premise for reform. The school systems around the world that are outperforming us (supposedly) simply aren't anything like the one that "reformers" are advocating.
When you read about the Department of Education "standing up to the establishment," understand that in practice this means "ignoring comprehensive, authoritative arguments from the established experts in the field."
Also, the thread of open source conversation was largely "We know what it is, we know we want some, but how much?"
"The school systems around the world that are outperforming us (supposedly) simply aren't anything like the one that 'reformers' are advocating." That is EXACTLY the point. People have used those other school systems as justifications for just about any shiny new reform strategy--with little regard for what actually happens in those countries.