For most teachers, the first reaction to the findings in the "Effectiveness of Reading and Mathematics Software Products: Findings from the First Student Cohort" report is likely to be some variation on "Of course you can't raise test scores significantly in one year." But for ed-tech bloggers, shouldn't the second reaction be "I always figured that software sucked"? I mean, it isn't as if I read a lot of people raving about LeapTrack or KnowledgeBox. Isn't our interest in social software implicitly based on a rejection of the stuff they were looking at in this study?
The software may indeed suck. If so, it might be useful to figure out why. It's entirely possible that the software does a perfectly good job of meeting the requirements and specifications that were laid out for the engineers who built it. So who creates those requirements and specifications? It usually starts at the business end.
That's right, educators and consultants: the finger-pointing could be coming back around to you!
Yes, Jim, I'm not trying to blame the programmers.
Yep, I know — didn't mean to come off sounding too sensitive.
What the heck, we're used to it anyway. There must be a college computer programming course for it by now: Taking The Blame 101.