Edison committed serious resources, talent, and energy to the effort, and as authorizer of the two schools we did our best to understand what Edison was doing and to gauge whether the blended learning effort was working for kids. Frankly, this was hard to do, because the only data we had to validate the program's effectiveness were state test scores and data shared with us by Edison. The state test scores showed mixed results, while the Edison data showed student achievement trending upward in both schools.

After two years of serious commitment, Edison largely moved on from the E2 effort in Dayton. We came away disenchanted because we couldn't find solid evidence of student learning gains. We also came away appreciative of just how hard and expensive it is to integrate digital learning experiences and opportunities into the academic program of high-need urban schools, and how difficult it is to create viable accountability models for such programs.
Smells like an attempt to create a sustaining innovation. Not that there's anything wrong with that.
You generally don't have evaluations of disruptive innovations, since they spread somewhat unexpectedly. But if you really wanted an evaluation of a disruptive innovation on its own terms, you'd be looking for an outcome like: "It wasn't as good, but it was good enough, spread like a weed, and was really cheap!"