As a result, my feeling is that the ed-tech world should converge quite aggressively on a set of anonymized-data standards, and spend quite a lot of effort explaining to various management types that the data is great for comparing teaching methods, on an aggregated basis, or working out which technologies are getting the most enthusiastic uptake — but that it should not be used for comparing teachers, on an individual basis.
About 10 years ago I found myself working on a large foundation-funded project to improve the "information infrastructure" of schools. It was headed by some of the biggest names in educational research and development in the country. I had a somewhat ambiguous and idiosyncratic role in the project, representing an actual in-school practitioner, and I ended up doing a good deal of largely self-directed research and development into exactly the kind of data standards Salmon is calling for above.
The direction the project took, regardless of its original intention, turned out to be driven entirely by the principal investigators' prior research and, even more so, by the perceived desires of Gates, Hewlett, and the other big foundations (working in the word "literacy" turned out to be crucial). This is, of course, the way such things always work. It was clear that none of the PIs or foundations were actually interested in "information infrastructure," so I put that line of inquiry aside.
Had someone like Tom Vander Ark understood the need to invest in that kind of R&D back then, they'd be vastly further along on their path to data-driven nirvana today.
In retrospect, I suppose I'm glad for their lack of vision, but I still find the whole episode incredibly annoying.