I don't have a lot of hope for a system that sees learning largely as a function of time or time of day, rather than as a function of good instruction and rich tasks. It isn't useless. But it's the wrong diagnosis. For instance, if a student's click rate on multiple-choice items declines at 9:14 AM, one option is to tell her to click multiple-choice items later. Another is to give her more to do than click multiple-choice items.
It seems to me that the low-hanging fruit here might not be using this kind of data to teach better, but using it to increase test performance. If the learning software tells the data warehouse that a student will score highest on an assessment of standard 3 administered Tuesday at 10:00 in the morning, if the subject of the prompt is basketball, and the answer is multiple choice, is there any reason the high-stakes test couldn't or shouldn't use that to increase (or decrease) the student's score (in math or ELA)? If we're going to have high-stakes embedded, adaptive assessments, this all gets pretty blurry.