I’m a huge fan of data. I think anyone who argues against using it doesn’t understand it. No, data should not trump our professional decision-making. Yes, qualitative data are valid. Yes, intuition is still helpful. That said, there are certain pitfalls that come with using data in education. One lies in the domain of “indicators,” or general outcome measures (GOMs, as data nerds like to call them).

Here’s the problem: indicators help us keep track of overall progress toward a goal, and are generally statistically connected to big-picture questions like, “Can this child read?” For example, oral reading fluency (ORF) is a relatively “small” behavior, but it is connected to some big-picture things, and even helps us answer seemingly unrelated questions like, “Can this child actually comprehend what she reads?”

The pitfall, though, is that sometimes we treat indicators not just as our main points of assessment, but also as our points of intervention. We assume that if we just fix the indicator, the underlying problem will go away too. With oral reading fluency, for example, it’s easy to think that if we just get the child reading faster, he will do all of those other things better too, like comprehend. In short, if we “fix the indicator,” we’ve fixed everything the indicator speaks to. This, unfortunately, is not usually true. In other words, while ORF does help us understand general reading ability, fixing or improving ORF does not mean we’ve fixed overall reading ability. True, we may have fixed part of it, but not necessarily.

Case in point: this article cites research showing that kids from struggling schools attend more often, but still demonstrate behavior problems. Here’s how this is related: it seems that, a while ago, schools picked up on the fact that poor attendance was linked to lower performance on tests, more behavior problems, and lower overall school rankings. So, they did what many reasonable people would do: they fixed attendance.
Now, we’re seeing the problem with the “fix the indicator” mentality: attendance was a proxy for overall school performance, but it wasn’t the causal factor. It was itself a symptom of a lot of other problems, like low school engagement, behavior problems, a lack of supportive home environments, etc. So, schools ended up targeting and fixing an indicator, only to realize that the indicator was just a symptom, another effect of more deep-seated problems.

As I alluded to in the previous paragraph, it may be helpful to consider this from the angle of Statistics 101: causation, correlation, and spurious variables. If we think of cause-and-effect chains as a chronological sequence of events, each one impacting the next, with branches of causation leading to different effects, we might find that indicators aren’t at the origin of the chain. In fact, the indicator may be toward the end of the chain, a mere byproduct or outcome of the things that came before it. But, for whatever reason, the indicator happens to be connected to a lot of other variables.

For a non-education example, take weight or BMI. Weight is itself a cause of some problems, but it is symptomatic of a LOT of other problems, not as a causal agent, but as an outcome. Higher weight is related to a lot of things, both as cause and effect. If we measure weight, we take a shortcut (in a good way) toward monitoring overall health. However, we can’t just assume that if we lose weight, we’ve fixed all our health issues. Maybe some, but not all.

So, circling back to the referenced article: school attendance is a good indicator because it helps us identify kids who might need extra assistance. But once they’re identified, we can’t just fix the indicator and assume the true underlying problems will vanish.
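For the statistically inclined, the point above can be sketched as a toy simulation. This is purely illustrative, not real attendance data: I'm assuming a hypothetical latent factor ("engagement") that drives both the indicator (attendance) and the outcome we actually care about (test scores). The two end up strongly correlated, yet forcing attendance up does nothing to scores, because attendance was never the cause.

```python
import random

random.seed(0)

# Hypothetical model, all numbers are made up for illustration:
# an unobserved root cause ("engagement") drives both attendance
# (the indicator) and test scores (the outcome).
def simulate(intervene_on_attendance=False, n=10_000):
    attendance, scores = [], []
    for _ in range(n):
        engagement = random.gauss(0, 1)               # unobserved root cause
        att = 0.8 * engagement + random.gauss(0, 0.5)
        if intervene_on_attendance:
            att = 2.0                                 # "fix the indicator" directly
        # Score depends on engagement, NOT on attendance itself.
        score = 0.8 * engagement + random.gauss(0, 0.5)
        attendance.append(att)
        scores.append(score)
    return attendance, scores

def mean(xs):
    return sum(xs) / len(xs)

def corr(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

att, sc = simulate()
att_fixed, sc_fixed = simulate(intervene_on_attendance=True)

# Observationally, the indicator looks great: strong correlation.
print(f"corr(attendance, score): {corr(att, sc):.2f}")
# But intervening on the indicator leaves the outcome unchanged.
print(f"mean score, baseline:     {mean(sc):.2f}")
print(f"mean score, intervention: {mean(sc_fixed):.2f}")
```

The correlation makes attendance a perfectly good *screening* measure, which is exactly the article's point: useful for identifying kids, useless as the thing to "fix."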