Causation and Correlation in Education

In Debates and Polemics, Evidence in Education, Research Evidence by Alex Quigley

I have had an interesting fortnight in my role as a school leader and Research-lead. In this job, you get to share a lot of teacher training materials and the like, coupled with, or more often decoupled from, the evidence. In just the last couple of weeks I have been repeatedly ‘exposed’ to popular zombie edu-theories that simply won’t die. Discredited ideas keep bouncing back, recast and relabeled to promise something new to a generation of hard-pressed teachers.

I’ve had the ubiquitous learning styles foisted into my inbox. The crumbling edifice that is the ‘learning pyramid’, or cone, or whatever it is branded as. I have seen a fistful of dubious GCSE programmes that proclaim that their evidence will secure the GCSEs of your students’ dreams. Sadly, I really could go on and on.

There is no easy antidote. Working with experts like Professor Rob Coe and Stuart Kime on the RISE project helps. Reading excellent blogs like this one from Nick Rose certainly helps too. Reading the newly created Edudatalab has been a boon in this regard. Networks like ResearchEd and organizations like the Institute for Effective Education provide ballast to steady the ship against the rising tide of bullcrap.

What we eventually need is a workforce of teachers who are critical consumers of research evidence and powerfully evidence-informed.

Now, I have a huge amount to learn about research evidence, but one of the turning points in my understanding was when I grasped the difference between correlation and causation (a threshold concept for research evidence):

“Correlation is a statistical measure (expressed as a number) that describes the size and direction of a relationship between two or more variables. A correlation between variables, however, does not automatically mean that the change in one variable is the cause of the change in the values of the other variable.

Causation indicates that one event is the result of the occurrence of the other event; i.e. there is a causal relationship between the two events. This is also referred to as cause and effect.

Theoretically, the difference between the two types of relationships are easy to identify — an action or occurrence can cause another (e.g. smoking causes an increase in the risk of developing lung cancer), or it can correlate with another (e.g. smoking is correlated with alcoholism, but it does not cause alcoholism). In practice, however, it remains difficult to clearly establish cause and effect, compared with establishing correlation.”    (Source: Australian Bureau of Statistics)

This is of course of crucial importance in schools. We are constantly being sold silver bullets whose evidence is based on loose correlation (or worse) and nothing like causation. Fundamentally, we must move toward better evaluating what we do. We can ask the question: when do we attempt to put a control group and a treatment group in place for our latest innovations? Finding evidence of causation is obviously very tricky: it requires decent controls and a transparent statistical model that doesn’t fiddle the numbers to dredge up a positive result.
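What a "transparent statistical model" might look like can be sketched very simply. This is a hypothetical example, not a recipe: the score gains are invented, and a real trial would need proper design, but a permutation test is about as transparent as statistics gets — shuffle the group labels and ask how often chance alone matches the observed difference.

```python
import random

random.seed(1)

# Hypothetical test-score gains for a treatment class and a control class.
treatment = [random.gauss(6, 3) for _ in range(30)]  # intervention group
control = [random.gauss(4, 3) for _ in range(30)]    # business-as-usual group

observed = sum(treatment) / len(treatment) - sum(control) / len(control)

# Permutation test: reshuffle the 60 pupils into two arbitrary groups
# many times, and count how often chance produces a difference at
# least as large as the one we actually observed.
pooled = treatment + control
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:30]) / 30 - sum(pooled[30:]) / 30
    if diff >= observed:
        count += 1

p_value = count / trials
print(f"observed gain difference: {observed:.2f}, p ≈ {p_value:.3f}")
```

A small p-value says the gap between groups is unlikely to be chance; it still only supports causation if the groups were genuinely comparable to begin with, which is the whole point of having a control.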

Most ‘evidence’ in schools, and education more widely, fails this test.

The debate about evidence and what has value is now part of the educational landscape. The evidence of a randomised controlled trial is matched up against political ideologies and personal prejudices at every step. We are forced to navigate a minefield of information. Teachers don’t know what to believe, and so they stop listening.

Of course, we in schools are guilty of this basic failing when we analyze our own evidence. In our punitive accountability model we are not encouraged to evaluate our interventions and their impact honestly. We work backwards: we spent money on X, results improved generally, therefore X caused the improvement and is worth the money. The perils of this lazy correlating pattern are brilliantly exposed by Tyler Vigen’s aptly titled website ‘Spurious Correlations‘ (thanks to Stuart Kime for sharing this gem). Take a look at these two graphs – as they’re graphs, we of course give them credence:


And there is this irrefutable evidence too!


These examples are comic, but it isn’t a quantum leap from them to our own estimations when we evaluate school spending and the like. We buy shiny new tablets, or we create a brilliant brain-friendly programme, and – hey presto – students do better. Our new thing is the thing, of course! School leaders and teachers can sink their hearts and their next promotion into such interventions – there are potent reasons not to evaluate well, nor to seek out causation rather than dubious correlation. There are issues with control groups, and with the efficacy of trials, but we should approach these head on in the pursuit of better evidence.

When presented with evidence we should question the correlation and causation. When setting up evaluations of our own we need to be mindful of this too. Setting up a new time-consuming intervention, that costs teacher time and students’ curriculum time, must be evaluated better if we really want to go some way to having robust evidence. We all have a long way to go.

Now, I’m off to watch a Nicolas Cage film and eat some chicken!


  1. Alex – great couple of correlations, and a whole load more (and an amusing video too) here from Tyler Vigen
    An even deeper (but still highly intelligible) explanation of all the effects around stats, gambling and causation (Hawthorne effect and gamblers fallacy) can be read here –
    Most of this stuff has been taught to undergrad psychologists, biologists and statisticians since the start of the modern era of big data (even though it wasn’t called that back then in the 60s and 70s). Sadly, evidence has never got in the way of politicians wanting to make an impact, hence they have invented stuff such as free schools and levels etc. simply because it suits their purpose. What is really so sad is just how politicised the DfE and Ofsted have become over the past 30 years, spinning and mangling (like your bedsheets) what works in education in so many ways. How DfE spokespersons can write statements claiming what works, when it does not, must make them blush behind closed doors.

  2. So how do you get a ‘control’ group where the only variable is the one you are investigating? You need the students, teacher, school, timetable slot, weather (?!) and any number of other things to remain constant to get a genuine control group and isolate the variable. Anything else is bedsheets and cheese, no?

    Maybe we need to start educating white mice in a vacuum….

    1. Author

      White mice would be easier!

      We can surely better isolate the variables and get nearer to better evidence than doing nothing?

      – Have the same teacher, or have the teachers co-plan and execute a similar intervention, as close as you can.
      – Group the students according to a simple baseline test; stratify them so that you have more even groups; eliminate outliers from the sample; have a big sample etc.
      – Timetable slot: judge whether that is a useful variable – record it in process evaluation – exercise judgement. Weather – ditto.
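      The baseline-test-and-stratify step above can be sketched in a few lines. This is purely illustrative – the pupils and scores are invented – but it shows the idea: rank by baseline score, then randomly split each adjacent pair between the two groups so neither starts with an advantage.

```python
import random

random.seed(7)

# Hypothetical pupils with baseline test scores out of 100.
pupils = {f"pupil_{i}": random.randint(40, 90) for i in range(24)}

# Stratify: rank by baseline score, then randomly assign one of each
# adjacent pair to control and the other to treatment, so the two
# groups stay balanced on the baseline measure.
ranked = sorted(pupils, key=pupils.get)
control, treatment = [], []
for i in range(0, len(ranked), 2):
    pair = [ranked[i], ranked[i + 1]]
    random.shuffle(pair)
    control.append(pair[0])
    treatment.append(pair[1])


def mean(group):
    return sum(pupils[p] for p in group) / len(group)


print(f"control mean: {mean(control):.1f}, treatment mean: {mean(treatment):.1f}")
```

      The two group means land close together by construction, which is exactly what a fair comparison needs at the start of an intervention.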

      You can go full bedsheets and cheese – which is our current de facto position – or you can find better evidence, more controlled, accepting the infinite variety of social experience – and get nearer to establishing causation. Triangulating evidence, balancing qualitative judgement with quantitative, and making judgements – all alongside a handful of wisely selected controls (with a decent statistical model) – is surely better than bedsheets and cheese, no?

      I watched a science lesson earlier this week. The 6th form students were conducting an experiment. Those who followed some of the more crucial instructions, like how to use the pipette properly and how many droplets to apply, emerged with clearer evidence. That is not to say their experiments were perfect. Now, my analogy is not suggesting children are to be experimented upon, but evaluation in schools is pretty much devoid of controls. Let’s attempt to do a better job. I find too often that the relativist argument that there are simply too many variables gives an excuse for nonsense evaluations. We are not operating in a vacuum on mice, but that shouldn’t stop us having more method than madness.

      I need to get Rob Coe and Stuart Kime to meet with the Enquiry group – will email them!

    2. Yes, a control group is ideal, but correlation is useful for initial exploration, to see if anything seems closely associated with the outcome; it can often be surprising and informative, possibly leading thinking in unexpected directions.

  3. Pingback: Causation and Correlation in Education – HuntingEnglish | The Echo Chamber

  4. I have an interest in the universal infant free school meals policy. You will be hard pressed to find a better example of money being thrown at an idea that has little if any statistically sound evidence base.

    As if spending billions on feeding the children of parents well off enough to pay for meals, isn’t bad enough, there is no research into checking if any benefits are realised.

    So not only are we spending billions on a policy without any real evidence, we won’t know if it makes a difference or not.

    It’s a good job they aren’t cutting school budgets whilst persevering with this expensive folly.

  5. Correlation does not imply causation, but the cause is always correlated with the outcome. So if you could correlate everything with an outcome, then among the spurious correlations would be the one that is the causal relationship (although correlation alone can never prove this).

    1. Author

      Thanks – really interesting! I really need to work hard at learning all these interesting statistical patterns.
