Evidence-Driven Education

In Debates and Polemics, Evidence in Education by Alex Quigley

I started the school year talking with my faculty about our success in the summer and throughout the previous year and, of course, the areas we needed to improve. Upon reflection we could identify some clear reasons why the successes occurred, hard-earned as they were. The reasons primarily centred on having a team of very good teachers who taught very good lessons consistently (to bastardise a Bill Clinton campaign phrase: “It’s the teaching, stupid!”) – which is backed up by hard-nosed lesson judgements.

This was bolstered by our effective management of data and our concentration upon the things that mattered and that we could control, like controlled assessments (I think we are all agreed that is a dirty term we will be happy to be rid of soon enough). What struck me, particularly having read John Hattie’s ‘Visible Learning for Teachers‘, was that we had some experienced ‘intuitions‘ about why we did well, some hard data and some soft data, all leading to the same conclusions, but that to continue to replicate that success we needed to be much more systematic with our evaluations and our evidence.

I wanted us to focus even more closely on what happened in the good lessons that made them consistently good – after all, that is the point of what we are trying to do, isn’t it – to get better at teaching. I wanted to explicitly know this so that regardless of what assessment models or curricular systems are imposed upon us in the coming year, or even future years, we could still teach great lessons consistently: essentially “keeping the main thing the main thing“.

We also reflected upon the truth that there is no ‘one size fits all‘ formula for good teaching, but that did not stop us analysing the evidence of high-impact strategies. Knowing which strategies and interventions worked best would crystallise the answers we need. It takes effort, but done properly the rewards are huge (I don’t underestimate such effort when we are all pushed to the limits to do the job well – perhaps schools should have their own ‘Delivery Units‘ to do the job across the school?).

I therefore wanted to make a concerted attempt to think with both the heart and the head (or the ‘Elephant’ and the ‘Rider’, to unite my previous post) and to source the best evidence possible for great teaching. That included the epic meta-analyses of John Hattie and his team. I am in no doubt of the efficacy of this type of research; it is essential in medical research and it does add value to educational debate. I was, and am, however, circumspect at the same time.

Instinctively I asked how individual teaching strategies could be fairly judged within school contexts when a whole host of other factors are at work simultaneously – a complex web even the sharpest of minds would struggle to delineate. For example, how could the success of ‘questioning‘ be fairly judged when, at the same time, ‘teacher subject knowledge‘, ‘class size‘ and a whole host of other effect sizes are at work? I therefore surmised that such data is imperfect. Yet, the more I read, the more I could not escape the fact that this was still the best way to source the answers about what made good teaching and good interventions effective. What is key is that the sheer scale and selectivity of the trials improved the quality and accuracy of the data, and continues to do so.
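As a rough illustration of what an ‘effect size’ measures in meta-analyses like Hattie’s, here is a minimal sketch of Cohen’s d – the standardised difference between the mean outcomes of an intervention group and a comparison group. The pupil scores below are invented purely for illustration:

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardised mean difference: (mean_t - mean_c) / pooled standard deviation."""
    n_t, n_c = len(treatment), len(control)
    # Pool the two sample variances, weighted by degrees of freedom.
    pooled_sd = (((n_t - 1) * stdev(treatment) ** 2 +
                  (n_c - 1) * stdev(control) ** 2) / (n_t + n_c - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical test scores: one class trialling an intervention, one not.
intervention = [62, 70, 68, 74, 66, 71]
comparison = [60, 64, 63, 67, 61, 65]
print(round(cohens_d(intervention, comparison), 2))
```

Note that a single number like this says nothing about the confounds discussed above – it simply summarises the gap between two groups in units of spread, which is what allows disparate studies to be pooled at all.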

The ethical basis of ‘testing’ on children is a valid objection to such an evidence-based approach. The idea of a ‘control group‘ not receiving an intervention instinctively sits uneasily with me. In practice, with a technology trial for example, it is hard to deny a class the right to use technology which may be most appropriate for a task. However, when I thought about it, I considered that many interventions can actually have a negative effect, or be a distraction, so it was a case, once more, of thinking differently – thinking more scientifically and less emotionally. I am simply not in my job to be unethical; quite the reverse.

What would be unethical would be not to undertake RCTs (randomised controlled trials) and instead to base our teaching, our interventions, indeed our entire system, upon a hunch, on personal preference, or solely on ideology. When we move educational policy at break-neck speed, we are likely to take unnecessary risks, which I deem wholly unethical.
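The core mechanism of an RCT – random allocation to intervention and control arms, so that the two groups differ only by chance – can be sketched in a few lines. The cohort names and seed below are purely illustrative:

```python
import random

def randomise(pupils, seed=2024):
    """Shuffle the cohort and split it into two equally sized arms."""
    rng = random.Random(seed)  # fixed seed so the allocation is reproducible
    shuffled = pupils[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

cohort = [f"pupil_{i:02d}" for i in range(30)]
intervention, control = randomise(cohort)
print(len(intervention), len(control))  # 15 15
```

Because allocation is random rather than chosen, neither arm is systematically denied something known to work; the whole point of the trial is that we do not yet know.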

What we cannot do is simply rely upon the ‘noble myth‘ (described by Plato as well-meaning but flawed reasoning, perpetuated as comfort for the greater good) of our intuitions alone, however experienced we are. Our well-meaning but flawed emotional response to ‘what has always worked‘ for us is always going to be too narrow in scope, too bound up with our own emotional bias, to be sufficient.

We need to focus in on the practice and the pedagogy, which often means stepping back from the personal. Yes, we are all emotional beings (thankfully so – the best of us often being those most in touch with their emotional intuition), who teach with our head and heart, but we must reflect and make adjustments and plan improvements with as scientific an approach as possible if we are to properly define what is good teaching. As an English teacher, this strikes against some of my natural instincts, stemming from the Romantic ideal of individual genius and the power of emotional intuition to find ‘the answer‘.

Of course, any one source of evidence is too narrow if we have the opportunity to source more evidence from a variety of methods. The best answers, as I have stated, are to be found when we have the greatest breadth of evidence: including hard data (ideally from rigorous controlled trials), but also soft data – like student voice and teacher feedback – and our personal and professional intuition as experienced experts – those aforementioned ‘noble myths‘.

What is crucial is that policy makers, school leaders, subject leaders and teachers do not unthinkingly implement changes based upon statistical evidence, provided by the likes of Hattie, without taking full account of the unique context of their country, their school and their students. To do so would be foolhardy, and it is a valid concern levelled at evidence-led policy that we must address.

School leaders, for example, will be sold snake oil by gurus looking to sell their foolproof educational wares based on what they present as the most rigorous of evidence (of course that data can be flawed and manipulated). Some leaders are simply looking for a quick fix to their problems, when quick fixes don’t exist in schools! Therefore we must question the methodology behind the evidence and weigh up the factors impacting upon it – again, taking a more scientific approach.

So what is to be done? In our faculty it is about trialling strategies and becoming more systematic about that trialling. It is about sharing our good practice and our good pedagogy, but crucially then evaluating its impact in a more rigorous fashion. In all honesty, our current evidence does not stand up to the scrutiny I outline above. Therefore it is important that we do the hard work to make this so. Focusing with utter consistency upon the pedagogy and the practice…and the evidence of impact.

This model of sourcing better evidence to justify change needs to be replicated at school level and even at national level. This is happening, but typically away from view. The Education Endowment Foundation is currently running trials across a thousand schools in Britain to source evidence to direct policy – read this fascinating research on school interventions, for instance: the Teaching and Learning Toolkit. The debate is happening and policy people in Whitehall are listening: listen to Ben Goldacre’s brilliant analysis here on how evidence-led policy is being undertaken in Whitehall (the education-focused section begins around the twenty-seven-minute mark – including debate about phonics teaching).

We must challenge the many ‘noble myths‘ that attend our educational discourse and source as much high-quality evidence of impact as we can.
