Let me first say that I am a big fan of educational research and of undertaking trials in schools. Of course, we are not doctors and surgeons dealing with the clear boundaries of sickness and health, or the obvious dichotomy between medicine and placebo. We can, however, design far more robust trials that help us work out what works best in our classrooms, and I think we have a moral imperative to do so. The pursuit of evidence in education could be better, and I’m hopeful we can make this happen.
I think that even the process of undertaking controlled trials in the classroom, attempting to isolate one variable from a fistful of complex variables, has value regardless of the results: we learn much from the process itself. It makes us reflect upon what we do with acute scrutiny. It can bring together the expertise of researchers and teacher practitioners. Yet attempting to translate educational evidence from one context to our own unique school environment should be done with wise circumspection.
I helped design and undertake a small matched trial in our English department last year – see here – and the process and the findings were fascinating. In running the trial I learnt about some of the many difficulties that attend the process. Having done some research myself, I looked at the evidence of Hattie’s ‘Visible Learning’ with new, more critical eyes.
When I view evidence now, with a working knowledge of how (and, importantly, by whom) such evidence is gained, I am wary of the ease with which evidence can be skewed or manipulated: lies, damned lies and statistics, and all that.
Only this evening I had a cautionary experience with the ‘evidence’. Daisy Christodoulou, Research Director at Ark Schools, highlighted the new findings, reported by the Education Endowment Foundation, from a broad trial of Debra Myhill’s ‘Grammar for Writing’ programme – see here.
This evidence and the attendant findings ran directly counter to the trial evidence offered by Myhill herself and promoted heavily by Pearson in their promotional material – see here.
Now, I must admit to my ignorance about the finer details of each research trial undertaken, but the problem is clear. We have one approach to pedagogy with wide-scale evidence from two large research undertakings that seemingly prove, or at least report, very different conclusions about the success, or otherwise, of the approach.
What are we as teachers to think?
Well, we should be circumspect. We should treat evidence bestowed upon us with a critical eye, without being so beholden to our personal biases as to ignore any value we may derive from such evidence. We should question the ‘how’, the ‘why’ and the ‘by whom’ of the evidence we encounter.
My overwhelming feeling is that we should question research evidence, but that we have all the more reason to undertake our own quality research, in our own context, to prove our own practice. No matter how controlled the trial, no other school can perfectly match our unique and complex school-related variables. We should engage with this imperfect evidence and create our own.