Ask a group of teachers in the staffroom how research evidence can help them teach and you may well receive a rather sceptical response. I can hear the complaints now: ‘more bloody work’… ‘box ticking’… ‘don’t give me ‘action research’, mate’… ‘another shiny new toy’, and worse. No doubt, given our fatigue with the merry-go-round of curricular change and innovation after innovation, you can understand the doubts and the cynicism.
Added to this, research evidence risks being tainted by the selective bias of edu-salesmen looking to make a quick buck with a glossy badge, or politicians looking to adroitly spin an argument. Cherry-picking of research evidence is, of course, rife. Who can blame teachers who are tired and wary of being sold the next ‘outstanding’ evidence-based answer to all their planning and marking and pains and stresses?
In a speech at ResearchEd in 2014, Dylan Wiliam opened his session, to a room full of people interested in research evidence, with the stark title: “Why education will never be a research-based profession.” Wiliam set out the difficulties of deploying research evidence and establishing causation for pretty much anything in the classroom. He outlined the limitations of research and the factors that skew findings, such as “Studies with younger children will produce larger effect size estimates” and “Studies with restricted populations (e.g., children with special needs, gifted students) will produce larger effect size estimates.”
Rather than pack up all of this evidence business and close the classroom door, Wiliam offers the observation that teachers should be undertaking ‘disciplined inquiry’, not research:
(Slide taken from Dylan Wiliam’s freely shared PPT presentation to be found here)
I’m sure ‘disciplined inquiry‘ could be mangled to mean what we want it to mean, but I very much like Paul Hood’s helpful breakdown of the characteristics of disciplined inquiry:
- Meaningful topics are addressed
- Systematic, clearly described procedures are employed and described so that readers can follow the logic of the study and assess the validity of the study’s conclusion
- There is sensitivity to the errors that are associated with the methods employed and efforts are made to control the errors or consider how they influence the results
- Empirical verification and sound logic are valued; and
- Plausible alternative explanations are considered
I like these characteristics because they encourage teachers to apply their practical wisdom – recommended by Wiliam – but in a more disciplined way than we can typically manage in our busy working week, by using ‘logic‘ and considering ‘plausible alternatives‘.
Can we not then deploy this logic to our evaluation and implementation of new school initiatives and of our teaching practice? By asking better questions of what we do in schools, and taking the time to better evaluate and consider our unique school context, we add greater discipline to our efforts. Perhaps we are not talking so much about teachers acting as pseudo-researchers, but instead we are advocating a disciplined approach to our thinking and actions as teachers.
Perhaps Wiliam would disagree, but I think you could be more ‘sensitive to the errors’ of school implementation by selectively deploying some ‘scientific methods’ when piloting new initiatives: such as asking a precise question about what we want to change; carefully defining a pre-test and post-test; considering having a ‘treatment’ and ‘control group’ where appropriate, and more. We shouldn’t mistake our inquiry for the rigour of physics, but exercising some controls over our practice can prove useful to teachers and school leaders looking to make the best decisions possible for their students in a devilishly complex context. Such approaches are limited, but they are much better than ‘business as usual’ in schools, where we do little more than ‘buy and try’.
We can fend off the limitations of ‘we always do it this way‘ or ‘OFSTED says X‘ by reading research studies and literature reviews, and by using tools like the EEF Toolkit, as long as we develop supports to judge these sources effectively. We can read the findings of cognitive science and psychology, alongside reading experts and exploring their cited sources (dare I say it – hidden in parentheses – we could even read the blogs of teachers). By triangulating these different sources, we can likely enhance our sensitivity to our inevitable errors still further.
Maybe then, we should stop calling it research, with all the connotations that attend that term, and instead use ‘evidence informed practice‘ and ‘disciplined inquiry’? Beyond the wrangle over labels, perhaps the most important thing is to simply think much harder about our implementation and evaluation. If we agree with Dylan Wiliam that being an ‘evidence based profession’ is problematic, perhaps we can be a better ‘evidence informed’ profession?
Want to be better evidence informed, or simply find out more? Then ResearchEd York is for you! On the 9th of July, a range of great speakers, including Baroness Estelle Morris, John Tomsett, Philippa Cordingley, Tom Sherrington, and many more, will talk evidence informed practice… ok – research! Get your ticket HERE.