A Cautionary Tale of Educational Evidence


Let me first say that I am a big fan of educational research and of undertaking trials in schools. Of course, we are not doctors and surgeons dealing with the clear boundaries of sickness and health, or the obvious dichotomy between medicine and placebo. We can, however, design far more robust trials that help us work out what works best in our classrooms, and I think we have a moral imperative to try. The pursuit of evidence in education could be done better, and I’m hopeful we can make that happen.

I think that even undertaking the process of controlled trials in the classroom, and attempting to isolate one variable in a fistful of complex variables, has value regardless of the results: we learn much from the process. It makes us reflect upon what we do with acute scrutiny. It can bring together the expertise of researchers and teacher practitioners. Yet attempting to translate educational evidence from one context to our own unique school environment should be done with wise circumspection.

Last year I helped design and undertake a small matched trial in our English department – see here – and the process and the findings were fascinating. In running the trial I learnt about some of the many difficulties that attend the process. Having done some research myself, I looked at the evidence of Hattie’s ‘Visible Learning’ with new, more critical eyes.

When I view evidence now, with a working knowledge of how (and importantly by whom) such evidence is gained, I am wary of the ease with which evidence can be skewed or manipulated: lies, damned lies and statistics and all that.

Only this evening I had a cautionary experience with the ‘evidence’. Daisy Christodoulou, Research Director at Ark Schools, highlighted the new findings from a broad trial of Debra Myhill’s ‘Grammar for Writing’ programme reported by the Education Endowment Foundation – see here.


This evidence and the attendant findings ran directly counter to the trial evidence offered by Myhill herself and promoted heavily by Pearson in their marketing material – see here:


Now, I must admit to my ignorance about the finer details of each research trial undertaken, but the problem is clear. We have a single pedagogical approach with wide-scale evidence from two large research undertakings that seemingly prove, or at least report, very different conclusions about the success of the approach, or lack thereof.

What are we as teachers to think?

Well, we should be circumspect. We should treat evidence bestowed upon us with a critical eye, without being so beholden to our personal biases as to ignore any value we may derive from such evidence. We should question the ‘how’, the ‘why’ and the ‘by whom’ of the evidence we encounter.

My overwhelming feeling is that we should question research evidence, but that we have even more reason to undertake our own quality research, in our own context, to prove our own practice. No matter how controlled the trial, no other school can perfectly match our unique and complex school-related variables. We should engage with this imperfect evidence and create our own.

13 thoughts on “A Cautionary Tale of Educational Evidence”

  1. After being prompted to read the research from Daisy Christodoulou I replied to her with this comment:

    It makes for fascinating reading. I thought the early publicity on the Myhill research and the huge effect size in favour of the approach was grossly exaggerated. This study levels the playing field, to say the least!
    I do think their lesson schemes have some really useful approaches. Personally, revealing all my biases as well as my school experience, I think that the Myhill approach is useful, but not comprehensive enough. I also think a decontextualised drilling approach helps with writing automaticity, but it can be done badly and can be wholly unmemorable in the wrong hands. I think the grammar for writing approach is actually most helpful for the reading skills our students need to exhibit at GCSE and A level. Drawing out the grammar features from a text may well be most useful for reading assessments rather than for writing development. There is some research on that!
    We are therefore creating our own approach. We are developing scheme-specific assessment objectives for SPaG – with grammar for writing style elements that link the texts they are studying to essential steps in their grammar knowledge. We will also identify escalating stages of appropriate grammar study for each year of KS3, with drilling where appropriate. For example, our opening Year 7 scheme is effectively a language-change SOL, so it offers the opportunity to engage with Old English etc. and to analyse morphology and orthography in an interesting and memorable way, but we will also embed tasks into the SOL that ensure students become drilled in understanding these patterns.
    Put simply, we are combining some aspects of grammar for writing style approaches with some decontextualised drilling that builds up a core knowledge of grammar. We will have end-of-year tests that can set a high bar and measure progress.
    My personal opinion is that a key threshold concept for our students is understanding syntax and sentence construction. If they can analyse and grasp all the brilliant variations in our rich language and literary history, then that is the core knowledge that will help them to be fluent readers and adept writers. We will repeatedly focus upon the essential construction of a sentence. Incredibly simple and incredibly challenging!
    Thanks for sharing the research. I hope to have a bid accepted for a neuroscience project from the EEF in the coming months. Fingers crossed! I would appreciate any useful resources or research regarding grammar, or anything else really, that you possess or find. I’m always interested in your research and opinions.
    Alex

  2. Dear Alex
    You seem to be arguing that different findings in research mean that it is not very reliable, and that therefore teachers should conduct their own research. I hope that’s not putting words in your mouth?

    In response, I would argue that:
    1. Conducting one’s own research without getting a thorough knowledge of the vast body of literature that already exists is probably futile, certainly inefficient.
    2. Teachers need to be trained, as a standard part of their preparation, to know how to evaluate research, so that they can not only analyse the literature but keep up with new developments.
    3. Research can be conducted using a number of experimental designs, which have varying strengths and weaknesses. An RCT is not always the most useful way of finding out what we want to know.
    4. An analytical approach to our teaching can allow teachers to evaluate impact at a very fine level without engaging in formal or even structured ‘research’.

    1. Hello Horatio,

      That reading wasn’t my intention. It was likely more to do with me intending to keep my blog post writing down to twenty minutes, so I didn’t explore every avenue of the debate – it was more to capture some critical thinking. My view was that different evidence was presented over time, and that this should mean we look critically at research findings for the ‘who’, ‘how’ and ‘why’ etc. of the research. I remember an RCT finding about grammar for writing that was incredibly high. I thought then that the finding was dubious, and the EEF research appeared much more realistic (being independent helped, I’m sure), which showed that we should be critical about the research. I was not aiming to convey that research is therefore meaningless. On the contrary, I think we should conduct research to know the methodology better and thereby become better judges of research.

      1. I agree. Reading Hattie et al. is a great starting point of course. I wholeheartedly value the approach taken by the EEF.
      2. Wholeheartedly agree once more.
      3. Once more, agree!
      4. Yes, most definitely. The reflection required of deliberate practice is essential. Our CPD next year will involve a journal, and most teachers will not be directly engaged in research. Some will, however, and if we can work out some quality designs, that knowledge could drive a really strong critical appraisal of what we do in the classroom.

      Thanks for your response. It appears we are very much in agreement.

  3. I think you have a problem. You are fully committed to research to inform improvement in teaching and learning, but research carried out (or at least designed) by professional educational researchers has been shown to be very difficult to apply. You need to be clear why you think that research carried out in your school will yield results of more robust worth. In effect, you need to conduct research into which ways of conducting research have the best chance of generating genuinely useful results. You will probably need a conceptual framework for research in order even to start to work out how research could be effective. I don’t think this is just about the context in your school being different (which it is), but I wonder if the sorts of things being looked at (in research) are missing the key foci?

    Difficult. But worth grappling with.

    1. Thanks. Yes – my most useful experience was a meeting with Jonathan Sharples, who now works for the EEF, who explored with us many of the issues related to research and how we could connect with the expert researchers to best share our knowledge. I do think if you put the right experts together you can yield a workable and productive conceptual framework. I wholly agree – it is worth grappling with.

  4. Hi Alex,

    Really interesting post. I’m familiar with the original Myhill et al. study from reading the journal articles, and have just had a look through the EEF report. I’d just like to point out a few differences between the studies as the EEF evaluation wasn’t a direct replication.

    Firstly, the original study was three SoWs spread over a school year, whereas the EEF study with Year 6 was only 15 lessons over 4 weeks. So I wonder if this had an impact on the effect sizes.

    The original study was actually a mixed methods RCT – as well as running the trial, the researchers observed lessons and interviewed students and teachers at three points over the year. The qualitative data yielded really interesting insights about students’ metalinguistic understanding – the intervention group were better able to talk about their language choices and the impact on the reader. So it’s a shame that the EEF study wasn’t mixed methods, because interview and observational data could give insights into why, according to the test scores, it didn’t work so well with Year 6 students, who are at quite a different stage in their writing development to Year 8s.

    Also, in the original study teachers were blind to the ‘grammar’ focus of the intervention – it was more about linguistic choices and thinking about the impact on your reader. Given the aversion many people have to ‘grammar’, I’m surprised that the focus wasn’t blind in this study. In the original study, the focus was revealed at the end. The data from the final interview with teachers showed that the majority of teachers talked about grammar in a very negative way – there’s a whole article about this. (Watson, 2013: http://onlinelibrary.wiley.com/doi/10.1111/j.1754-8845.2011.01113.x/abstract) I’m also surprised that in the EEF study each school had one intervention class and one ‘normal’ class, as this could have introduced further bias.

    I think my main point is that while we pay a lot of attention to test scores and effect sizes, they may not tell the whole story. It’s a shame that research evidence isn’t more freely available to teachers, because as you say, it’s important to have the opportunity to weigh up the evidence for yourself and consider how it might work in your particular context.

    Here are the main journal articles on the Myhill et al. (2012) study, unfortunately behind paywalls:
    http://onlinelibrary.wiley.com/doi/10.1111/j.1741-4369.2012.00674.x/abstract
    http://www.tandfonline.com/doi/abs/10.1080/02671522.2011.637640#.UvyLgpBdWKk

    But you can read the end of project report here:
    http://www.esrc.ac.uk/my-esrc/grants/RES-062-23-0775/read/reports

  5. Pingback: The glamour of grammar | David Didau: The Learning Spy

  6. While I think I’m as suspicious as the next teacher about private companies over-hyping the wondrous effects of their products, programmes and events, I’ve got nothing but respect for Debra Myhill and her work. I honestly can’t see how a 4-week intervention can do anything with grammar and writing, and as you point out in one of the comments above, the EEF evaluation was very different. I therefore can’t see how the EEF study can be compared directly – and in some places so critically – to Myhill’s original study.

    I’m a pragmatist when it comes to grammar teaching, and these days most of what I do is at A level with some occasional KS3/KS4 work. We generally have students coming to us at AS who know very little grammar (unless they’ve studied a foreign language) and have little grasp of how to analyse language using grammar frameworks – and these are students who have been through the literacy hour at primary school, done decontextualised grammar lessons at KS2, and then rarely touched grammar again (with a few exceptions in schools where English teachers – often those who have taught A level English Language – do fantastic language work).

    What appeals about the Myhill approach is that it uses grammar to develop a grasp of genre, and builds on the functions of grammar in a way that actually *means* something, rather than just the dry labelling of parts. Realistically, a 4-week intervention is hardly going to be able to scratch the surface of those areas of language analysis.

  7. Just noticed it was Natalie who had queried the comparison of DM’s research with the EEF report, so I should have made that clearer, sorry!

  8. The essential point – that educational research is difficult to carry to a conclusion for a host of reasons, including ethics, the ‘Hawthorne’ effect and many intervening variables – is true and important. Every time evidence is presented it should be studied in detail, for when it is, the conclusion ‘It is so’ often becomes, more accurately, ‘It probably is so – but take it on board in the context of what else you know.’

  9. Pingback: A Cautionary Tale of Evidence in Education - Part 2 | HuntingEnglish

  10. Pingback: This much I know about…the trouble with educational research | johntomsett
