Focusing on Feedback

In The Confident Teacher by Alex Quigley

And the evidence says… Feedback is the answer!

It is a truth universally acknowledged, that a teacher wanting to help their students learn effectively must give lots of feedback. Marking, lots of it, should get to the root of many of our problems. We just need more of it, right? This easy assumption neglects a fundamental flaw in our fail-safe plan: we don’t really know what we mean by effective feedback, nor are we very clear about how it works best.

The power of feedback

We can identify the impact of effective feedback by surveying the mass of available evidence. The power of feedback is heralded in the Education Endowment Foundation (EEF) Teaching & Learning Toolkit – a usable summary of the best of educational research all aggregated into one handy catalogue. It proudly sits atop their graph of school interventions – promising a great deal of progress in learning.

Such strong evidence signals the efficacy of deploying feedback, but crucially, it still goes no further in telling us how best to do it. The best definition and summary of feedback, for me, is from Hattie and Timperley (2007), in their paper entitled ‘The Power of Feedback’. The whole complex topic is distilled into three practical questions that must be answered by the student: ‘Where am I going?’, ‘How am I going?’ and ‘Where to next?’ They go on to explain how feedback is so much more than “solely about correctness”.

Dylan Wiliam is the long-time uncrowned feedback Tsar. In his paper Keeping learning on track: Formative assessment and the regulation of learning, he too focuses upon what is most important: what our students are thinking and doing. Wiliam advocates “sharing criteria with learners and student self-assessment” to help our students be clear where they need to go and to help them in “monitoring their own progress towards that goal”. Wiliam is a fan of good peer and self-feedback, and you should be too.

Perhaps most surprisingly, Wiliam also gives a stark warning about the dangers of giving feedback. He cites that in “two out of every five carefully-controlled scientific studies, giving people feedback on their performance made their performance worse than if they were given no feedback on their performance at all!”

The dangers appear to lie when we give feedback that encourages comparison with others. Grading or scoring students’ work can actually distract from the useful diagnostic feedback we give. In short, feedback can easily stop students thinking for themselves, or worse, it can threaten their fragile ego and cloud their judgment, effectively blocking learning.

We should loudly eschew the accountability-driven clamour for the teacher marking everything that moves and instead focus on what our students are thinking and doing with feedback. Indeed, we can too easily consider written feedback as the job of the teacher, when it is what the student does with it that is what really matters. Yes, teachers can plan to give students DIRT (Dedicated Improvement and Reflection Time), but the onus should be on the students to do the hard thinking.

The pot of gold at the end of the rainbow is high impact written feedback, well embedded into the classroom routine, so that students think hard and engage with the feedback. Not only that, teachers won’t be crushed with the burden of piles of unnecessary marking.


(First published in a similar draft in the TES)


  1. Pingback: Is Successful | Pearltrees

  2. Brilliant post! Your suggestion of embedding written feedback into the classroom routine itself is really interesting. This should encourage students to, as you said, actively engage with the feedback. Using an interactive whiteboard to demonstrate the process of marking can be a really helpful class activity too, so that the pupils will gain a stronger understanding of the criteria their teachers are looking for in their work – and it’s also interactive!

  3. Pingback: Favorite Edtech and ELT Posts for November – American TESOL Institute

  4. I don’t have a problem with feedback as such – nothing you say about it seems at odds with my practice – but I am worried that what is spoken of by education researchers as ‘evidence’ just isn’t. The EEF toolkit is based on this notion of ‘effect size’, which other researchers seem to say doesn’t measure educational impact at all. Even the big ‘evidence based education’ people like Cheung & Slavin note that effect size varies with sample size and with whether you use a standard test or one a researcher makes up, and Simpson really knocks the EEF toolkit to bits by showing that the effect size is just something which a researcher chooses. Feedback looks like it has a big effect size because it is easy to study, not necessarily because it has a big educational impact.

    1. Author

      Hi Fred, I’d defer to Professor Higgins from Durham University on this. Effect sizes are not a perfect statistical measure, of course.

  5. Pingback: The Feedback 'Collection'
