What makes research evidence useful for teachers?

In Educational Research, Evidence in Education by Alex Quigley

(This blog first appeared on the Huntington Research School blog – take a look HERE for more. You can also sign up to the monthly newsletter HERE.)


January is a time for new resolutions and no little resolve. Join the gym, read some books, mark the schoolwork on time. For the busy teacher, perhaps there is the resolution to read some research evidence, making some positive changes along the way?

Only the reality of teachers regularly accessing research evidence is too much like the January gym membership: a couple of appearances, then it is back to the comfort of the settee. Now, perhaps we can feel a little guilty at abandoning our best-laid fitness plans, but few teachers would – or indeed should – feel any guilt at not digging into research evidence beyond the resolve for a few sporadic explorations.

Until research evidence in education is made more useful to teachers – written for them, with windows of time to access and explore embedded in CPD – it will usually be dropped as quickly as a January gym class.


What makes research useful and useable?

Stuart Kime, Director of Education at Evidence Based Education, recently shared his top five picks for ‘The Best Educational Research of 2017 for Schools’ in Schools Week (of course, this mimics a crucial aspect of making research evidence useable – make it accessible and do some of the finding for teachers). The research that struck us at Huntington was the link to US research, entitled ‘What makes research useful for public school educators?’

The research, by Neal et al., offers a handy shortlist of factors to consider regarding how teachers access, interpret and use (or not) research evidence, based on Everett Rogers’ ‘diffusion of innovation’ theory. Put simply, how do people, like teachers, respond to new innovations or evidence?

  • Relative advantage: “The degree to which an innovation is perceived to be better than the idea it supersedes”
  • Compatibility: “The degree to which an innovation is perceived as being consistent with the existing values, past experiences, and needs of potential adopters”
  • Complexity: “The degree to which an innovation is perceived as difficult to use”
  • Observability: “The degree to which the results of an innovation are visible to others”
  • Trialability: “The degree to which an innovation may be experimented with on a limited basis” (Rogers, 1995)

Now, simply replace the word “innovation” with the phrase ‘research evidence’. We are then offered a handy framework upon which to reflect on the usefulness and usability of research evidence in and for schools.

Let’s take a worked example: ‘setting by ability’ student groupings. The evidence may suggest that mixed attainment groupings are more effective for a wider range of students. Crucially, however, such evidence may not have a relative advantage over ‘setting’, as teachers perceive set groupings to be easier to teach and, in the main, they attract less flak from parents. Is it compatible? Well, ask a few teachers about the ‘setting versus mixed attainment’ debate and you’ll likely receive some strong views on both sides. Indeed, mixed attainment teaching is seen as harder, so without training it is written off. It isn’t easy to trial and change, and so the status quo most often wins out.

We come to an impasse regarding research evidence use and usability. If we do not offer a raft of support factors for teachers to do this well, it will be done badly, or even more likely, not be done at all. Like most resolutions that attend the new calendar year, they become best-laid plans, binned by Valentine’s Day.


Making research evidence useful and used by teachers

So what are the essential support factors for research evidence use? Let’s use Rogers’ framework once more:

  • Relative advantage: What makes our evidence better than, or at least equal to, the views of our respective peers? What is better about this independent evidence? Does this evidence offer us steps for improved practice for students?
  • Compatibility: Does this research challenge long-held teacher beliefs (this may be good, or it may get laughed out of town!)? If there is a mis-match, how do we bridge that gap?
  • Complexity: Is the evidence in an accessible style? Is it written for teachers, or laced with academic jargon for academics to read? Can the theory be translated into classroom action? Are there tools and resources to enact the evidence easily, faithfully and well?
  • Observability: Can I make sense of the evidence in my school context? Are there concrete examples I can observe, use and compare? Can I get to grips with this in CPD time and talk about it?
  • Trialability: Can I translate this evidence into action? Do we find time to plan, practise and reflect on our trials (and tribulations)?

[Figure: Useability hexagon]

Considering these questions is not just something for policymakers or academics thinking about putting evidence into practice. School leaders need to grapple with how teachers interpret, use and diffuse our insights and evidence.

Teachers are a hard-working and well-meaning bunch (we aren’t in it for the big bucks, as is patently obvious), but we are not stupid. We guard our time, and our students, with precious care from wasteful ideas and policies. We will only access, interpret and enact research evidence if it is carefully crafted with our needs and problems in mind. If it is not, then teachers accessing research evidence will go the way of a million gym memberships.


Related Reading:

Educational Research in Practice: Predictors of Use.

Conceptions of evidence use in school districts: mapping the terrain.
