How often have you heard the words “research-based” used in your presence? Are you tired of hearing them? If you often drop those words in a conversation, are you sure you are using them correctly? Have you read the research, or merely looked at the CliffsNotes version of it?
Unfortunately, too many educators promote the idea of “research-based” but only look at the numbers assigned to the research, or lack an understanding of how the research was meant to be used in the classroom.
For examples, we need look no further than Carol Dweck’s recent clarification of the growth mindset (read Carol Dweck’s commentary on the topic here) and Howard Gardner’s efforts to dispel the myths around multiple intelligences. It also happens often with the Visible Learning work of John Hattie, a leading researcher in the field of education.
Visible Learning is about making learning visible to students through the use of learning intentions and success criteria. It is founded on deep research around meta-analysis, a statistical approach that combines the results of many studies and expresses their impact as effect sizes.
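To make the idea of combining studies concrete, here is a minimal illustrative sketch, not Hattie’s actual method or data: each study yields a standardized mean difference (an effect size such as Cohen’s d), and a simple fixed-effect meta-analysis pools those values with a weighted average. All names and numbers below are hypothetical.

```python
# Illustrative sketch of how a meta-analysis pools study results.
# The studies, effect sizes, and weights are hypothetical,
# not drawn from Hattie's dataset.

def cohens_d(mean_treat, mean_ctrl, pooled_sd):
    """Standardized mean difference (Cohen's d) for one study."""
    return (mean_treat - mean_ctrl) / pooled_sd

def combined_effect(effects, weights):
    """Simple fixed-effect weighted average of per-study effect sizes."""
    total_weight = sum(weights)
    return sum(e * w for e, w in zip(effects, weights)) / total_weight

# Three hypothetical studies of the same influence
effects = [0.55, 0.30, 0.45]   # per-study effect sizes
weights = [120, 80, 200]       # e.g. sample sizes used as crude weights

print(round(combined_effect(effects, weights), 3))
```

Real meta-analyses typically weight by the inverse of each study’s variance rather than raw sample size, but the principle is the same: one summary number stands in for many studies, which is exactly why the number alone can mislead.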
Over the years that I wrote the Finding Common Ground blog, I focused many posts on John Hattie and his Visible Learning research. For full disclosure, this is not meant to be a plug for Hattie’s work, nor for mine, as I have worked with him for the last four years. It’s just that I’ve come across a troubling trend when I speak to school and district leaders about his research. Leaders look at the influences with the highest effect sizes and then dictate that teachers should use those, without understanding how the influence looks in action or the research behind it.
The power of Hattie’s research is that he brings so many studies together and can therefore show how much of an impact specific influences can have on learning. However, that is where issues begin to arise, not with Hattie’s work, but with how it is interpreted and then used by school and district leaders and teachers. And it serves as a cautionary tale.
To avoid the pitfalls of misusing research, there are three areas that leaders should examine before they bring another “research-based” initiative to their schools.
- Don’t focus too much on the numbers
In educational research, an effect size of .40 typically represents a year’s worth of growth for a year’s input. Hattie refers to this .40 as the hinge point. All of his 251 influences have an effect size attached to them; some are well above .40 and many are well below it. There are leaders who believe their teachers should focus only on influences above a .40, and some suggest focusing only on those above a .60. After all, the larger the effect size, the bigger the impact. Unfortunately, this is flawed thinking, because it’s not only about the effect size but about how we use the influence. It is the story of how the influences overlap that matters, and this is why Hattie called his model of overlap “Visible Learning.”
Educators need to look at more than just larger effect sizes.
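The hinge-point shortcut described above amounts to sorting a list of influences and keeping only those above .40. A small hypothetical sketch makes plain how little that filter actually tells us; the influence names and effect sizes here are illustrative stand-ins, not Hattie’s published figures.

```python
# Hypothetical influences and effect sizes (illustrative only,
# not Hattie's actual published figures).
influences = {
    "feedback": 0.70,
    "class size": 0.21,
    "homework": 0.29,
    "teacher clarity": 0.75,
}

HINGE_POINT = 0.40  # roughly a year's growth for a year's input

# The filter some leaders apply: keep only "above-hinge" influences.
above_hinge = {name: d for name, d in influences.items() if d > HINGE_POINT}

print(above_hinge)
```

The resulting list says nothing about context or implementation, which is exactly the author’s point: an effect size is an average across many studies and settings, and choosing an influence by its number alone skips the harder question of how it should look in a given classroom.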
- The Shiny New Toy isn’t for everyone…especially if you use it wrong
Too often leaders, whether they use Hattie’s research or not, chase after the shiny new toy that everyone is talking about, or whatever new “research-based” solution is currently in vogue, because they assume it’s going to revolutionize their schools. Unfortunately, they implement the shiny new toy incorrectly, and no revolutionary change results. To implement anything properly, we must understand our current reality, or that shiny new toy will never, ever work. We shouldn’t chase after shiny new toys because they look good on the shelf; we should chase after them because we need them and know how to use them.
When it comes to Hattie’s research, or that of any other educational researcher for that matter, it’s about helping educators understand what works best for them at that moment in time, and making sure we all have a common understanding of how that particular influence should be implemented. After all, we may choose an intervention with a larger effect size and implement it poorly.
- All show, and no go
Leaders tend to call out which influence might work best based on gut feeling, but very few have read the research deeply enough to learn how those influences should work. Regardless of whether an influence has a large effect size, it’s important to understand what the research says about it. This, however, also means that leaders have to focus on implementing a small number of influences over an extended period of time, rather than tossing their favorite influence of the week into the conversation.
In the End
If leaders are going to position themselves as “lead learners” who dictate what works best, they should understand what they dictate before they dictate it.
Peter DeWitt is the author of several books including Collaborative Leadership: 6 Influences That Matter Most (Corwin Press), and the newly released School Climate: Leading With Collective Efficacy.
https://www.edweek.org/ew/articles/2015/09/23/carol-dweck-revisits-the-growth-mindset.html
Trudy / February 4, 2018
Hi Peter,
Maybe your expertise can help me? How careful do we need to be about the dilution of results gained from researching research? If we’re not sure of the original research being analysed (e.g., who funded it, and under what conditions: class size, ability level, ESL, teacher expertise, teacher enthusiasm/burnout, time of year, etc.), how much could be misinterpreted? I appreciate that Hattie’s meta-analysis is a massive contribution to the education field. Something needed to start somewhere. Training teachers to do their own research would also be extremely valuable and provide educators with their own evidence-based research to work from.
Look forward to hearing your thoughts!
Peter DeWitt / February 5, 2018
Hi Trudy.
Great question.
I think we do need to know that information when we are looking at small individual studies, because it might reveal one of the biases in the research.
While I understand the criticisms that have been made of John’s research (e.g., the reliance on large meta-analyses, the averaging of effect sizes, the very large effect sizes, etc.), I appreciate the work John has done to put these meta-analyses together in a way that offers us common themes. For example, in the studies around feedback he offers three levels of instructional feedback, and from over 500 studies on school leadership he offers us a common theme among what constitutes instructional leadership.
I hope that helps answer your question.
Matt / November 25, 2017
Thanks for sharing! This is such an important post as I hear administrators and educators talk about research-based practices and how they should be visible in classrooms. Dweck and Hattie have become commonplace words in professional development sessions and many schools are looking at how to use the “new number one” strategy for growth mindset and visible learning. I don’t dismiss using strategies based on research, although I’m a little disturbed how often the phrase research-based is used and quietly question if the strategy being communicated is applicable in all situations.
Peter DeWitt / November 28, 2017
Thanks Matt. It’s definitely something I have had on my radar as I work with schools and hear from teachers and leaders. It’s easy to grab an influence because it has a high effect size, but John wants us to understand that his influences have many nuances. We should always understand our current reality before we grab the influences and bring them to teachers.