Last spring, we gave you the opportunity to submit your burning questions about the Visible Learning research or student achievement to Professor John Hattie, the researcher behind the largest evidence base on K-12 student achievement in the world. I was able to track him down to ask him those questions. This post is an excerpt from our conversation, focusing specifically on questions that dig deeper into effect sizes.
Amanda Boudria: Professor Hattie, thank you so much for talking with me today. I’m going to start with a question submitted by a team leader in Montreal that you probably get a lot. She asks, “Why does inquiry-based learning have an effect size of 0.31 when it is an approach to learning that seems to engage students and teachers so readily in the process of learning?”
John Hattie: We need to be careful when we say we like things because they engage kids in the process of learning. Getting kids to play video games will engage them, too. Sometimes work is just hard, and we have to recognize that and teach kids that it’s just hard. It is a very interesting question why the effect size of inquiry-based learning is so small, and I have spent quite a bit of time over the past few years trying to understand it, because inquiry sounds pretty impressive. It sounds like the kind of thing we should be doing. But I am an evidence-based person, so I look at the evidence to see why the effect of inquiry- and problem-based learning is so low, and it’s only by asking that question that you can start to understand what is happening. It turns out that if you’re learning surface-level information (the content), as opposed to deep learning (the relationships within the content), inquiry-based learning is pretty useless. But if you don’t teach the surface-level information, you’ve got nothing to inquire about. So the major reason, I would argue, that inquiry-based learning goes wrong is that it’s introduced too early. It’s introduced before the students have the ideas. It’s introduced when some kids have enough knowledge to engage in inquiry- and problem-based learning, but other kids are left behind. So one of the arts is knowing when to introduce it. If you’re still trying to get the kids to build up sufficient knowledge, understanding, and vocabulary, that’s the wrong time. Once they have those, it is the right time. So yes, it can work. The reason it comes out very low on the chart is that most teachers introduce it far too early.
AB: That makes sense. Next up, from an educational consultant also in Canada: Can you talk about variation in effect size for the principal in terms of the instructional leader and the transformational leader?
JH: There was a period in history where everything seemed to be good, and all the people said you just need to look at successful business people. Successful business people are transformative leaders. They go about transforming the system. Well, it turns out that we’re a different business. Students are our unit of analysis, not dollars. Don’t you wish it could be that easy? It’s not that easy. The leaders that really make a difference are what Viviane Robinson calls the “instructional leader”. I call it the “high-impact” leader. It doesn’t matter what you call it. It’s the leader who goes into the school and starts the conversation about what impact means.
Do teachers have a common understanding of what a year’s growth for a year’s input means? That’s a difficult conversation, because teachers have very strong beliefs about those things. The difficulty in some schools is that they’re all over the place in terms of what they understand a year’s growth means in music, in history, in physics. A principal needs to be prepared to have that conversation, prepared to question the teachers, and prepared to say, “Let’s look at the evidence you use.” Great leaders are also constantly giving reinforcement. More importantly, when teachers are seeing growth in all their students, they are great teachers, and great leaders find ways to esteem those teachers. And I think that’s what instructional leadership means. Transformation: not good enough. On top of transformational leadership, we need instructional leadership.
AB: This one is actually a question from our very own Ainsley Rose! Several of his clients in the U.S. are interested in calculating the connection between effect size and percentile ranking. Is there a correlation between the two, and if so, what is the means of comparing them?
JH: Yes, that’s a very technical question, so I’m going to give it a very technical answer. Yes, there is a way to do it. Unfortunately, you have to use something that is not easy and not obvious, and I’d express a lot of caution. Percentiles have some major problems. The whole point of percentiles is to compare students to other students. And yes, there is a place for that, I suppose, but what I’m much more interested in with the effect size notion is the learning for each individual student, and when you look at the averages of the effect sizes, you’ve in a sense got a comparison. So yes, you can make a transformation between percentile rankings and effect sizes. It’s not a straightforward one. It’s not an easy one. I’m not even sure it’s a worthwhile one. But yes, you can do it. Much more important, I think, is the beauty of effect sizes: given they are so simple to calculate, they start to give you a sense of the magnitude notion. So yes, it can be done, but it is not the wisest thing to do…And thanks, Ainsley.
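For readers curious about the transformation Hattie alludes to: one common statistical approach (sometimes called Cohen's U3, and not necessarily the exact method Hattie has in mind) treats the effect size as a shift in a standard normal distribution, so the percentile is just the normal CDF evaluated at the effect size. A minimal sketch, assuming normally distributed scores with equal variance in both groups:

```python
import math

def effect_size_to_percentile(d: float) -> float:
    """Percentile of the average student in the treatment group,
    relative to the comparison-group distribution (Cohen's U3).

    Assumes both groups are normally distributed with equal variance.
    """
    # Standard normal CDF, computed via the error function
    return 100 * 0.5 * (1 + math.erf(d / math.sqrt(2)))

# An effect size of 0.0 leaves the average student at the 50th percentile.
print(round(effect_size_to_percentile(0.0)))   # 50
# Hattie's 0.40 "hinge point" moves the average student to about the 66th.
print(round(effect_size_to_percentile(0.40)))  # 66
```

This illustrates Hattie's caution: the conversion is mechanically simple, but it re-expresses individual learning gains as a rank against other students, which is exactly the framing he finds less useful than the magnitude of the effect itself.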
AB: This is a good one, submitted by a teacher in Chippewa, MI: “How can we prioritize which influences to focus on during the school year?”
JH: Well, I think the question of prioritization is really critical. We need to be really careful that we understand our school before approaching this. One of the things we do in our work with schools is a needs assessment, but we have a fancy name now: we call it the Visible Learning Matrix. We go in to get a good sense of what’s working well so we can continue to do those things, and also what kinds of things we want to change. Having a collective understanding of the actual problems we need to work on is critical. But at all times, we need to be very careful not to make changes to things that are really working. So that’s where the prioritization starts. Then, schools and districts can map that to the high-probability effects from the research side of things, so we’re not only doing things that have proven to work well in the school context but also using those high-probability effects for the interventions.
There is still an opportunity to hear from the man behind the research in his live webinar, “The Big Ideas Behind Visible Learning” on Feb 22. Let me know if you’d like us to reserve you a seat. What are your burning questions about the Visible Learning research? Leave us a comment!