Sunday / December 22

Clearing the Lens: Addressing the Criticisms of the VISIBLE LEARNING Research

While the Visible Learning research has spread all over the world and is transforming schools and the way we teach, we can’t be blind to the questions and criticisms that have also arisen from that spotlight.

I’d like to take this opportunity to address several of the criticisms we see most often below and, hopefully, clear the lens a bit, as well as open the floor to any other questions people might have. You can submit questions for me here. I am also writing a white paper, to be released in the next few months, that provides much more detail about common criticisms of the Visible Learning research.

Criticism #1. You keep adding more meta-analyses and more influences.

More need to be added, because this is the nature of research—we continually question, query, replicate, and validate previous studies. It’s exciting to us that researchers are still finding fascinating influences to investigate and thus they can be added to the database.

Furthermore, I believe science progresses via falsifiability. If the next meta-analyses question the underlying Visible Learning model, I want to be the first to acknowledge this. So far, though, every meta-analysis added provides confirmation, not disconfirmation.

Criticism #2. The Visible Learning research includes old studies.

Yes, some of the studies are “historical”—i.e., they report past findings. That is what “re-search” means. The aim is to learn from what has occurred to better inform where to go, in the same way that checking your rear-view mirrors while driving helps you move forward safely.

To ignore the past permits the opinions, fads, beliefs, and desires that were once disproven to rise again. Our mission—educating students—demands more and, at a minimum, we should not repeat our past errors. We need to be able to optimize the highest-probability interventions. We also have a lot to learn about what has worked best in the past, and we should endeavor to scale this up for the future.

Criticism #3. It’s wrong to focus solely on influences with the highest effect sizes and leave out all the lows.

Agreed. Some of the low effects may be critical, and remember that the research and the effect sizes are, first and foremost, a summary, so there may be exceptions.

Take Homework (0.29 ES) as an example. Meta-analysis has shown that the amount of homework a student does in primary school has no effect on student achievement or progress (0.00 ES). This research isn’t saying that there should be no homework, but there is much opportunity to improve the effects of homework. If schools are going to give homework, then the focus should be on how much and what type is given, and whether it’s really of any use or just busy work.
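
To make the arithmetic behind such summary figures concrete, here is a minimal sketch in Python of how standardized mean differences from individual studies can be averaged into a single effect size. The study values are invented for illustration, and the simple unweighted average is an assumption of the sketch; this is not the actual Visible Learning data or procedure.

```python
# Minimal sketch: averaging effect sizes across studies.
# The per-study values below are invented for illustration only;
# they are not the actual homework studies behind the figures quoted above.

def cohens_d(mean_treatment, mean_control, pooled_sd):
    """Standardized mean difference (Cohen's d)."""
    return (mean_treatment - mean_control) / pooled_sd

# Hypothetical per-study summaries: (treatment mean, control mean, pooled SD)
studies = [
    (52.0, 50.0, 10.0),   # d = 0.20
    (61.5, 58.0, 10.0),   # d = 0.35
    (48.0, 48.0, 12.0),   # d = 0.00 (e.g., a primary-school sample)
]

effect_sizes = [cohens_d(t, c, sd) for t, c, sd in studies]
average_d = sum(effect_sizes) / len(effect_sizes)

print([round(d, 2) for d in effect_sizes])  # [0.2, 0.35, 0.0]
print(round(average_d, 2))                  # 0.18
```

A published meta-analysis would normally weight studies (for example, by sample size or inverse variance), but even this bare average shows how a single headline number can conceal a wide spread across age groups and homework types.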

Additionally, it is critical to ask why some effect sizes are so low. One of my interests is exploring some of these—open environments, class size, retention, and subject-matter knowledge in particular. For example, I have been considering why the effects are so low for class size, especially when it should be expected that reduced class size would allow more opportunities for introducing some of the higher effects (Hattie, 2010).

Just because an effect is not >0.40 does not mean it is not worthwhile. It means it may need deeper exploration.

Further, the 0.40 average is across all influences and may not be applicable to a local context. For example, if you are looking at narrower outcomes (e.g., vocabulary knowledge), the effect is likely to be larger than for a wider outcome (e.g., creativity); if you are looking at changes in numeracy in elementary school, the effect is likely to be larger than in high school. It is critical to use the right notion of “average effect size” for the context. Hence the imperative to “Know thy Impact.”
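
As a companion to “Know thy Impact,” here is a small hypothetical sketch of how a teacher or data lead might compute an effect size for their own class from pre- and post-assessment scores. The scores and the simple pre/post formulation are illustrative assumptions, not an official Visible Learning procedure.

```python
# Minimal sketch: an effect size for one class's progress,
# using invented pre/post scores (illustration only).
from statistics import mean, stdev

pre = [42, 55, 48, 60, 51, 47, 58, 44, 53, 49]    # hypothetical pre-test scores
post = [50, 61, 55, 66, 58, 52, 65, 49, 60, 54]   # hypothetical post-test scores

# Standardized mean difference: average gain divided by a pooled spread of scores.
pooled_sd = (stdev(pre) + stdev(post)) / 2
effect_size = (mean(post) - mean(pre)) / pooled_sd

print(round(effect_size, 2))  # about 1.06 for these invented scores
```

The point is not the particular number but the comparison: the benchmark it is judged against should match the context (the age of the students, the breadth of the outcome, the time span), rather than applying 0.40 mechanically.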

Criticism #4. The research excludes qualitative or mixed-methods studies that might support the use of some influences that are ranked lower (e.g., class size).

Yes, qualitative studies were not included because their findings can’t be quantified in a way that can be used in a meta-analysis.

However, one of the most exciting developments since Visible Learning was published is the emergence and growth of meta-syntheses of qualitative studies, and I look forward to reading a synthesis of these studies that is similar to the Visible Learning work.

Criticism #5. Visible Learning seems to ignore the debate about content or what subject matter is worth learning.

Visible Learning is not about the aims of education, nor a treatise of what is worth learning. I’ve written on these topics elsewhere—and they are critical topics—but the purpose of Visible Learning was specifically to understand what works best in supporting student achievement.

Criticism #6. Visible Learning is only focused on achievement, but this is not the only thing school is about.

I actually start my book, Visible Learning, by saying that “of course, there are many outcomes of schooling, such as attitudes, physical outcomes, belongingness, respect, citizenship, and the love of learning. This book focuses on student achievement, and that is a limitation of this review” (p. 6).

Others are now synthesizing effects relating to motivation, interest, and affect. We have also recently synthesized “How We Learn” (Hattie & Donoghue, 2016). I wish others would synthesize health and physical outcomes, and the importance of school being an inviting place that students want to return to. I am delighted when this more rounded view of the many outcomes of schooling is reviewed. Achievement, though, remains central to the outcomes of schooling, and that’s why it was my focus.


Got questions for John Hattie about Visible Learning? Submit your question here and it might be answered in another blog post!

Written by

Dr. John Hattie has been Professor of Education and Director of the Melbourne Education Research Institute at the University of Melbourne, Australia, since March 2011. He was previously Professor of Education at the University of Auckland. His research interests are based on applying measurement models to education problems. He is president of the International Test Commission, has served as an advisor to various Ministers, chaired the NZ performance-based research fund, and in the last Queen’s Birthday honours was appointed to the New Zealand Order of Merit for services to education. He is a cricket umpire and coach, enjoys being a Dad to his young men, is besotted with his dogs, and moved with his wife when she attained a promotion in Melbourne. Learn more about his research at www.corwin.com/visiblelearning.

Latest comments

  • Something that may be of use in updating the research on class size is to study the studies for bias, context, and reliability. As a teacher, I have on-the-ground experience that tells me class size can impact teaching and learning in negative ways. If push comes to shove, my experience is more valuable to me as a teacher than any research that tells me I’m wrong about class size, because I see it and I live it over and over again. This is not to say that I’m going to toss my hands up in the air, say that it’s because I have a large class, and just give up. Absolutely not! However, I do know, again from experiencing large, medium, and small class sizes, that class size does matter. Furthermore, a complete reliance on quantitative studies really does not provide a whole picture of “what works” in education.

    • Hi Elisa, We asked John Hattie, and he provided the following:
      I agree that all appearances and reactions are that smaller classes make a difference – although the research is reasonably systematic that these differences, while positive, are very small. And relative to spending our educational resources on one of the more expensive interventions, we might be wise to invest elsewhere (provided we are given this option). There is evidence that a major reason why the effects are small is that teachers do not change how they teach to optimize the opportunities of small class sizes (see below for references). Also, class size is often a proxy – for parents it is a proxy, as they believe smaller classes mean more individual attention for their child (but there is no evidence this happens). For principals it is a proxy for staff-student ratios, as this is a major factor in most school funding models. For teachers, it is a proxy for removing the most disruptive students – and these disruptive students do take much time and attention and can detract from other students’ learning. Let me ask you – would you rather have 40 students who want to be in class, or 20 who do not? Would you rather I came and randomly removed 10-12 students, or that you picked the 5 you do not want in class? Yes, there is more to the world than quantitative studies, but they can make us think, question our assumptions, and should be part of the story.

      I have explored these ideas at:
      Hattie, J.A.C. (2016). The right question in the debates about class size: Why is the (positive) effect so small? In P. Blatchford, K.W. Chan, M. Galton, K.C. Lai, & J.C.L. Lee (Eds.), Class Size: Eastern and Western Perspectives (pp. 105-118). London: Routledge.
      Hattie, J.A.C. (2007). The paradox of reducing class size and improved learning outcomes. International Journal of Education, 42, 387-425.
