While the Visible Learning research has been spread all over the world and has been transforming schools and the way we teach, we can’t be blind to the questions and criticisms that have also arisen from that spotlight.
I’d like to take this opportunity to address several of the most common criticisms below and, hopefully, clear the lens a bit, as well as remain open to any other questions people might have. You can submit questions for me here. I am also writing a white paper, to be released in the next few months, that provides much more detail about common criticisms of the Visible Learning research.
Criticism #1. You keep adding more meta-analyses and more influences.
More need to be added, because this is the nature of research—we continually question, query, replicate, and validate previous studies. It’s exciting to us that researchers are still finding fascinating influences to investigate and thus they can be added to the database.
Furthermore, I believe science progresses via falsifiability. If the next meta-analyses question the underlying Visible Learning model, I want to be the first to acknowledge this. So far, though, every meta-analysis added provides confirmation, not disconfirmation.
Criticism #2. The Visible Learning research includes old studies.
Yes, some of the studies are “historical”—i.e., they report past findings. That is what “re-search” means. The aim is to learn from what has occurred to better inform where to go, in the same way that checking your rear-view mirrors while driving helps you move forward safely.
To ignore the past permits the opinions, fads, beliefs, and desires that were once disproven to rise again. Our mission—educating students—demands more and, at minimum, we should not repeat our past errors. We need to be able to prioritize the highest-probability interventions. We also have a lot to learn about what has worked best in the past, and we should endeavor to scale this up for the future.
Criticism #3. It’s wrong to focus solely on influences with the highest effect sizes and leave out all the lows.
Agreed. Some of the low effects may be critical, and remember that the research and the effect sizes are, first and foremost, a summary, so there may be exceptions.
Take Homework (0.29 ES) as an example. Meta-analysis has shown that the amount of homework a student does in primary school has no effect on student achievement or progress (.00 ES). This research isn’t saying that there should be no homework, but there is much opportunity to improve the effects of homework. If schools are going to give homework, then the focus should be on how much and what type is given, and whether it’s really of any use or just busy work.
Additionally, it is critical to ask why some effect sizes are so low. One of my interests is exploring some of these—open environments, class size, retention, and subject-matter knowledge in particular. For example, I have been considering why the effects are so low for class size, especially when it should be expected that reduced class size would allow more opportunities for introducing some of the higher effects (Hattie, 2010).
Just because an effect is not >0.40 does not mean it is not worthwhile. It means it may need deeper exploration.
Further, the .4 average is across all influences and may not be applicable to a local context. For example, if you are looking for narrower outcomes (e.g., vocabulary knowledge) it is likely the effect will be larger than if looking for a wider outcome (e.g., creativity); if you are looking at changes in numeracy in elementary school it is likely the effect will be larger than in high school; it is critical to use the right notion of “average effect-size” depending on context. Hence, the imperative to “Know thy Impact.”
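For readers unfamiliar with the metric, the effect sizes discussed throughout this research are standardized mean differences: the difference between group means divided by a pooled standard deviation (commonly Cohen’s d). Below is a minimal sketch of that calculation in Python; the scores are hypothetical and this is only an illustration of the formula, not the actual synthesis pipeline used in Visible Learning.

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardized mean difference (Cohen's d) between two groups."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)  # sample standard deviations
    # Pooled standard deviation across both groups
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical test scores for an intervention group and a comparison group
treatment = [78, 82, 85, 90, 88, 76, 84]
control = [75, 80, 79, 83, 81, 72, 78]
print(round(cohens_d(treatment, control), 2))  # → 1.13
```

An effect of 1.13 here means the intervention group scored, on average, a little over one pooled standard deviation above the comparison group; the 0.40 “hinge point” is an average of such values across many studies, which is why, as noted above, the appropriate benchmark shifts with the outcome and context being measured.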
Criticism #4. The research excludes qualitative or mixed-methods studies that might support the use of some influences that are ranked lower (e.g., class size).
Yes, qualitative studies were not included, because their findings can’t be quantified in a way that allows them to be combined in a meta-analysis.
However, one of the most exciting developments since Visible Learning was published is the emergence and growth of meta-synthesis of qualitative studies, and I look forward to reading a synthesis of these studies that’s similar to the Visible Learning work.
Criticism #5. Visible Learning seems to ignore the debate about content or what subject matter is worth learning.
Visible Learning is not about the aims of education, nor a treatise of what is worth learning. I’ve written on these topics elsewhere—and they are critical topics—but the purpose of Visible Learning was specifically to understand what works best in supporting student achievement.
Criticism #6. Visible Learning is only focused on achievement, but this is not the only thing school is about.
I actually start my book, Visible Learning, by saying that, “of course, there are many outcomes of schooling, such as attitudes, physical outcomes, belongingness, respect, citizenship, and the love of learning. This book focuses on student achievement, and that is a limitation of this review” (p.6).
Others are now synthesizing effects relating to motivation, interest, and affect. We have also recently synthesized “How We Learn” (Hattie & Donoghue, 2016). I wish others would synthesize health and physical outcomes, and the importance of school being an inviting place that students want to return to. I am delighted when this more rounded view of the many outcomes of schooling is reviewed. Achievement, though, remains central to the outcomes of schooling, and that’s why it was my focus.
Got questions for John Hattie about Visible Learning? Submit your question here and it might be answered in another blog post!