Tuesday / March 19

John Hattie Answers Your Feedback Questions Part 1

Professor John Hattie’s landmark Visible Learning research concluded that effective feedback, combined with effective instruction, improves the rate of learning by a factor of 2! During his webinar a few weeks ago, John Hattie explained how to distinguish between effective and ineffective feedback, and shed light on crucial topics such as who, when, where, and how to direct feedback to and from students to increase learning.

If you were there, you’ll remember him asking attendees to send in their questions about feedback. Here are his promised responses in full Professor Hattie fashion!

Don’t see your question here? Keep an eye out for Part 2 in the coming weeks.

Will you share your insights about co-creating success criteria with teachers, in a similar fashion to co-creating success criteria with students?

In my new book with Shirley Clarke, Visible Learning: Feedback, we talk about constructing two forms of success criteria for each lesson (surface/deep, or rules/tools) and note the benefits of co-creating success criteria with students:

  • students become more independent;
  • students have more ownership over their learning and ongoing evaluation;
  • there is higher achievement when students have seen good examples and can follow or choose from the success criteria they have generated;
  • older students can teach younger students more effectively;
  • higher achievers can teach lower achievers more effectively; and
  • teachers have greater assurance that the students understand the criteria.

And we discuss 11 strategies used by teachers to co-construct criteria (more details in the book):

  1. Showing excellent and different examples of the same skill, either in written form or as a finished product, and asking, ‘What features can you identify in these examples?’
  2. Demonstrating a technique or skill (possibly projected if, for instance, drawing the stages of a line graph), stopping after each step and asking, ‘What did I just do?’
  3. Demonstrating good and bad, or showing good and bad examples of old student work
  4. Doing it wrong
  5. Showing a wrong example
  6. Working through it
  7. Retrospective co-construction
  8. Incomplete surprise letter or invitation
  9. Jigsaw the pieces
  10. Reordering given success criteria after practical experience
  11. Eavesdropping talk partners (good when you think students will probably already know something about this, such as the elements of a newspaper article)

What are strategies teachers can use to build student capacity for receiving feedback?

Teach them to listen, ask them what they understood by the feedback, and ask them “In light of what I have provided you, what would you best do next?” You can also ask them to write a brief action plan in light of the feedback – focusing on how they understand your feedback. If they do not understand your feedback, then change how you give it.

We are doing an implementation study of two school districts. Do you have some survey items we could use to coordinate data collection?

Yes, www.visiblelearningplus.com has suggestions for various tools – starting with a needs analysis across the school to get a diagnosis. That is the first necessary step to improvement (too many interventions are not based on a diagnosis, and then we wonder why they have no impact!). Other tools cover collecting achievement data over time, mindsets, and so on. There are also plenty of free instruments, but of course, quality counts here.

How do we ensure that all students are receiving feedback? What would you suggest that fosters dialogue?

Shut up and listen! Create situations for genuine classroom discussion, hear students ask and answer questions, check out the Paideia methods, and listen, listen, listen.

I would like to know: do you use effect sizes more with group results or with individual students’ results?

You need groups to obtain the necessary statistics, and then you can look at individuals – but care is needed, as the statistics should be more “confirming” than validating – that is, they should help confirm or question your interpretative judgements, not replace them.
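
For readers who want to see what the group-level calculation looks like, here is a minimal sketch in Python. The formula used (change in the class mean divided by the average of the pre and post standard deviations) and the scores are illustrative assumptions only, not the exact Visible Learning plus calculation – check the tools on www.visiblelearningplus.com for the recommended formulation. The point it illustrates is that the spread in the denominator comes from the group, which is why you need the group statistics before you can sensibly interpret any one student’s progress.

    from statistics import mean, stdev

    def class_effect_size(pre_scores, post_scores):
        """Progress effect size for a group between two assessments.

        Divides the change in the class mean by the average of the two
        standard deviations -- one common formulation, used here only
        as an illustration.
        """
        spread = (stdev(pre_scores) + stdev(post_scores)) / 2
        return (mean(post_scores) - mean(pre_scores)) / spread

    # Illustrative (made-up) scores for a class of ten students, out of 100.
    pre = [42, 55, 61, 48, 70, 53, 66, 59, 45, 62]
    post = [55, 63, 72, 60, 78, 61, 74, 70, 58, 69]
    print(round(class_effect_size(pre, post), 2))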

It is very important to follow up on the feedback provided to students. If you don’t follow up, what was the point?

Yes indeed.

Does it matter if the feedback you provide comes 2-3 hours after a lesson (e.g., video/audio feedback on a task, recorded by the teacher after school)? Is delayed feedback still better than none?

Yes, there can be many instances when delayed feedback is more powerful than immediate. Sometimes in the hustle and bustle of the moment we do not “hear” feedback or focus on the wrong part of the feedback information, so delay can sort some of this out.

So many of these ideas are in direct alignment with Montessori philosophy. In an authentic Montessori environment, I have seen these strategies, and students thrive. I wish all teachers learned to teach with the philosophies she taught; based on the strategies you suggest, they are implicit in authentic Montessori teaching. I wonder why her methods are not discussed in much of the educational research. I would love to study it more!

Yes, I see many parallels, but then wonder why there is so little research on the Montessori impact. I gave a keynote at an Australian Montessori conference and asked this question, but nothing much has happened – the religious-like claims get in the way of being convincing for some of us.

Can you also include the link for the other webinar John mentioned?

You can watch the recording of Jenni Donohoo’s webinar on Fostering Collective Teacher Efficacy here. Also, be on the lookout via the Visible Learning Twitter or www.corwin.com/webinars for my next webinar after the first of the year.

What do you think about Feeding Forward in flexible groups, i.e., grouping those with common errors and giving them the same “feed forward” activity? (Note: We have large class sizes, 40+ students.)

There is little evidence that grouping by ability makes much difference, and there are many great reasons to never group this way. So yes, you can use flexible grouping, but the key is the nature of the task you assign (I am a great fan of the jigsaw method).

Just FYI – the “nature of feedback” slide – the Singapore column added up to way more than 100%. Can that be fixed before it’s sent out? It’d make it easier to understand the overall point – thank you!

The correct figures were: task 59%, process 25%, regulation 2%, and self 14% – which sum to 100%.

One of the studies I would love to complete is about how relationships between students and teachers impact the variability of how feedback is received. Are any such studies happening?

Yes, there are many such studies – the relationship builds the trust needed for errors, and sometimes negative (I do not mean punishing, but negative) feedback, to be heard. We are more likely to hear disconfirming evidence from people we trust, who will not demean or judge us, and who will help us. Relationships (with and between students) are the bank to be built up and drawn on in these circumstances.

Got questions for John Hattie about Visible Learning? Submit your question here and it might be answered in another blog post!


Written by

Professor John Hattie is an award-winning education researcher and best-selling author with nearly 30 years of experience examining what works best in student learning and achievement. His research, better known as Visible Learning, is the culmination of nearly 30 years spent synthesizing more than 1,500 meta-analyses comprising more than 90,000 studies involving over 300 million students around the world. He has presented and keynoted at over 350 international conferences and has received numerous recognitions for his contributions to education. His notable publications include Visible Learning, Visible Learning for Teachers, Visible Learning and the Science of How We Learn, Visible Learning for Mathematics, Grades K-12, and, most recently, 10 Mindframes for Visible Learning.
