The Visible Learning research concluded that feedback has a powerful effect on student learning. During his recent webinar, Professor Hattie asked attendees to send in their questions related to feedback. Read his first post answering your questions here.
What role does self-assessment/self-evaluation have in giving/receiving feedback?
Self-assessment entails a LOT more than just guessing outcomes – it can be taught, it can be powerful, and we have a whole book on this topic (Developing Assessment-Capable Visible Learners, Grades K-12). Teaching students how to evaluate their progress, how close they are to meeting success criteria, and what to do next requires deliberate teaching.
Do you recommend phrasing feedback in a specific way from teacher to student, student to student, or through the self-reflection process?
The key is to listen to what the student understands from your feedback – there is no one right way to give feedback, but there are powerful ways to check how it is received. Self-reflection is great for some students, but if they do not “know” or misunderstand the feedback, then self-reflection can be a disaster: it reinforces not knowing or the wrong information, or it becomes frustrating.
How can we motivate students for whom learning is not important?
The best way to motivate is to ensure we know the student, know their reactions to challenge, devise success criteria that are “not too hard, not too boring,” push the personal best notions (see Andrew Martin’s work on this), ensure safety nets if they do not know or struggle (especially in front of peers), and reward the progress to success. Find out what challenges turn them on now, and see if this can help engineer working towards what you want them to learn. The influence of peers is powerful for these students too (for good and bad), so work on increasing the collective efficacy of these less-motivated students.
I am curious to know your take on formative vs summative on counting into GPA or not…
Tricky, as in one sense if the formative information works, then the test from which the formative is derived is not very meaningful (as now the students can do the task, and the purpose of many formative interventions is to teach!). I see both as powerful when used at the right time.
How would you give students feedback at their level, and still give them enough feedback to prove proficiency in a topic on a final assessment?
By making sure that proficiency in the final assessment values the ideas and the relation between ideas – is there an opportunity for students to learn from your feedback in the final assessment? You need to be aware of the level of cognitive complexity desired in the final assessment and help the students at this (these) level(s).
Would single-point rubrics be more effective if they were two-point rubrics – one point for content and one for how that information was communicated, discovered, etc.?
We have moved to two success criteria – one about the content and one about the relation between the ideas. I can see multiple rows in a rubric to make the same point. As we learn more, we are starting to see value in being more explicit – if for no other reason than that it tells teachers that both (content and relations) matter, to create tasks where they both matter, and to demonstrate to students there is a reason to learn content (to relate and extend the ideas).
Is there a specific structure that should be followed in giving students feedback to maximize its effect?
Sorry, I need a book length to answer this one – hence the recent book.
Would you class errors and trust as welcomed opportunities as part of a growth mindset? These have a high effect size, whereas I think I saw that growth mindset vs. fixed mindset had a low effect size. Thank you for any comments.
The reason why growth mindset has a lower effect is that it is too often considered a “generic” attribute, whereas the skill is knowing when to be in the growth mindset and when it matters less. When you are in a situation of challenge, error, and misconceptions, then the growth rather than fixed mindset can make the difference – but you need to know to have these mindsets at those moments – generic programs of growth mindset hardly make a difference simply because they are “generic.” James Nottingham’s new book Challenging Mindset goes into greater depth in explaining when a growth mindset makes a difference.
Does it make sense for me to track my own impact on student learning by using effect size? I only have 120 students, so is this statistically appropriate?
Does the prevalence of task-level feedback indicate students need the background knowledge?
Yes, but the question we need to ask is, “When is this knowledge sufficient, so we can move on to relating and extending the knowledge?” We often start and stop with knowledge, and students come to believe (sometimes for the better, but mainly for the worse) that what we value most is knowing lots. Moreover, many above-average students like pursuing knowledge – they know this game, it is safe, and they are not bad at it. Too much task feedback suggests we are NOT moving the students +1 to the next level.
Got questions for John Hattie about Visible Learning? Submit your question here and it might be answered in another blog post!