A significant challenge we came across as School Voice consultants related to student teamwork and collaboration. Educators recognize that having students work together on projects has many benefits. Collaboration is, after all, a so-called twenty-first century skill (“so called” because collaboration was important in every century prior to the 21st… just ask the NASA Apollo teams, Lincoln’s cabinet, the framers of the Constitution, the French Impressionists… you get the idea). In addition, students repeatedly say there is more Fun & Excitement when they work in teams. Nationally, 62% of students say they enjoy working on projects with other students, and focus group data indicates that any lack of enjoyment stems from feeling that grading on a team is not always fair. Generally, engagement goes up, motivation increases, and boredom diminishes when students are paired, grouped, or teamed. With so much upside, why so great a challenge?
Because, as the focus group data suggests, when students collaborate, grading becomes an attempt to fit a square team peg into the round hole of individualized grading schemes. Teams work together, but grades must be recorded individually. A group’s work can be assessed as a group, but at the end of the term each student receives an individual report card. Even back in the 20th century when I was in high school, there were always students who felt they did more than their share of the lifting when pair or team work was involved. The team of three each got an “A” based on one student’s effort (ahem: mine). Hardly fair. Not to mention an inaccurate assessment of what each student was capable of. Teachers I have talked to, an eighth of the way into this century, say the problem persists. I am happy to report I have solved it! Well… maybe. And likely a veteran teacher has already figured this out and I am simply too pleased with my solution to Google it, for fear of finding out I am late to the party.
Students in my class had to work in teams of two, three, or four to create a slide-deck presentation of a chapter from their summer reading assignment. We developed a rubric to assess each presentation. The assignment was worth a total of 50 points. I explained that half of each team member’s grade (25 points) would be the average of their self-assessment, the average peer assessment, and my assessment based on the rubric. Let’s call that their Team Score.
I then explained that they had an additional 24 points (easier to divide than 25) to apportion among the individuals on the team as they saw fit based on individual performance. If there were two of them and they felt they did an equal amount of work, they should assign 12 points each. For a team of three, that would result in 8 points each. If they felt like someone did more work than the others, they could vary the distribution of those 24 points. On a team of four, a student who did most of the work might get 10 points, another 6, and the remaining two each 4. I told them I trusted them to do what was fair and that if anyone had an issue with how points in their group were divided, they were free to come see me. Now here is where it gets tricky.
I calculated a student’s Individual Score as the Team Score multiplied by the ratio of the points that student was assigned to a perfectly even share. For example, if a team of three gave a perfect presentation (25 points) and agreed they shared the work evenly (8/8, 8/8, and 8/8), each individual was entitled to a full share of the team’s 25, so each person would have a final score of 50. If another team of three decided one person did most of the work (16/8) and the other two did only a little (4/8 and 4/8), the student who did the bulk of the work was entitled to all 25 of the team’s points (final score = 50), but the other two students were only entitled to four-eighths (one-half) of the 25, so each would receive a final score of 38 (25 plus 13, rounding up). This probably sounds more complicated than it is, but a few simple formulae in an Excel spreadsheet made the calculations instantaneous.
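For readers who want to reproduce the arithmetic, here is a minimal sketch of that spreadsheet logic, written in Python rather than Excel. The function and variable names are mine; the 24-point pool, the cap at the Team Score, and the rounding up of the individual share are taken from the examples above and the paragraph that follows.

```python
import math

def final_score(team_score, assigned_points, team_size, pool=24):
    """Combine the 25-point Team Score with an apportioned individual share.

    team_score      -- averaged rubric score for the whole team (out of 25)
    assigned_points -- this student's slice of the 24-point pool
    team_size       -- number of students on the team
    """
    even_share = pool / team_size                     # e.g. 8 on a team of three
    ratio = assigned_points / even_share              # 1.0 means an even split
    individual = min(team_score, team_score * ratio)  # cannot exceed the Team Score
    return team_score + math.ceil(individual)         # individual share rounded up

# Worked examples from the post:
print(final_score(25, 8, 3))   # even split of a perfect presentation -> 50
print(final_score(25, 16, 3))  # the heavy lifter, capped at the full 25 -> 50
print(final_score(25, 4, 3))   # a half share: 25 + 13 -> 38
```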
Fairness among the students is essential to make this work: an individual score cannot exceed the team score, so a student who does all the work loses nothing by agreeing to an even apportioning. Part of the lesson here is the exercise of fairness on a team. The safety valve is the option to come see me. I explained that all of this was a way of translating into our grading system what they experience all the time whenever people collaborate. One player may strike out at every at bat and still be on a winning team; his individual stats will reflect that even though his team gets a W. You might go to a concert where the group is playing beautifully until one person plays a wrong note. Everyone hears it and (inwardly) groans, but the band plays on. Someone appears in a movie and wins Best Actor, even though the movie wins no awards. Or vice versa. In these situations a coach or another player may exhort or encourage more practice or a better effort next time, but most of the real world runs on a combination of individual contributions to an overall team effort. QISA works that way, the new faculty I am on works that way, and businesses, sports, the entertainment industry, and politics (when it is working) all operate on cooperation. The challenge in education is that our individualized metrics do not reflect the real world.
As this approach is entirely experimental for me, I welcome any and all feedback.
Mickey / October 11, 2016
Great point, Simon.
Simon Feasey / October 8, 2016
Having team members apportion credit for learning outcomes is extremely challenging for all those involved, Mickey. There need not be an equal apportioning of skills and knowledge amongst a given group, just as there may not be in any given professional learning network (PLN) or professional learning community (PLC). What matters is the outcome and everyone’s embracing and acceptance of it, and then how that is applied and advanced. Recognising contributions is one thing. Valuing all contributions, and accepting that the weighting of contributions is likely to vary given the challenge and context, is just as important.