When we want to know how much and how well our children are learning in school, measures of success are usually based solely on achievement data. Results from standardized tests are examined alongside results from previous years (and from neighboring schools, districts, and states/provinces) in order to track and compare levels of proficiency for different groups of students. Recently, at the Council of Chief State School Officers (CCSSO) Summit, Professor John Hattie encouraged the audience to consider not only proficiency but also progress, pointing out that achievement and growth are two different measures.
When considering the magnitude of success, an effect size is one measure educators can use to identify patterns and trends based on progress. Educators can use effect sizes alongside achievement data to gain a better perspective on their progress toward school improvement. While achievement data tells us how well students are achieving, effect sizes tell us how much progress students are making. As Professor Hattie noted, progress leads to achievement.
Using effect sizes as an additional (and different) measure of success is not only eye-opening; it can also be very empowering for teachers. Teachers can actually know the impact they are having on individual students over the course of a year. They can use this information to inquire into their practice in an effort to improve student achievement, and to work to increase the rate at which students are progressing. An effect size of 0.40 is typical of one year’s growth; the larger the effect size, the greater the gains.
Recently, I was able to take the ideas of Visible Learning Plus presented by John Hattie, Deb Masters (Cognition Education), and Peter DeWitt (Corwin Visible Learning Trainer) at the CCSSO Summit and apply them in practice. Two secondary mathematics teachers with whom I work were interested in knowing more about how effect sizes could be used to improve the progress and achievement of their 9th grade students. The students in their classes were learning to solve problems using proportional reasoning. At the end of the unit, we calculated the class effect size as well as effect sizes for individual students, based on the averages from students’ pre- and post-tests in both classes. The teachers had administered the same test, based on curriculum expectations, at the beginning of February and again in mid-May.
What did we find? The class effect size was 0.52 in one class and 0.27 in the other: overall progress made by students in one class was almost double that of students in the other. Upon comparing the effect sizes of individual students (see example below), the data showed that some students made gains equating to more than a year’s progress, some showed less than a year’s progress, and in a few cases student progress actually decreased.
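As a rough illustration of how such numbers can be produced, here is a minimal Python sketch. It uses one common convention for this kind of effect size, dividing the mean gain by the average of the pre- and post-test standard deviations, and it divides each individual student's gain by that same class-level spread so individual progress is comparable. The scores below are hypothetical; Hattie's book describes the full method.

```python
from statistics import mean, stdev

def effect_size(pre, post):
    """Class effect size: mean gain divided by the average of the
    pre- and post-test standard deviations (one common convention)."""
    spread = (stdev(pre) + stdev(post)) / 2
    return (mean(post) - mean(pre)) / spread

def student_effect_sizes(pre, post):
    """Per-student effect sizes: each student's gain divided by the
    same class-level spread used for the class effect size."""
    spread = (stdev(pre) + stdev(post)) / 2
    return [(b - a) / spread for a, b in zip(pre, post)]

# Hypothetical pre/post test percentages for an eight-student class
pre = [45, 52, 60, 38, 70, 55, 48, 62]
post = [58, 60, 63, 50, 72, 54, 65, 70]

print(round(effect_size(pre, post), 2))
for i, es in enumerate(student_effect_sizes(pre, post), start=1):
    print(f"student {i}: {round(es, 2)}")
```

Note that with data like this the class number can mask wide variation: student 1 above gains well over a year's worth of progress, while student 6 (whose score dropped) comes out negative, just as we saw with real students.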
The example above raised important questions for the mathematics teachers. Why did students 19 and 22 make such high gains while students 20, 26, 27, and 29 did not? Students 28 and 29 had the same average on the pre-test, and both missed 11 classes during the unit. Why did student 28 realize greater gains than student 29? Upon examining the individual effect sizes alongside student absences, the teachers began to reconsider long-standing beliefs about the relationship between achievement and absenteeism.
As Hattie (2012) pointed out, “Using effect sizes invites teachers to think about using assessment to help to estimate progress, and to reframe instruction to better tailor learning for individual, or groups of, students. It asks teachers to consider reasons why some students have progressed and others not – as a consequence of their teaching.” The data presented above helped to initiate deep conversations about teaching and learning between the two math teachers and their colleagues. As Hattie noted at the CCSSO Summit, “Every child deserves at least a year’s growth for a year’s input.” With effect sizes as a measure of success, the math teachers are rethinking how they can adjust their instruction as they endeavor to improve outcomes for all of their students.
You don’t have to be a mathematician to learn how to calculate effect sizes. Hattie provides an explanation along with examples in Visible Learning for Teachers: Maximizing Impact on Learning.
As a Visible Learning Trainer, it was my privilege to attend the CCSSO Summit on behalf of Corwin. Working with John Hattie and Deb Masters (Cognition Education), along with Peter DeWitt and Raymond Smith (also Visible Learning Trainers), was an extremely valuable learning experience.
Note: Professor John Hattie continues to add to his synthesis of meta-analyses. Visible Learning for Teachers: Maximizing Impact on Learning draws on more than 900 meta-analyses examining the factors that impact student achievement. Hattie’s research, the largest of its kind, uses effect sizes to compare contributions from the student, home, school, teacher, curricula, and teaching approaches. What his research demonstrates is that almost everything has some impact on student achievement. Given that so many factors have an impact, the question “What works best?” is an important one for educators to consider. Fortunately, Hattie’s synthesis is a comprehensive reference that can point us toward the answer.
Professor Hattie will share more about his research and the use of effect sizes at the Visible Learning Institute this summer. For more information on the International Visible Learning Institute, please click here.
Hattie, J. (2012). Visible Learning for Teachers: Maximizing Impact on Learning. New York, NY: Routledge.