| Rank | Domain | Influence | Effect Size |
|------|--------|-----------|-------------|
| 3 | Teaching | Providing formative evaluation | 0.90 |
| 7 | Teaching | Comprehensive interventions for learning disabled students | 0.77 |
| 12 | Teaching | Spaced vs. mass practice | 0.71 |
| 16 | Curricula | Repeated reading programs | 0.67 |
| 21 | Teacher | Not labeling students | 0.61 |
| 24 | Teaching | Cooperative vs. individualistic learning | 0.59 |
| 27 | Curricula | Tactile stimulation programs | 0.58 |
| 37 | Teaching | Cooperative vs. competitive learning | 0.54 |
| 38 | Student | Pre-term birth weight | 0.54 |
| 44 | Teaching | Interactive video methods | 0.52 |
| 47 | Curricula | Second/third chance programs | 0.50 |
| 48 | School | Small group learning | 0.49 |
| 56 | Teacher | Quality of teaching | 0.44 |
| 61 | Teaching | Behavioral organizers/adjunct questions | 0.41 |
| 62 | Teaching | Matching style of learning | 0.41 |
| 65 | Curricula | Social skills programs | 0.40 |
| 67 | Curricula | Integrated curriculum programs | 0.39 |
| 70 | Teaching | Time on task | 0.38 |
| 71 | Teaching | Computer assisted instruction | 0.37 |
| 74 | School | Principals/school leaders | 0.36 |
| 75 | Student | Attitude to mathematics/science | 0.36 |
| 76 | Curricula | Exposure to reading | 0.36 |
| 79 | Teaching | Frequent/effects of testing | 0.34 |
| 80 | School | Decreasing disruptive behavior | 0.34 |
| 84 | Student | Positive view of own ethnicity | 0.32 |
| 86 | Teaching | Inquiry based teaching | 0.31 |
| 87 | School | Ability grouping for gifted students | 0.30 |
| 93 | Curricula | Use of calculators | 0.27 |
| 94 | Curricula | Values/moral education programs | 0.24 |
| 96 | Teaching | Special college programs | 0.24 |
| 97 | Teaching | Competitive vs. individualistic learning | 0.24 |
| 102 | Student | Lack of illness | 0.23 |
| 103 | Teaching | Teaching test taking | 0.22 |
| 105 | Teaching | Comprehensive teaching reforms | 0.22 |
| 116 | School | Within class grouping | 0.16 |
| 119 | Curricula | Sentence combining programs | 0.15 |
| 125 | Teacher | Teacher subject matter knowledge | 0.09 |
| 127 | School | Out of school curricula experiences | 0.09 |
| 130 | School | College halls of residence | 0.05 |
| 132 | Teaching | Student control over learning | 0.04 |
| 133 | School | Open vs. traditional | 0.01 |
Note: in his 2017 publication, Learning strategies: a synthesis and conceptual model, Hattie himself argued against ranking (p. 9):
"There is much debate about the optimal strategies of learning, and indeed we identified >400 terms used to describe these strategies. Our initial aim was to rank the various strategies in terms of their effectiveness but this soon was abandoned. There was too much variability in the effectiveness of most strategies depending on when they were used during the learning process."

Hattie confirmed this in his 2018 podcast with Ollie Lovell, saying,

"it worked, then it got misleading, so I stopped it."
Hattie's new way of ranking with Corwin (March 2019):
Other Peer Reviews on Hattie's Rankings:
'Hattie's ranking suggests that the 138 influencing factors are to be seen as essentially alternative options for action, from which the most effective is to be selected, for example during lesson design. However, this would require that the influencing factors are real options for action in the first place, and that their effect sizes were determined relative to the same "baseline", i.e. that the control conditions in the primary studies of all the meta-analyses included in the first step can be regarded as an equivalent benchmark. A look at the ranking shows that this is neither the case nor even possible, since the influencing factors belong to completely different types: one finds not only institutional framework conditions such as school size, but also personal traits such as self-concept, and teaching methods such as those classified under "direct instruction", whose effect sizes usually relate to a comparison with some form of "traditional" instruction...
It is doubtful that teachers would ever usefully choose between direct instruction and distributed practice, rather than deciding on the optimal combination of both. If that is so, the assessment of the relative effectiveness of these options cannot be based in one case on a comparison with an instructional "standard condition" and in the other on a comparison with another design variant of the same form of instruction.'
Prof Adrian Simpson is similarly critical of rankings in his detailed analysis, 'The misdirection of public policy: comparing and combining standardised effect sizes' (p. 451):
'The numerical summaries used to develop the toolkit (or the alternative ‘barometer of influences’: Hattie 2009) are not a measure of educational impact because larger numbers produced from this process are not indicative of larger educational impact.
Instead, areas which rank highly in Marzano (1998), Hattie (2009) and Higgins et al. (2013) are those in which researchers can design more sensitive experiments.
As such, using these ranked meta-meta-analyses to drive educational policy is misguided.'

Schulmeister & Loviscach (2014), in Errors in John Hattie's "Visible Learning", write:
'If one corrects the errors mentioned above, list positions take big leaps up or down. Even more concerning is the absurd precision this ranking conveys. It only shows the averages of effect sizes but not their considerable variation within every group formed by Hattie, and even more so within every individual meta-analysis.'
'To think that didactics can be presented as a clear ranking order of effect sizes is a dangerous illusion. To an extreme degree, the effect of a specific intervention depends on the circumstances. By focusing on the mean effect sizes, ignoring their considerable variation, and condensing the data into a seemingly exact ranking order, Hattie pulls the wool over his audience's eyes.'

Dr. Jim Thornton, Professor of Obstetrics and Gynaecology at Nottingham University, said:
'To a medical researcher, it seems bonkers that Hattie combines all studies of the same intervention into a single effect size. Why should "sitting in rows", for example, have the same effect on primary children as on university students, on maths as on art teaching, on behaviour outcomes as on knowledge outcomes? In medicine it would be like combining trials of steroids to treat rheumatoid arthritis, effective, with trials of steroids to treat pneumonia, harmful, and concluding that steroids have no effect! I keep expecting someone to tell me I've misread Hattie.'

Claes Nilholm (2013), in It's time to critically review John Hattie, also warns of these rankings in the Swedish context. Nilholm uses two detailed examples to draw his conclusion (see problem-based learning and effect size):
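Thornton's steroids analogy can be illustrated with a toy calculation (the numbers below are hypothetical, not drawn from Hattie's data or any real trial): naively pooling an intervention that helps in one context with the same intervention harming in another yields an average near zero, which describes neither context.

```python
# Toy illustration of the pooling problem Thornton describes:
# averaging opposite effects hides two real, context-dependent results.

def mean_effect(effect_sizes):
    """Unweighted mean of effect sizes -- a stand-in for naive pooling."""
    return sum(effect_sizes) / len(effect_sizes)

# Hypothetical effect sizes: d = +0.6 in one context, d = -0.6 in another.
effects = [0.6, -0.6]

pooled = mean_effect(effects)
print(pooled)   # 0.0 -- "no effect", though neither context shows one

# The spread, which the single pooled number discards, tells the real story.
spread = max(effects) - min(effects)
print(spread)   # 1.2
```

The point is not the arithmetic but what the single summary number throws away: a pooled mean of 0.0 and a spread of 1.2 describe two strong, opposite effects, not a null one.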
'... Hattie draws far too far-reaching conclusions... he presents summative measurements in tabular form, and factors are ranked according to their significance. Unfortunately, SKL (the Swedish Association of Local Authorities and Regions) puts great emphasis on these summary measures. I would say that the summaries, if used to guide teachers' work, can give utterly incorrect implications' (p. 3).

McKnight & Whitburn (2018), in Seven reasons to question the hegemony of Visible Learning, draw on Biesta:
'Teachers will pick and choose from the list of “what works” that forms Visible Learning, even as their guts tell them externally mandated, evidence-based practices will not necessarily work for them (Biesta, 2007)' (p. 16).