Summary
This is a critical perspective on John Hattie's Visible Learning (VL), emphasizing the need for thorough critique and public accountability in educational research. The aim is to raise awareness of detailed peer reviews that highlight major errors in Hattie's work.
The critiques encompass issues such as the categorization of studies into "influences," inclusion of irrelevant studies, methodological flaws, calculation errors, conflicts of interest, and Hattie's removal of studies from his analyses. Numerous peer reviews from various sources challenge the validity and reliability of Hattie's claims, pointing out misleading practices, statistical errors, and concerns about the overall quality of the research.
The blog also references educators and researchers who have questioned Hattie's methodology, defended their own critical perspectives, and raised awareness about potential consequences for students and educators.
The page concludes by highlighting Hattie's financial conflict of interest and calling for it to be taken into account.
Overall, the blog seeks to foster a critical examination of Hattie's work and encourages a more rigorous and transparent approach to educational research.
Peer Reviews of Hattie's Visible Learning (VL).
"Our discipline needs to be saturated with critique of ideas; and it should be welcomed. Every paradigm or set of conjectures should be tested to destruction and its authors, adherents, and users of the ideas should face public accountability." (Hattie, 2017, p. 428).
The peer reviews are saturated with detailed critiques of Hattie's work, but most educators do not seem to be aware of them.
My Aim
My aim is to raise awareness of these critiques and to investigate Hattie's claims in the spirit of Tom Bennett, the founder of researchED:
"There exists a good deal of poor, misleading or simply deceptive research in the ecosystem of school debate... Where research contradicts the prevailing experiential wisdom of the practitioner, that needs to be accounted for, to the detriment of neither but for the ultimate benefit of the student or educator." (Bennett, 2016, p. 9).
Gorard et al. (2020) - "School decision‐makers around the world have been increasingly influenced by hyper‐analyses of prior evidence which synthesise the results of many meta‐analyses - such as those by Hattie (2008), described on its cover as revealing 'teaching's Holy Grail', and similar attempts around the world. These are even more problematic because again they are combining very different kinds of studies, taking no account of their quality, or of the quality of the studies making up each meta‐analysis. Commentators are now realising and warning of their dangers."
Simpson (2021) - "despite Cohen's nomenclature, 'effect size' does not measure the size of an effect as needed for policy... Choice of sample, comparison treatment and measure can impact ES; at the extreme, educationally trivial interventions can have infinite ES..."
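Simpson's warning about effect sizes can be made concrete with a toy calculation. The numbers below are invented for illustration (they are not from Hattie or Simpson): the same raw gain on the same test produces very different "effect sizes" depending only on the spread of the sample studied.

```python
# Toy illustration (hypothetical numbers): Cohen's d divides the raw mean
# difference by the standard deviation, so a narrower, more homogeneous
# sample inflates the "effect size" without the intervention changing at all.

def cohens_d(mean_diff: float, pooled_sd: float) -> float:
    """Standardised mean difference (Cohen's d)."""
    return mean_diff / pooled_sd

raw_gain = 5.0                       # same 5-point gain on the same test
d_broad = cohens_d(raw_gain, 15.0)   # broad, heterogeneous sample (SD = 15)
d_narrow = cohens_d(raw_gain, 5.0)   # restricted, homogeneous sample (SD = 5)

print(round(d_broad, 2))   # 0.33 - looks "small"
print(round(d_narrow, 2))  # 1.0  - looks "huge", same raw gain
```

Ranking such numbers across studies with different samples, comparison treatments, and measures, as a meta-meta-analysis must, is exactly what these critics caution against.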
The pages (right) reference over 50 peer reviews which detail a litany of major errors in VL.
The Peer Review
Perhaps the simplest and most profound critique of Hattie's work is his categorization of studies into what he calls "influences". Peer reviewers have shown that Hattie includes studies that are not relevant to the category in question; e.g., lead class size researcher Peter Blatchford notes,
"it is odd that so much weight is attached to studies that don't directly address the topic on which the conclusions are made" (Blatchford, 2016b, p. 13).
Hattie responded to the earlier critiques on this,
'...claims that the studies were not appraised for their validity are misleading and incorrect. One of the very powers of meta-analysis is to deal with this issue. Readers and policy makers can have assurance that the conclusions I made are based on "studies, the merits of which have been investigated"'. (Hattie, 2010, p. 88)
Yet recently, in Wisniewski, Zierer & Hattie (2020), "The Power of Feedback Revisited", Hattie removed most of the original 23 studies he used for Feedback. He seems to have done the same for many of his other influences, e.g., Teacher Training/Education, where he has recently removed ALL of the original studies he cited.
This is extremely disappointing as Hattie has made significant claims regarding these earlier cited studies, e.g., in the case of Teacher Training/Education,
"Teacher Education is the most bankrupt institution I know." (Hattie, 2011, Melbourne Graduate School address @ 22 mins)
Related is Hattie's claim that he faithfully represents the research. I provide a detailed example using Class Size to show that in many cases he does not.
There are many other significant issues with Hattie's work ranging from flawed methodology, calculation errors, and conflicts of interest, e.g.,
Snook, Clark, Harker, O’Neill & O’Neill (2009) - "Hattie says that he is not concerned with the quality of the research in the 800 studies but, of course, quality is everything. Any meta-analysis that does not exclude poor or inadequate studies is misleading, and potentially damaging if it leads to ill-advised policy developments."
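The Snook et al. point can be sketched with a toy calculation (all effect sizes below are invented for illustration, not taken from Visible Learning): when studies are averaged without any quality filter, a handful of weak, inflated studies can dominate the mean.

```python
# Hypothetical effect sizes, invented purely for illustration.
rigorous_rcts = [0.10, 0.15, 0.12]       # well-controlled studies
weak_studies = [0.60, 0.75, 0.90, 0.55]  # poor or uncontrolled studies

def mean(xs):
    """Simple unweighted average, as used when quality is ignored."""
    return sum(xs) / len(xs)

print(round(mean(rigorous_rcts), 2))                 # 0.12
print(round(mean(rigorous_rcts + weak_studies), 2))  # 0.45 - weak studies dominate
```

The unfiltered average tells a very different story from the rigorous studies alone, which is why critics argue that inclusion standards, not intervention effectiveness, can drive the headline number.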
Terhart (2011) - is suspicious of Hattie's economic interests.
Topphol (2011) - "...the mistake is pervasive, systematic and so clear that it should be easy to reveal in the publishing process. It has not been... this suggests a failure in quality assurance. Is this symptomatic of what is coming from this author..? I hope not, but I can't be sure."
Berk (2011) - "Statistical malpractice disguised as statistical razzle-dazzle."
Higgins & Simpson (2011) - "the process by which this number (effect size) has been derived has rendered it effectively meaningless." They also show that Hattie mixed up the X/Y axes on his funnel plot.
O'Neill (2012) - Hattie is a Policy Entrepreneur, he positions himself politically to champion, shape and benefit from school reform discourses.
Schulmeister & Loviscach (2014) - "Hattie pulls the wool over his audience’s eyes." & "Hattie’s method to compute the standard error of the averaged effect size as the mean of the individual standard errors ‒ if these are known at all ‒ is statistical nonsense."
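The specific statistical error Schulmeister & Loviscach describe can be sketched with hypothetical numbers: the standard error of a properly pooled estimate shrinks as independent estimates are combined, whereas simply averaging the individual standard errors does not.

```python
import math

# Four hypothetical, independent effect-size estimates, each with SE = 0.10.
ses = [0.10, 0.10, 0.10, 0.10]

# The criticised approach: the mean of the SEs never shrinks,
# no matter how many studies are combined.
naive_se = sum(ses) / len(ses)

# Standard fixed-effect pooling: inverse-variance weighting gives
# SE = sqrt(1 / sum(1 / se^2)) for the combined estimate.
pooled_se = math.sqrt(1.0 / sum(1.0 / se**2 for se in ses))

print(round(naive_se, 2))   # 0.1
print(round(pooled_se, 2))  # 0.05 - half the naive value for 4 studies
```

With four equally precise studies, the correct pooled standard error is half the naive "average of SEs", so the naive method badly overstates the uncertainty of the combined estimate (and, more importantly, is simply not the right formula).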
Lind (2013) - "Hattie's synthesis is shortsighted and its conclusions problematic."
Poulsen (2014) - "Do I believe in Hattie's results? No!"
Wrigley (2015) - "Bullying by Numbers."
O'Neill, Duffy & Fernando (2016) - Detail the huge undisclosed 3rd party payments to Hattie.
Wecker et al. (2016) - "A large proportion of the findings are subject to reasonable doubt."
Bergeron & Rivard (2017) - "To believe Hattie is to have a blind spot in one’s critical thinking when assessing scientific rigour. To promote his work is to unfortunately fall into the promotion of pseudoscience. Finally, to persist in defending Hattie after becoming aware of the serious critique of his methodology constitutes willful blindness."
Nilholm (2017) - "Hattie's analyses need to be redone from the ground up."
Nielsen & Klitmøller (2017) - "Neither consistent nor systematic."
Shanahan (2017) - "potentially misleading."
See (2017) - "Lives may be damaged and opportunities lost."
Biesta (2017) - "more akin to pig farming than science."
Proulx (2017) - Hattie's collection of feedback studies are not consistent with Hattie's definition of feedback.
Proulx (2017) - Hattie claims that teachers seeing learning through the eyes of the student is at the heart of the concept of Visible Learning. But that statement, found at the beginning of the book and which few could oppose, has no support in his research data. In short, no meta-analysis focuses on this dimension.
Davis (2018) - "...if the method is still researched but as flexibly interpretable, then teachers can take little from any effect size ‘proved’ by people such as Hattie."
Eacott (2018) - "A cult...a tragedy for Australian School Leadership."
Slavin (2018) - "Hattie is wrong."
McKnight & Whitburn (2018) - "The Visible Learning cult is not about teachers and students, but the Visible Learning brand."
Ashman (2018b) - "If true randomised controlled trials can generate misleading effect sizes like this, then what monsters wait under the bed of the meta-meta-analysis conducted by Hattie and the EEF?"
Janson (2018) - "little value can be attached to his findings."
Larsen (2019) - "Blindness."
Wiliam (2019) - "Has absolutely no role in educational policy making."
Wiliam (2019b) - "Meta-meta-analyses, the kinds of things that Hattie & Marzano have done, I think have ZERO educational value!"
Simpson (2011, 2017, 2018, 2019) - "using these ranked meta-meta-analyses to drive educational policy is misguided."
Bakker et al. (2019) - "his lists of effect sizes ignore these points and are therefore misleading."
Zhao, Yong (2019) - "Hattie is the king of the misuse of effect sizes."
Slavin, Robert (2020) - "the value of a category of educational programs cannot be determined by its average effects on achievement. Rather, the value of the category should depend on the effectiveness of its best, replicated, and replicable examples."
Kraft (2020) - "Effect sizes that are equal in magnitude are rarely equal in importance."
Larsen & Hattie (2020) - "what I think is really misleading, and in the worst case wrong, science, if you reduce a complex phenomenon to a simplistic explanation and a colorful and seductive image."
Wiliam (2020) - "There is no reason to trust any of the numbers in Visible Learning."
Wolf et al. (2020) - Effect sizes from evaluations conducted by a program's developers are substantially larger than those from independent evaluators (0.31 vs. 0.14), with ~66% of the difference attributable to publication bias.
Slavin (2020b) - "the overall mean impacts reported by meta-analyses in education depend on how stringent the inclusion standards were, not how effective the interventions truly were."
O'Connor (2020) - Investigates Whole Language and shows Hattie's bias in regard to the studies he includes and excludes. "Hopefully further scrutiny of Hattie’s work will lead to a renewed recognition of the importance of a wide research base in literacy and other fields of education, including in-depth ethnographic, qualitative and interpretive studies."
Wiliam (2021) - "we can discuss why those numbers in John Hattie’s Visible learning are just nonsense".
Nielsen & Klitmøller (2021) - "by analyzing parts of the primary research and the meta-analysis upon which Hattie grounds his conclusions, we find both serious methodological challenges and validity problems."
Sundar & Agarwal (2021) - "there are several statistical concerns with his calculation methods. We urge teachers to recognize that Hattie’s scores can not be equated to what a majority of the research community calculates and interprets as effect sizes."
Kraft (2021) - "It is much easier to produce large improvements in teachers' self-efficacy than in the achievement of their students. In my view, this renders universal effect size benchmarks impractical."
Armstrong & Armstrong (2021) - "...these claims often do not stand up to closer scrutiny and are intellectually oversimplified or grossly politicised accounts of 'what works'. When used in this way, EBP itself becomes ethically compromised..."
Ashman (2022) - "I no longer accept the validity of Hattie's methods."
OECD (2022) - 'Research on "What works in what works" has become a vibrant field of study in recent years but it has not, as yet, yielded enough robust evidence. The systematic investigation and evaluation of existing efforts to reinforce research impact are critical to improving such efforts. Yet, such evaluations to date have been scarce.'
Johnson & Janzen (2023) - "Visible Learning is a dubious mishmash of research of unknown quality, statistical juggling, and the author's self-assured opinion."
Thomas Aastrup Rømer (2018) received the Nordic Educational Research Association's prestigious Ahlström Award (2019) for "Criticism of John Hattie's theory of Visible Learning". The Association states,
"...the paper makes a precise and subtle critique of Hattie's work, hence revealing several weaknesses in the methods and theoretical frameworks used by Hattie. Rømer and his critical contribution inform us that we should never take educational theories for granted; rather, educational theories should always be made subject to further research and debate."
Hattie's Claims in VL
It is important to understand Hattie's claims in Visible Learning - details here.
Hattie's Alternate Narrative - "The Story"
Hattie often switches to a different narrative, "what's the story, not what's the numbers".
This Blog
The pages on the right detail the studies Hattie included in each influence, e.g., class size, as well as the technical details of his methods, e.g., ES calculation.
A comparison of his claims with other reputable evidence organizations - here. An interesting question is, why are the claims from different organizations so different and often contradictory?
Hattie's financial conflict of interest is significant and needs to be addressed here.
A summary of Hattie's defenses is here.
I suggest you read the book 'Make It Stick' which gives a number of research-based strategies for helping students remember what they have previously learned.
Thanks for the tip, I will try to find the book.
Wow, excellent analysis.
Thanks, this is really informative. I'm amazed at all the mistakes Hattie makes.
I was more than happy to discover this website. I want to thank you for your time for this particularly wonderful read! I definitely appreciated every little bit of it and I also have you bookmarked to see new information on your web site.
Thank you for your kind words. I hope more teachers take the time to read as you have done.
Great beat! I would like to apprentice at the same time as you amend your site; how could I subscribe for a weblog site? The account helped me an applicable deal. I was a little bit familiar of this; your broadcast provided a shiny clear concept.
Thank you, you can have your own blog through Google's 'Blogger' - https://www.blogger.com - or through Wordpress - https://wordpress.com/create-blog/. If you want to amend my blog, email me - george.lilley15@gmail.com - with what you think should change and I will consider it.
Thank you for your efforts, George.
Nice.
Excellent review. I will be sharing your analysis with my colleagues as Hattie's work is gospel in some schools. I fear it has become yet another educational 'fad'.
Loving the info on this site, you have done an outstanding job on the content.
Is anyone aware of any efforts to replicate any of Hattie’s findings?
Hattie has not conducted this research himself. He has collected other studies called meta-analyses, which are themselves collections of multiple individual studies, so Hattie's method is called meta-meta-analysis. Replication occurs at the individual-study level, not at the meta-analysis or meta-meta-analysis level. There is a sort of replication by the Education Endowment Foundation, who use this meta-meta-analysis method. They get similar effect sizes for feedback and meta-cognition, but over 250 of Hattie's influences do not make their list of influences. Marzano works mainly at the meta-analysis level. The largest evidence organisation, the What Works Clearinghouse, focuses on research at the individual-study level (not the meta-analysis) and prioritises replication, but they get totally different results to Hattie. See my page on OTHER RESEARCHERS in the menu on the right.
Brilliant analysis, thank you.
ReplyDeleteGeorge Lilley, your blog matches my own research on John Hattie for the purposes of checking him out prior to a visit to Shepparton to promote the Greater Shepparton Secondary College mega school which is an absolute disaster. You probably have accessed many of these links:
ReplyDeletehttps://networkonnet.wordpress.com/2018/04/15/hatties-research-is-false-vice-chancellors-asked-to-investigate-part-1/
https://eductechalogy.org/2018/07/07/hatties-effect-size-a-pseudoscience-or-critics-just-being-critics/
https://robertslavinsblog.wordpress.com/2018/06/21/john-hattie-is-wrong/
https://en.wikipedia.org/wiki/Robert_Slavin
Some of the article links you asked for:
https://www.wsws.org/en/articles/2021/04/26/inte-a26.html?fbclid=IwAR2fI8Vpep_nulnVhfHKtQoZpz_mx0ElTiIwNbnZNIkHjUp0QuJtERgwTWE
https://www.wsws.org/en/articles/2021/05/29/shep-m29.html?fbclid=IwAR2LdL-0gdrubGradXoKMSRfuqkHFJXmjuDLCXZtQhy0qsjzKvn2Op7TDiM
https://www.wsws.org/en/articles/2021/03/06/shep-m06.html?fbclid=IwAR0XEUVE222M-yGC_YR_vbsfSe0jL0uMma1PI8uMwsT8d6pXLpH-mepKOjw
https://www.wsws.org/en/articles/2021/03/06/inte-m06.html?fbclid=IwAR0lSHpZGfqDA-tCGp90wvPx8EgltjXLUQV1v2w0fNqHFb5iT9AWZ8sAX3w
https://www.wsws.org/en/articles/2021/05/17/work-m17.html?fbclid=IwAR2dYje8SIszwIr9gxBPMxHFv1DXrjYosDHne6BdMWik_zcYaCgV1hwdxpo
https://www.wendylovell.com.au/media-releases/lovell-andrews-labor-government-hiding-super-school-risks-from-parents-and-students/?fbclid=IwAR0fziOiRhY3-6Z5JKzTLzPqWx6WmXEWo-5l8tjcGEy1tQuYWqSL1O98pu0
I don't have links to the Herald Sun articles but this is an overview:
https://www.facebook.com/groups/686594418467388/search/?q=Herald%20Sun&sde=Abq6dw1mInGg0iJgUwaNKXZLXknY30vgyM1WmZZTGAl-hIBU50AQrpPq04qqIvt4YDFpYo7ar_0FoXHci3tFTEX86CijSHhShrG5ndBsrO6fTQo9VfDE3xJvfJWVhM_HDG0PTMuU7uqrGOBoreQqnZqw3oz_rQQ_HPF2vgu-74zsG4xoOSR8PaSFFrqOjLPfSXA
Keep up the good work, our children deserve better.
Colleen Sara.
Thanks Colleen, that's a great phrase to promote: "our children deserve better".
One more I think I missed:
https://www.wsws.org/en/articles/2021/04/26/inte-a26.html?fbclid=IwAR0lOCZQ0tBURi0AbEKZCdhOu5o4siEZtCdy_1yk_3IS4qNavJo5XNTZlTc
There needs to be a royal commission into education. This bloke (Hattie) holds such sway over education in Qld. One can only conclude that it is a result of incompetence or corruption.
The same is true in Victoria. Teachers are not trained in these methods, so there is very little scrutiny and analysis. As a result, education is susceptible to pseudoscience and snake oil. Australian education authorities do not do enough to scrutinise Hattie's methods, nor to publish or promote the large number of academics who do.
Thank you so much for this clear analysis.
John Hattie stands for statistical spaghetti.
Yes, can't argue with that.