Monday 18 January 2016

Summary


This is a critical perspective on John Hattie's Visible Learning (VL), emphasizing the need for thorough critique and public accountability in educational research. The aim is to raise awareness of detailed peer reviews that highlight major errors in Hattie's work. 

The critiques encompass issues such as the categorization of studies into "influences," inclusion of irrelevant studies, methodological flaws, calculation errors, conflicts of interest, and Hattie's removal of studies from his analyses. Numerous peer reviews from various sources challenge the validity and reliability of Hattie's claims, pointing out misleading practices, statistical errors, and concerns about the overall quality of the research. 

The blog also references educators and researchers who have questioned Hattie's methodology, defended their own critical perspectives, and raised awareness about potential consequences for students and educators. 

The page concludes by highlighting Hattie's financial conflict of interest and calling for its consideration. 

Overall, the blog seeks to foster a critical examination of Hattie's work and encourages a more rigorous and transparent approach to educational research.

Peer Reviews of Hattie's Visible Learning (VL)

"Our discipline needs to be saturated with critique of ideas; and it should be welcomed. Every paradigm or set of conjectures should be tested to destruction and its authors, adherents, and users of the ideas should face public accountability." (Hattie, 2017, p. 428).
The peer reviews are saturated with detailed critiques of Hattie's work but most educators do not seem to be aware of them.

My Aim

is to raise awareness of these critiques and to investigate Hattie's claims in the spirit of Tom Bennett, the founder of researchEd:
"There exists a good deal of poor, misleading or simply deceptive research in the ecosystem of school debate...
Where research contradicts the prevailing experiential wisdom of the practitioner, that needs to be accounted for, to the detriment of neither but for the ultimate benefit of the student or educator." (Bennett, 2016, p. 9).
The pages (right) reference over 50 peer reviews which detail a litany of major errors in VL.

The Peer Review

Perhaps the simplest and most profound critique of Hattie's work concerns his categorization of studies into what he calls "influences". The peer reviews show that Hattie includes studies that are not relevant to the category in question; e.g., leading class size researcher Peter Blatchford notes,
"it is odd that so much weight is attached to studies that don't directly address the topic on which the conclusions are made" (Blatchford, 2016b, p. 13).
Hattie responded to earlier critiques on this point:
'...claims that the studies were not appraised for their validity are misleading and incorrect. One of the very powers of meta-analysis is to deal with this issue. Readers and policy makers can have assurance that the conclusions I made are based on "studies, the merits of which have been investigated"'. (Hattie, 2010, p. 88)
Yet recently, in Wisniewski, Zierer & Hattie (2020), "The Power of Feedback Revisited", Hattie removed most of the original 23 studies he used for Feedback. He seems to have done the same for many of his other influences, e.g., Teacher Training/Education, where he has recently removed ALL of the original studies he cited.

This is extremely disappointing, as Hattie has made significant claims based on these earlier cited studies, e.g., in the case of Teacher Training/Education,
"Teacher Education is the most bankrupt institution I know." (Hattie, 2011, Melbourne Graduate School address @ 22 mins)
Related is Hattie's claim that he faithfully represents the research. I provide a detailed example, using Class Size, to show that in many cases he does not.

There are many other significant issues with Hattie's work, ranging from flawed methodology and calculation errors to conflicts of interest, e.g.,

Snook, Clark, Harker, O’Neill & O’Neill (2009) - "Hattie says that he is not concerned with the quality of the research in the 800 studies but, of course, quality is everything. Any meta-analysis that does not exclude poor or inadequate studies is misleading, and potentially damaging if it leads to ill-advised policy developments."
Terhart (2011) - is suspicious of Hattie's economic interests.
Topphol (2011) - "...the mistake is pervasive, systematic and so clear that it should be easy to reveal in the publishing process. It has not been... this suggests a failure in quality assurance. Is this symptomatic of what is coming from this author...? I hope not, but I can't be sure."
Berk (2011) - "Statistical malpractice disguised as statistical razzle-dazzle."
Higgins & Simpson (2011) - "the process by which this number (effect size) has been derived has rendered it effectively meaningless." Also, Hattie has mixed up the X and Y axes on his funnel plot graph (the conventional orientation is sketched below).
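For context on the axis point: a funnel plot conventionally places the effect size on the horizontal axis and a measure of precision (usually the standard error, with that axis inverted so the most precise studies sit at the top) on the vertical axis. A minimal sketch of that conventional orientation, using invented effect sizes and standard errors purely for illustration:

```python
# Minimal sketch of a conventionally oriented funnel plot, using invented
# effect sizes and standard errors purely for illustration.
import matplotlib.pyplot as plt

effect_sizes = [0.12, 0.25, 0.31, 0.40, 0.44, 0.52, 0.63, 0.70]     # hypothetical study effect sizes
standard_errors = [0.05, 0.18, 0.09, 0.22, 0.12, 0.30, 0.15, 0.28]  # hypothetical standard errors

plt.scatter(effect_sizes, standard_errors)
plt.xlabel("Effect size (d)")   # effect size on the horizontal axis
plt.ylabel("Standard error")    # precision on the vertical axis...
plt.gca().invert_yaxis()        # ...inverted, so the most precise studies sit at the top
plt.title("Conventional funnel plot orientation (illustrative data)")
plt.show()
```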

O'Neill (2012) - Hattie is a "policy entrepreneur": he positions himself politically to champion, shape and benefit from school reform discourses.
Lind (2013) - "Hattie's synthesis is shortsighted and its conclusions problematic."

Schulmeister & Loviscach (2014) - "Hattie pulls the wool over his audience’s eyes." & "Hattie’s method to compute the standard error of the averaged effect size as the mean of the individual standard errors ‒ if these are known at all ‒ is statistical nonsense." (A worked illustration of this point follows this group of reviews.)
Poulsen (2014) - "Do I believe in Hattie's results? No!"
Wrigley (2015) - "Bullying by Numbers."
O'Neill, Duffy & Fernando (2016) - Detail the huge undisclosed 3rd party payments to Hattie.
Wecker et al. (2016) - "A large proportion of the findings are subject to reasonable doubt."
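To see why averaging standard errors draws this criticism, compare it with the conventional fixed-effect (inverse-variance) pooling used in standard meta-analysis: the pooled standard error shrinks as evidence accumulates, while the mean of the individual standard errors does not. This is a minimal sketch with invented numbers, not a reproduction of any calculation in VL:

```python
# Minimal sketch (invented numbers) contrasting the criticised shortcut --
# taking the plain mean of the individual standard errors -- with the
# conventional fixed-effect (inverse-variance) pooled standard error.
import math

standard_errors = [0.10, 0.20, 0.40]  # hypothetical SEs of three study effect sizes

# Shortcut criticised above: the mean of the SEs.
naive_se = sum(standard_errors) / len(standard_errors)

# Conventional fixed-effect pooling: weight each study by 1/SE^2,
# then SE_pooled = sqrt(1 / sum(weights)).
weights = [1 / se ** 2 for se in standard_errors]
pooled_se = math.sqrt(1 / sum(weights))

print(f"mean of individual SEs:  {naive_se:.3f}")   # ~0.233
print(f"inverse-variance pooled: {pooled_se:.3f}")  # ~0.087 (shrinks as studies accumulate)
```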

Bergeron & Rivard (2017) - "To believe Hattie is to have a blind spot in one’s critical thinking when assessing scientific rigour. To promote his work is to unfortunately fall into the promotion of pseudoscience. Finally, to persist in defending Hattie after becoming aware of the serious critique of his methodology constitutes willful blindness."
Nilholm (2017) - "Hattie's analyzes need to be redone from the ground up."
Nielsen & Klitmøller (2017) - "Neither consistent nor systematic."
Shanahan (2017) - "potentially misleading."
See (2017) - "Lives may be damaged and opportunities lost."
Biesta (2017) - "more akin to pig farming than science."
Proulx (2017) - Hattie's collection of feedback studies are not consistent with Hattie's definition of feedback.
Proulx (2017) - Hattie claims that teachers seeing learning through the eyes of the student is at the heart of the concept of Visible Learning. But that statement, found at the beginning of the book and one few would oppose, has no support in his research data. In short, no meta-analysis focuses on this dimension.

Davis (2018) - "...if the method is still researched but as flexibly interpretable, then teachers can take little from any effect size ‘proved’ by people such as Hattie."
Eacott (2018) - "A cult...a tragedy for Australian School Leadership."
Slavin (2018) - "Hattie is wrong."
McKnight & Whitburn (2018) - "The Visible Learning cult is not about teachers and students, but the Visible Learning brand."
Ashman (2018b) - "If true randomised controlled trials can generate misleading effect sizes like this, then what monsters wait under the bed of the meta-meta-analysis conducted by Hattie and the EEF?"
Janson (2018) - "little value can be attached to his findings."

Larsen (2019) - "Blindness."
Wiliam (2019) - "Has absolutely no role in educational policy making." 
Wiliam (2019b) - "Meta-meta-analyses, the kinds of things that Hattie & Marzano have done, I think have ZERO educational value!"
Simpson (2011, 2017, 2018, 2019) - "using these ranked meta-meta-analyses to drive educational policy is misguided."
Bakker et al. (2019) - "his lists of effect sizes ignore these points and are therefore misleading."
Zhao (2019) - "Hattie is the king of the misuse of effect sizes."

Slavin (2020) - "the value of a category of educational programs cannot be determined by its average effects on achievement. Rather, the value of the category should depend on the effectiveness of its best, replicated, and replicable examples."
Gorard et al. (2020) - "School decision‐makers around the world have been increasingly influenced by hyper‐analyses of prior evidence which synthesise the results of many meta‐analyses-such as those by Hattie (2008), described on its cover as revealing 'teaching’s Holy Grail', and similar attempts around the world. These are even more problematic because again they are combining very different kinds of studies, taking no account of their quality, or of the quality of the studies making up each meta‐analysis. Commentators are now realising and warning of their dangers"
Kraft (2020) - "Effect sizes that are equal in magnitude are rarely equal in importance."
Larsen & Hattie (2020) - "what I think is really misleading, and in the worst case wrong, science, if you reduce a complex phenomenon to a simplistic explanation and a colorful and seductive image."
Wiliam (2020) - "There is no reason to trust any of the numbers in Visible Learning."
Wolf et al. (2020) - Effect sizes from evaluations conducted by a program's developers are 80% larger than those from independent evaluators (0.31 vs 0.14), with ~66% of the difference attributable to publication bias.

Slavin (2020b) - "the overall mean impacts reported by meta-analyses in education depend on how stringent the inclusion standards were, not how effective the interventions truly were."
O'Connor (2020) - Investigates Whole Language and shows Hattie's bias in regard to the studies he includes and excludes. "Hopefully further scrutiny of Hattie’s work will lead to a renewed recognition of the importance of a wide research base in literacy and other fields of education, including in-depth ethnographic, qualitative and interpretive studies."

Simpson (2021) - "despite Cohen’s nomenclature, 'effect size' does not measure the size of an effect as needed for policy... Choice of sample, comparison treatment and measure can impact ES; at the extreme, educationally trivial interventions can have infinite ES..." (A brief illustration follows this group of reviews.)
Wiliam (2021) - "we can discuss why those numbers in John Hattie’s Visible learning are just nonsense".
Nielsen & Klitmøller (2021) - "by analyzing parts of the primary research and the meta-analysis upon which Hattie grounds his conclusions, we find both serious methodological challenges and validity problems."
Sundar & Agarwal (2021) - "there are several statistical concerns with his calculation methods. We urge teachers to recognize that Hattie’s scores can not be equated to what a majority of the research community calculates and interprets as effect sizes."
Kraft (2021) - "It is much easier to produce large improvements in teachers' self-efficacy than in the achievement of their students. In my view, this renders universal effect size benchmarks impractical."
Armstrong & Armstrong (2021). "...these claims often do not stand up to closer scrutiny and are intellectually oversimplified or grossly politicised accounts of ‘what works’. When used in this way, EBP itself becomes ethically compromised..."
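A brief illustration of Simpson's point above: the same raw difference in means produces very different values of Cohen's d depending on how spread out the sample or the outcome measure is, so d alone says little about educational importance. The numbers below are invented purely for illustration:

```python
# Minimal sketch (invented numbers): the same 5-point gain gives very
# different Cohen's d values depending on the spread of the sample/measure.
def cohens_d(mean_treat, mean_control, pooled_sd):
    """Standardised mean difference: (M_t - M_c) / SD_pooled."""
    return (mean_treat - mean_control) / pooled_sd

print(cohens_d(55.0, 50.0, pooled_sd=15.0))  # broad sample, SD 15 -> d ~= 0.33
print(cohens_d(55.0, 50.0, pooled_sd=5.0))   # restricted-range sample, SD 5 -> d = 1.0
```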

Ashman (2022). "I no longer accept the validity of Hattie’s methods."
OECD (2022). 'Research on “What works in what works” has become a vibrant field of study in recent years but it has not, as yet, yielded enough robust evidence. The systematic investigation and evaluation of existing efforts to reinforce research impact are critical to improving such efforts. Yet, such evaluations to date have been scarce.'

Johnson & Janzen (2023). "Visible Learning is a dubious mishmash of research of unknown quality, statistical juggling, and the author’s self-assured opinion."

Thomas Aastrup Rømer (2018) received the Nordic Educational Research Association's prestigious Ahlström Award (2019) for his "Criticism of John Hattie's theory of Visible Learning". The Association states,
"...the paper makes a precise and subtle critique of Hattie‘s work, hence revealing several weaknesses in the methods and theoretical frameworks used by Hattie. Rømer and his critical contribution inform us that we should never take educational theories for granted; rather, educational theories should always be made subject to further research and debate."
Hattie's Claims in VL

It is important to understand Hattie's claims in Visible Learning - details here.

Hattie's Alternate Narrative - "The Story"

Hattie often switches to a different narrative: "what's the story, not what's the numbers".

This Blog

The pages on the right detail the studies Hattie included in each influence, e.g., class size, as well as the technical details of his methods, e.g., his effect size (ES) calculations.

A comparison of his claims with other reputable evidence organizations - here. An interesting question is, why are the claims from different organizations so different and often contradictory?

Hattie's financial conflict of interest is significant and needs to be addressed - details here.

A summary of Hattie's defenses is here.