Hattie's Defenses

In Hattie's three published defenses (2010, 2015 & 2017), he never addressed the following:

Specific examples of misrepresentation or the use of studies not measuring the influence in question.

Use of studies on non-school populations, e.g., doctors, tradesmen, military personnel, university students, etc.

Use of studies on specific student populations, such as those with learning disabilities, or on specific learning areas, to draw general conclusions.

Many calculation errors (apart from the CLE).

Use of studies not measuring achievement but something else, e.g., behavior, engagement, IQ, etc.

The equal weighting of meta-analyses, whether they involve 4 or 4,000 studies.

Major issues of range restriction and control groups, which have been shown to significantly change effect size calculations.

The problem of the age of the students and the time over which studies ran.

And many, many more...
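To see why range restriction matters, consider a minimal sketch (the numbers below are hypothetical, not drawn from Hattie's data): Cohen's d divides a raw gain by a standard deviation, so the same raw improvement produces a much larger effect size when it is computed on a range-restricted sample with a smaller spread.

```python
# Illustrative sketch with hypothetical numbers: the same raw gain
# yields very different effect sizes depending on the spread of the
# sample used to compute the standard deviation.

def cohens_d(mean_gain, sd):
    """Cohen's d: mean gain divided by the standard deviation."""
    return mean_gain / sd

raw_gain = 5.0        # same raw improvement in test points for both samples
sd_full = 15.0        # SD in a broad, unrestricted student population
sd_restricted = 7.5   # SD in a range-restricted sample (e.g., only low achievers)

d_full = cohens_d(raw_gain, sd_full)
d_restricted = cohens_d(raw_gain, sd_restricted)

print(f"d (full range):       {d_full:.2f}")
print(f"d (restricted range): {d_restricted:.2f}")
```

Halving the standard deviation doubles the effect size with no change at all in what the students actually learned, which is why comparing effect sizes across studies with different sampling is so hazardous.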

Hattie's most recent defense is in an interview with Ollie Lovell in 2018 here. Hattie does try to answer the issues of range restriction and age of students but only on a superficial level.

Eacott (2018) comments about Hattie's defenses,
'Hattie did produce an antithesis to my original paper. His response made a counter argument to my claims. To do so, he did not need to engage with my ideas on anything but a superficial level (to think with Ryle, this is the wink all over – a focus on a few words without grasping their underlying generative meaning). There was no refutation of my claims. There was no testing of my ideas to destruction and no public accountability for his analysis. If anything, he simply dismissed my ideas with minimal reference to any evidence' (p6).
Lovell also posted a detailed review of Hattie's answers here and summarises:
'And so it was that I came to the conclusion that combining effect sizes from multiple studies, then using these aggregated effect sizes to try to determine ‘what works best’, equates to a category error. As I was reading back over this post, I asked myself the following, ‘Has an effect size ever helped me to be a better teacher?’ I honestly couldn’t think of an example that would enable me to answer ‘yes’ to this question. If you’re reading this, and you can answer this question with a ‘yes’ and an example,  please email me about it, I’m always open to having my mind changed.  But if for you, like me, the answer is ‘no’, then let’s agree to stop basing policy decisions, teacher professional development, or anything else in education upon comparisons of effect sizes. As both John and Adrian suggest, let’s focus on stories and mechanisms from now on.'
Profs Snook, Clark, Harker, Anne-Marie O’Neill and John O’Neill respond to Hattie's 2010 defense in 'Critic and Conscience of Society: A Reply to John Hattie' (p97),
'In our view, John Hattie’s article has not satisfactorily addressed the concerns we raised about the use of meta-analyses to guide educational policy and practice.'
Prof Arne Kåre Topphol responds to Hattie's defense,
'Hattie has now given a response to the criticism I made. What he writes in his comment makes me even more worried, rather than reassured.'
Darcy Moore posts,
'Hattie’s [2017] reply to Eacott’s paper does not even remotely grapple with the issues raised.'
Prof Eacott also responded to Hattie's defense,
'Disappointed that SLAM declined my offer to write a response to Hattie's reply to my paper. Dialogue & debate is not encouraged/supported.'
Eacott (2018) was able to publish a response in a different journal,
 'Disappointingly, Hattie's response was in my opinion, inadequate' (p4).
'given my argument for his work being Tayloristic (and supporting evidence), in what ways is his work beyond that of Taylor? Are there no commercial arrangements with ACEL? Is his work not highly influential in policy discussions despite questioning of the very foundations of his analysis? Is his name not deployed by politicians, systemic authorities and school leaders as an authority/authoritative source? If anything, what Hattie has inadvertently done is support my argument while attempting to refute it' (p6).
Professor Pierre-Jérôme Bergeron in his voicEd interview also talks about Hattie's conflict of interest and Hattie's reluctance to address the details of his critics. Listen here - at 17min 46sec.

Prof Dylan Wiliam casts significant doubt on Hattie's entire model by arguing that the age of the students and the time over which each study runs is an important component contributing to the effect size. 

Supporting Prof Wiliam's contention is the massive dataset collected to construct the United States Department of Education's effect size benchmarks. These benchmarks show a huge variation in effect sizes from younger to older students.

This demonstrates that age is a HUGE confounding variable or moderator: to compare effect sizes, studies need to control for the age of the students and the time over which the study ran. Otherwise, differences in effect size may simply reflect the age of the students measured!
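A minimal sketch makes the point concrete. The annual-gain figures below are rounded, illustrative values of the kind reported in the US benchmarking literature (e.g., Bloom, Hill, Black & Lipsey, 2008), not Hattie's own numbers: a year of normal schooling moves young students far more, in effect-size units, than it moves older students, so a fixed benchmark like d = 0.40 means completely different things at different ages.

```python
# Illustrative sketch: approximate annual reading gains (in effect-size
# units) by grade. Values are rounded and for illustration only.

annual_gain = {
    "grade 1": 1.5,   # young students gain well over 1 SD per year
    "grade 5": 0.4,
    "grade 11": 0.2,  # older students gain far less per year
}

hinge = 0.4  # Hattie's "hinge point" for a worthwhile intervention

for grade, gain in annual_gain.items():
    # The same d = 0.4 is a modest fraction of a grade-1 year's normal
    # growth, but a multiple of a grade-11 year's growth.
    print(f"{grade}: d = {hinge} is {hinge / gain:.0%} of a typical year's growth")
```

Under these illustrative numbers, an effect of 0.4 is unremarkable for first graders yet would represent years of normal progress for eleventh graders, which is exactly why a single hinge point cannot rank influences across age groups.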

Given Hattie's conclusion in his 2015 defense (p8),
'The main message remains, be cautious, interpret in light of the evidence, search for moderators, take care in developing stories, welcome critique, ...'
I'm extremely surprised Hattie has not addressed the massive implication of this evidence for his work. All he says in his summary in VL 2012 (p14) is:
'the effects for each year were greater in younger and lower in older grades ... we may need to expect more from younger grades (d > 0.60) than for older grades (d > 0.30).'
Hattie finally agrees (2015 defense, p3) with Prof Wiliam:
'Yes, the time over which any intervention is conducted can matter (we find that calculations over less than 10-12 weeks can be unstable, the time is too short to engender change, and you end up doing too much assessment relative to teaching). These are critical moderators of the overall effect-sizes and any use of hinge = 0.4 should, of course, take these into account.'
Yet Hattie DOES NOT take this into account: there has been no attempt to detail and report the time over which the studies ran or the age group of the students in question, nor to adjust his previous rankings or conclusions.

Professor Dylan Wiliam summarises, 
'the effect sizes proposed by Hattie are, at least in the context of schooling, just plain wrong. Anyone who thinks they can generate an effect size on student learning in secondary schools above 0.5 is talking nonsense.'
The U.S. Education Department's benchmark effect sizes support Wiliam's contention.
