A Year's Progress?

Hattie asserts that the average Effect Size (d) across his compiled studies is 0.40. He then equates this figure with One Year's Academic Growth.
"The d = 0.40 is what I referred to in Visible Learning as the hinge-point (or h-point) for identifying what is and what is not effective." (Hattie, 2012, p.3)
"d = 0.4 is what we can expect as growth per year on average." (Hattie, 2012, p. 14) & (Hattie presentation Melbourne Graduate School, 2011, @21minutes).
See Hattie's Claims for more detail about each of his other major claims.

How Reliable are these Claims?

These claims are all contingent on the Effect Size statistic (details about how it is calculated here); suffice it to say there are huge issues!
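
For readers unfamiliar with the statistic, here is a minimal sketch of the textbook Cohen's d calculation (a standardised mean difference using a pooled standard deviation). The function and the sample scores below are purely illustrative and are not drawn from any of Hattie's studies; as Kraft notes later in this piece, many of the underlying studies instead derive effect sizes from correlational designs, which is part of the problem.

```python
import statistics

def cohens_d(treatment, control):
    """Cohen's d: (mean of treatment - mean of control) / pooled standard deviation."""
    n_t, n_c = len(treatment), len(control)
    var_t = statistics.variance(treatment)  # sample variance (n - 1 denominator)
    var_c = statistics.variance(control)
    pooled_sd = (((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Made-up post-test scores for two small groups, purely for illustration
treatment_scores = [72, 75, 78, 80, 74, 77]
control_scores = [70, 73, 71, 76, 69, 72]
print(round(cohens_d(treatment_scores, control_scores), 2))
```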

David Didau provides an amusing but relevant discussion on The Unit Of Education and Hattie's Effect Sizes, using the classic movie Spinal Tap and the amplifier that goes to 11, not 10.

The Education Endowment Foundation (EEF) is one of the few organizations to have used the same method as Hattie - the meta-meta-analysis. However, it derives a very different scale, with an effect size of d = 0.1 equivalent to 1 month of progress. On that scale, Hattie's 0.4 would be 4 months of progress - not one year!
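
To make the inconsistency concrete, here is a back-of-the-envelope comparison of the two rules of thumb. The linear months-of-progress conversions below are only illustrative simplifications (neither organisation publishes a formula quite this crude), but they show how far apart the two scales sit for the same effect size:

```python
def months_of_progress_hattie(d):
    # Hattie: d = 0.40 is treated as one year (about 12 months) of growth
    return d / 0.40 * 12

def months_of_progress_eef(d):
    # EEF: d = 0.10 is treated as roughly one month of progress
    return d / 0.10

for d in (0.1, 0.2, 0.4):
    print(f"d = {d}: Hattie ~{months_of_progress_hattie(d):.0f} months, "
          f"EEF ~{months_of_progress_eef(d):.0f} months")
# The same d = 0.4 reads as a full year on Hattie's scale but only ~4 months on the EEF's
```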

This difference is very concerning since it breaks one of the foundations of the Scientific Method, Reliability - the degree of consistency of a measure. A test is reliable when it gives the same result repeatedly under the same conditions.

More concerning is Hattie's claim,
"I would go further and claim that those students who do not achieve at least a 0.40 improvement in a year are going backwards..." (VL, p. 250).
In terms of teacher assessment, he takes this one step further by declaring that teachers who do not attain an effect size of 0.40 are "below average" (Hattie, 2010, p. 86).

These claims are a major concern for several reasons: the Reliability issues stated above, and also Hattie's financial interest in a Teacher Assessment software program called e-asTTle and in performance pay.

He did, however, backtrack regarding the meaning of d = 0.40 in his VL 2012 summary publication,
"I did not say that we use this hinge point for making decisions, but rather we used it to start discussions" (p. 14).
Yet, as of July 2024, on Hattie's commercial website:
"The average effect size was 0.4, a marker that represented a year’s growth per year of schooling for a student. Anything above 0.4 would have a greater positive effect on student learning."
However, also on this website, in his publication "Real Gold vs. Fool's Gold", Hattie appears to backtrack from these claims about an effect size (d) of 0.40,
"But we must not get too oversold on using d = 0.40 in all circumstances. The interpretation can differ in light of how narrow (e.g., vocabulary) or wide (e.g., comprehension) the outcome is, the cost of the intervention (Simpson, 2017) ...When implementing the Visible Learning model, it is worth developing local knowledge about what works best in the context and not overly rely on 0.40. The 0.40 merely is the average of all 1,600 meta-analyses..." (p. 14)
Peer Reviews Question Hattie's claim that d = 0.40 is Equivalent to 1 Year's Progress

Thibault (2017), in "Is John Hattie's Visible Learning So Visible?", states (translated to English),
"...regarding the order of magnitude of the effect, Hattie (2009) gives little detail. He explains summarily, an effect of d = 0.4 is roughly equivalent to the progression of a student in one year when an effect of d = 1 is roughly equivalent to the progression of 2 or 3 school years. 
Though this assertion is debatable as mentioned by Proulx (2017), the lack of detail in Hattie also raises questions about these associations between magnitude of effect and duration. 
For example, ... is an effect of d = 0.2 equivalent to the progression of a pupil over a half-year school?
Also, are the effects accounted for in the meta-analyses over the same duration and, where appropriate, how to compare the effect of an intervention of a few weeks or months versus an intervention on a full year? 
These questions, for me, remain vibrant on reading all these averages and effects."
Wecker et al. (2017) also question the notion that an effect of 0.4 corresponds to a year's progress:
"An observed effect size of, for example, 0.3 would obviously hardly correspond to the magnitude of the increase in competence over the course of a school year." (p. 33)
Kraft (2021), in his discussion with Hattie, says,
"I just think the 0.40 threshold is the wrong one. Of the almost 2,000 effect sizes I analyzed from RCTs examining the effects of educational interventions on standardized student achievement, only 13 percent were 0.40 or larger...
I would argue that the hinge point of 0.40 sets up education leaders to have unrealistic expectations about what is possible and what is meaningful. The scale of data you draw on using thousands of meta-analyses is incredibly impressive. But it also means that this average effect size of 0.40 pools across studies of widely variable quality.

Correlational studies with misleadingly strong associations likely dominate the data... Publication bias, where lots of studies of ineffectual programs are never written or published, further strengthens my suspicion that the 0.40 hinge point is too large."
THE NEED FOR BENCHMARK EFFECT SIZES:

Cohen (1988) hesitantly defined effect sizes as "small, d = 0.2", "medium, d = 0.5", and "large, d = 0.8", stating that "there is a certain risk inherent in offering conventional operational definitions for those terms for use in power analysis in as diverse a field of inquiry as behavioral science" (p. 25).
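
As a rough reference point, here is a tiny sketch of Cohen's conventional labels (bearing in mind his own caveat above that they are generic conventions, not education-specific benchmarks). On this scale Hattie's 0.40 hinge point sits between 'small' and 'medium':

```python
def cohen_label(d):
    """Cohen's (1988) tentative conventions for a standardised mean difference."""
    if d >= 0.8:
        return "large"
    if d >= 0.5:
        return "medium"
    if d >= 0.2:
        return "small"
    return "below Cohen's 'small' convention"

for d in (0.2, 0.4, 0.5, 0.8):
    print(d, cohen_label(d))  # 0.4 falls between 'small' and 'medium'
```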

Bloom et al. (2007) argue that,
"there is no universal guideline or rule of thumb for judging the practical importance or substantive significance of a standardized effect size estimate for an intervention. Instead one must develop empirical benchmarks of comparison that (see table of US benchmarks below) reflect the nature of the intervention being evaluated, its target population, and the outcome measure or measures being used. We apply this approach to the assessment of effect size measures for educational interventions designed to improve student academic achievement." (abstract)
Lipsey et al. (2012) state,
"Cohen’s broad categories of small, medium and large are clearly not tailored to the effects of intervention studies in education, much less any specific domain of education interventions, outcomes, and samples. Using those categories to characterize effect sizes from education studies, therefore, can be quite misleading. It is rather like characterizing a child’s height as small, medium, or large, not by reference to the distribution of values for children of similar age and gender, but by reference to a distribution for all vertebrate mammals." (p. 12)
The United States Department of Education has commissioned a more detailed study of effect size benchmarks for K-12, using national testing across the USA of around 50 million students (table of results below from Lipsey et al., 2012, p. 28):


Hattie acknowledged these results in his VL 2012 summary (p. 14), but used them to justify his "hinge point" of d = 0.40, saying,
"the effects for each year were greater in younger and lower in older grades ... we may need to expect more from younger grades (d > 0.60) than for older grades (d > 0.30)."
Hattie's modest adjustment fails to account for the substantial age-related differences in educational outcomes. Furthermore, his analysis overlooks the inclusion of adult learners, such as university students and established professionals (e.g., medical practitioners), in many of his meta-analyses. This oversight affects areas like 'self-report grades', 'problem-based learning', and 'worked examples'.

The substantial age-related variation acts as a critical moderator in Hattie's methodology of effect size comparison. Differences observed between two educational influences might be primarily attributable to the age of the students being studied, rather than the interventions themselves.

Steiner (2021) also draws on Lipsey et al. (2012) and emphasizes the need to know the age of the students in the study,
"...few people I have spoken with outside the research community understand that an effect size, all on its own, doesn’t tell you whether you should be impressed by an educational intervention. You also have to know the intended grade-level. For instance, an intervention with an effect size of 0.2 wouldn’t make a significant difference to students in kindergarten, but it would make a huge difference for 11th graders."
Further, Professor Dylan Wiliam has identified that meta-analyses need to control for the age of the students and the period over which the study is conducted. Hattie does NOT do this.

Wiliam goes further and says that if this is not done, the results are 'GARBAGE'.



Hattie (2015) finally admitted that the time over which an intervention runs is critical,
"...the time over which any intervention is conducted can matter (we find that calculations over less than 10-12 weeks can be unstable, the time is too short to engender change, and you end up doing too much assessment relative to teaching). These are critical moderators of the overall effect-sizes and any use of hinge=.4 should, of course, take these into account."
Yet Hattie DOES NOT take this into account; there has been no attempt to detail and report the time over which the studies ran, nor the age group of the students in question. Clear examples are the studies Hattie used in the category of "Feedback".

Also, the landmark US study goes on to state:
"The usefulness of these empirical benchmarks depends on the degree to which they are drawn from high-quality studies and the degree to which they summarise effect sizes with regard to similar types of interventions, target populations, and outcome measures."
and also defined the criteria for accepting a research study (i.e., the quality needed):
  • Search for published and unpublished research dated 1995 or later.
  • Specialised groups such as special education students, etc. were not included.
  • Also, to ensure that the effect sizes extracted from these reports were relatively good indications of actual intervention effects, studies were restricted to those using random assignment designs (that is method 1 as explained in effect sizes) with practice-as-usual control groups and attrition rates no higher than 20% (p. 33).
NOTE: using these criteria, few of the 800+ meta-analyses in VL would pass the quality test!

The What Works Clearinghouse (WWC)

Because of these concerns, the United States' largest educational evidence assessor, the What Works Clearinghouse, has shifted its attention away from Effect Sizes.




6 comments:

  1. Hi, I know I'm late to the party - great article that has really balanced some of the 'propaganda'. Look forward to working my way through further articles. John

    Replies
    1. Thanks John, glad you took the time to read. Hopefully more teachers will educate themselves and stop this over-reliance on the so-called 'evidence' gurus.

  2. Hi, I know I'm also late to the party, but I'd like to thank you for putting this together. It helped me out a great deal as an aspiring teacher and is continuing to do so after moving over to the field of scientific research.

    The clarity of the arguments you put forward has allowed me to take this attitude of scepticism about using data simply because it's there and convenient into the field of biology, and it has provided a huge advantage over many of my contemporaries.
    I see this work as being of the utmost importance, as the lessons I've learned here are wide-reaching; more than that, they have the potential to revolutionise the way people think about learning, and there's no telling where that could take us.
    Cheers

  3. Thanks Joseph, hope this helped. Best wishes in your teaching journey.

  4. Effect size works, is the test given covers a scope of learning / years rather than a year of mastery

    Replies
    1. Not sure what you mean - could you explain further?
