A Year's Progress?

Since all of Hattie's CLE calculations were shown to be incorrect, he has changed focus to promoting an effect size of d = 0.40 as the 'hinge point' for identifying what is and is not effective, claiming it is equivalent to advancing a child's achievement by one year (VL 2012 summary, p3).
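For readers unfamiliar with the metric: the effect size here is Cohen's d, the difference between two mean scores divided by a pooled standard deviation. A minimal Python sketch of the calculation follows; the test scores are invented purely for illustration and are not NAPLAN or Visible Learning data.

    # Cohen's d = (mean post-test - mean pre-test) / pooled standard deviation.
    # Hattie's claim is that d = 0.40 over a school year equals one year's progress.
    # The scores below are hypothetical, chosen only to illustrate the arithmetic.
    import statistics as st

    start_of_year = [48, 52, 55, 60, 63, 45, 58, 50]   # hypothetical pre-test scores
    end_of_year   = [50, 54, 58, 63, 66, 47, 61, 53]   # hypothetical post-test scores

    mean_gain = st.mean(end_of_year) - st.mean(start_of_year)
    pooled_sd = ((st.variance(start_of_year) + st.variance(end_of_year)) / 2) ** 0.5
    d = mean_gain / pooled_sd

    print(f"effect size d = {d:.2f}")   # ~0.41 here, just over the 0.40 'hinge point'

Note that everything hinges on the standard deviation in the denominator: the same raw gain produces a very different d depending on how spread out the group's scores are.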

Hattie says, "I would go further and claim that those students who do not achieve at least a 0.40 improvement in a year are going backwards..." (p250). This interpretation is a major concern for a number of reasons, not the least of which is Hattie's financial interest in teacher assessment programs and performance pay.

He did, however, backtrack in his summary VL 2012 publication: "I did not say that we use this hinge point for making decisions, but rather we used it to start discussions" (p14).

Using the Australian NAPLAN data, an average of d = 0.40 is obtained for the junior years (Yr 3, 5, 7 & 9). However, the United States Department of Education has commissioned a more detailed study of effect size benchmarks for K-12, using national testing data across the USA (table of results below):

Translating the Statistical Representation of the Effects of Education Interventions Into More Readily Interpretable Forms (2012). U.S. Department of Education, National Center for Special Education Research, Institute of Education Sciences.


Hattie acknowledges these results in his summary VL 2012 publication, but uses them to justify his 'hinge point' of d = 0.40, saying, "the effects for each year were greater in younger and lower in older grades ... we may need to expect more from younger grades (d > 0.60) than for older grades (d > 0.30)" (p14).

So Hattie's minor adjustment misses the HUGE variation from younger to older students. He also does not address the use of much older college-level students and practising professionals, such as doctors and nurses, in many of his meta-analyses, e.g., 'self-report grades', 'problem-based learning', 'worked examples', etc.

This HUGE variation is a major confounding variable in Hattie's method of comparing effect sizes: the difference between two influences could simply be due to the ages of the students being measured, not to the influences themselves.
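A small sketch of why this is a problem. The annual-growth figures below are placeholders, not the published benchmarks from the US study above; the point is simply that an identical benefit, measured in different age groups, falls on different sides of Hattie's 0.40 hinge.

    # Illustrative only: typical annual growth (in effect-size units) shrinks
    # sharply with age, so the same 'extra half a year of learning' yields a
    # large d for young children and a small d for older students.
    # The annual_growth_d values are placeholders, not the published benchmarks.
    annual_growth_d = {
        "Year 1": 1.5,    # placeholder: young students move a lot per year
        "Year 5": 0.6,    # placeholder
        "Year 10": 0.2,   # placeholder: older students move far less per year
    }

    extra_years_of_learning = 0.5   # suppose an intervention adds half a year's learning

    for year_level, growth in annual_growth_d.items():
        observed_d = extra_years_of_learning * growth
        verdict = "above" if observed_d > 0.40 else "below"
        print(f"{year_level}: observed d = {observed_d:.2f} ({verdict} the 0.40 hinge)")

The same half-year of extra learning looks like a clear success in Year 1 and a failure in Year 10, which is exactly the confound described above.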

Further, Professor Dylan Wiliam has also identified that meta-analyses need to control for the time period over which each study is conducted. Hattie does NOT do this.

This landmark US study goes on to state:
"The usefulness of these empirical benchmarks depends on the degree to which they are drawn from high-quality studies and the degree to which they summarise effect sizes with regard to similar types of interventions, target populations, and outcome measures." 

and also defined the criteria for accepting a research study (i.e., the quality needed):
  • Search for published and unpublished research dated 1995 or later.
  • Specialised groups such as special education students, etc. were not included.
  • Also, to ensure that the effect sizes extracted from these reports were relatively good indications of actual intervention effects, studies were restricted to those using random assignment designs (that is, method 1 as explained in effect sizes), with practice-as-usual control groups and attrition rates no higher than 20% (p33).

NOTE: using these criteria, virtually NONE of the 800+ meta-analyses in VL would pass the quality test!