Effect Size

'If you torture the data long enough, it will confess.' Ronald Coase

'Stats are like bikinis, they don't reveal everything.' Bobby Knight

The effect size statistic (d) is borrowed from the medical model and measures the effect of a 'treatment'. For Hattie, a 'treatment' is an influence that causes an effect on student achievement. The effect size (d) is equivalent to a 'Z-score' of a standard normal distribution. For example, an effect size of 1 means that the score of the average person in the experimental (treatment) group is 1 standard deviation above that of the average person in the control group (no treatment).
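As a concrete illustration, here is a minimal sketch (in Python, using made-up scores rather than data from any study in Visible Learning) of how Cohen's d is computed and how, treating d like a Z-score, it translates into the 'average student' comparison described above:

```python
import statistics
from math import erf, sqrt

# Purely illustrative scores - not from any study in Visible Learning
treatment = [72, 75, 78, 80, 83, 85, 88]
control   = [65, 68, 70, 72, 75, 77, 80]

def cohens_d(group_a, group_b):
    """Mean difference divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

d = cohens_d(treatment, control)

# Treating d as a Z-score: the proportion of the control group that the
# average treated student would exceed, assuming normal distributions
percentile = 0.5 * (1 + erf(d / sqrt(2)))

print(f"d = {d:.2f}; the average treated student exceeds "
      f"{percentile:.0%} of the control group")
```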

The medical model insists on random assignment of patients to a control or experimental group, as well as 'double blinding': neither the patients in the control or experimental groups nor the staff know who is getting the treatment. This is done to remove the effect of moderating variables. In addition, educational experiments need to control for the age of the students and the time period over which the study runs (see A Year's Progress).

Few of the studies that Hattie cites use random allocation, double blinding, or control for the age of students or the time over which the study runs. This casts significant doubt on the validity and reliability of his synthesis.


Hattie states that the effect size (d) is calculated by either the random method or the fixed method (p8):
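In rough terms, method 1 compares a treatment group with a separate control group, while method 2 compares the same group before and after the treatment. Here is a minimal sketch with purely hypothetical numbers; it is not a reproduction of Hattie's formulas from p8:

```python
def effect_size_between_groups(mean_treatment, mean_control, sd):
    """Method 1: compare a treatment group with a separate control group."""
    return (mean_treatment - mean_control) / sd

def effect_size_pre_post(mean_post, mean_pre, sd):
    """Method 2: compare the same group before and after the intervention."""
    return (mean_post - mean_pre) / sd

# Hypothetical numbers: the arithmetic is identical, but only method 1
# involves a control group that did not receive the treatment.
print(effect_size_between_groups(75, 70, 10))  # 0.5
print(effect_size_pre_post(75, 70, 10))        # 0.5 - no control group here
```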


Professor Pierre-Jérôme Bergeron (2017) points out,
'These two types of effects are not equivalent and cannot be directly compared... A statistician would already be asking many questions and would have an enormous doubt towards the entire methodology in Visible Learning and its derivatives.'
Prof Robert Slavin also discusses in detail the difference between the two methods here.

Yet, Hattie states (p12), 
'the random model allows generalisations to the entire research domain whereas the fixed model allows an estimate.'
Bergeron (2017) adds,
'in addition to mixing multiple and incompatible dimensions, Hattie confounds two distinct populations: 
1) factors that influence academic success and 
2) studies conducted on these factors.'
Prof Robert Slavin, who contributed 7 meta-analyses to VL, also comments on Hattie's lack of consistency in the way effect sizes are calculated,
'Hattie includes literally everything in his meta-meta analyses, including studies with no control groups, studies in which the control group never saw the content assessed by the posttest, and so on.'
Slavin explains the need for a method to adjust effect sizes for the different numbers of students tested in each study; this,
'removes a lot of the awful research that gives Hattie the false impression that everything works, and fabulously.'
Poulsen (2014) also identifies that Hattie often uses studies that do not have control groups,
'It is not apparent whether the many effect studies in general used control groups. Control groups are mentioned, but in what sense were they actually comparable with the trial groups? If not, not much can be concluded about learning outcomes' (p3, translated from Danish).
Nielsen and Klitmøller (2017) concur,
'The meta-analyses ... do not have uniform standards for how they measure the effect. In many meta-analyses, the studies measuring the effect do not involve the use of control groups' (p4, translated from Danish).
Also, Lervåg & Melby-Lervåg (2014),
'If you do not have a control group, the effect size will be calculated only on the basis of performance on the assessment before and after the intervention. The effect size will then be artificially high without this being an accurate picture. An example of this from Hattie's book is that vocabulary programs come out with a very high effect size.'
Simpson (2017) and Bergeron (2017) give examples of how the same effect, depending on how you define the control and experimental groups, can give effects ranging from 0 to infinity!

As a result, Simpson (2017) calls into question Hattie's entire use of effect size comparisons,
'standardised effect size is a research tool for individual studies, not a policy tool for directing whole educational areas. These meta-meta-analyses which order areas on the basis of effect size are thus poor selection mechanisms for driving educational policy and should not be used for directing large portions of a country’s education budget' (p463).
You can listen to Prof Simpson's detailed podcast here and Prof Bergeron's podcast here.

Cheung & Slavin (2016), in How methodological features affect effect sizes in education, support Simpson's contention that methodology affects the effect size,
'The findings suggest that effect sizes are roughly twice as large for published articles, small-scale trials, and experimenter-made measures, than for unpublished documents, large-scale studies, and independent measures, respectively. In addition, effect sizes are significantly higher in quasi-experiments than in randomized experiments.'
Professor Bergeron also identifies that Hattie often does not use either of the above 2 methods but rather correlation, without any qualification or explanation - see below.

The U.S. Department of Education states that the methodological standards for studies have achieved considerable professional consensus across education and other disciplines (p19).


A summary of these standards can be seen below; correlation studies DO NOT meet these standards.

Simpson (2017) is similarly critical and makes a strong argument that the benchmarks from which the effect size is calculated are different across studies and can be manipulated by different designs. Therefore effect sizes should only be compared in the most stringent of circumstances.
'...identical interventions can lead to dramatically different standardised mean differences. By contrasting them with different comparison groups, by measuring them on samples selected with a measure which correlates with the outcome measure, by increasing test length or tightening the focus of the test on the intervention, the difference becomes clearer, but it does not mean that the intervention gives more ‘educational bang for their buck’ (Higgins et al. 2013, 3).  
If one wishes to make judgements about more or less effective educational interventions, then studies must use the same comparisons, measures and range of participants' (p463).
'As such, using these ranked meta-meta-analyses to drive educational policy is misguided' (p451).
Prof Simpson continues,
'while calculating an effect size may be simple enough for a first course in statistics, there are considerable subtleties in understanding it sufficiently well to ensure that the processes of combining effect sizes in meta-analyses allows valid conclusions to be drawn' (p452).
Simpson details three subtle areas which show why comparing effect sizes from different studies is unreliable (p454):

a. how experimental and control groups are chosen;
b. the range of the population from which the sample is taken;
c. the design of the achievement test.

These plus other issues will be discussed in the range of problems addressed below.

Problem 1. Hattie mostly uses correlation studies, not true experiments:

Hattie admits if you mix the above two methods up you have significant problems interpreting your data,
'combining or comparing the effects generated from the two models may differ solely because different models are used and not as a function of the topic of interest.' 
He goes on to say that he mostly uses method 2 - the fixed model (p12).

However, even though Hattie takes the time to explain the above two methods, and the issue if you mix them up, most of the meta-analyses in VL do NOT use randomised control groups, as in method 1, nor before and after treatment means, as in method 2, but rather some form of correlation which is later morphed into an effect size!

In his updated version of VL 2012 (summary) he once again emphasises he mostly uses method 1 or 2 above. Again, he makes no mention of using the weaker methodology of correlation (p10).

Terry Wrigley (2018), in The power of ‘evidence’: Reliable science or a set of blunt tools?, discusses the problem of Hattie's correlations, quoting Hubert and Wainer (2013: 119),

'One might go so far to say that if only the value of rXY is provided and nothing else, we have a prima facie case for statistical malpractice' (p365).
Professor Bergeron also highlights this issue:
'Hattie confounds correlation and causality when seeking to reduce everything to an effect size. Depending on the context, and on a case by case basis, it can be possible to go from a correlation to Cohen’s d (Borenstein et al., 2009):

d = 2r / √(1 − r²)

but we absolutely need to know in which mathematical space the data is located in order to go from one scale to another. This formula is extremely hazardous to use since it quickly explodes when correlations lean towards 1 and it also gives relatively strong effects for weak correlations. A correlation of .196 is sufficient to reach the zone of desired effect in Visible Learning...
It is with this formula that Hattie obtains, among others, his effect of creativity on academic success (Kim, 2005), which is in fact a correlation between IQ test results and creativity tests. It is also with correlations that he obtains the so-called effect of self-reported grades, the strongest effect in the original version of Visible Learning. However, this turns out to be a set of correlations between reported grades and actual grades, a set which does not measure whatsoever the increase of academic success between groups who use self-reported grades and groups who do not conduct this type of self-examination.'
I've created an example of the problem with correlation here using a class of 10 students.


A moderate correlation of r = 0.69 gets converted into one of the largest effect sizes in Hattie's book of d = 1.91 - this would rank #1 on Hattie's list. 


A weak correlation of r = 0.29 gets converted into an effect size of d = 0.61 - this would rank #20.
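The conversion at work here is the Borenstein et al. formula quoted above, d = 2r / √(1 − r²). This short sketch (Python) reproduces the figures used on this page and shows how quickly modest correlations become very large 'effects':

```python
from math import sqrt

def r_to_d(r):
    """Convert a correlation r to Cohen's d (Borenstein et al., 2009)."""
    return 2 * r / sqrt(1 - r ** 2)

for r in (0.196, 0.29, 0.69):
    print(f"r = {r:.3f}  ->  d = {r_to_d(r):.2f}")

# r = 0.196  ->  d = 0.40  (just enough to reach Hattie's 'zone of desired effects')
# r = 0.290  ->  d = 0.61
# r = 0.690  ->  d = 1.91  (near the top of Hattie's rankings)
```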

Blichfeldt (2011) on Hattie's correlation,
'correlations or correspondence do not provide grounds for causation. Hattie mentions that correlations should not be confused with causal analyses. It is striking that the book is first and foremost presented so that it is easily read as causal analysis, of "what works" or what leads to good test results, and that he then ranks the 138 variables accordingly - as a list of disconnected factors.'
NOTE: Correlation studies do not satisfy The U.S. Department of Education's experimental design standards - see below.

Also, many of the scholars that Hattie cites comment on this problem:

DuPaul & Eckert (2012, p408) - behaviour:

'randomised control trials are considered the scientific "gold standard" for evaluating treatment effects... the lack of such studies in the school-based intervention literature is a significant concern.'
Kelley & Camilli (2007, p33) - Teacher Training: Studies use different scales (not linearly related) for coding identical amounts of education. This limits confidence in the aggregation of the correlational evidence.

Studies inherently involve comparisons of nonequivalent groups; often random assignment is not possible. But, inevitably, this creates some uncertainty in the validity of the comparison (p33).

The correlation analyses are inadequate as a method for drawing precise conclusions (p34).

Research should provide estimates of the effects via effect size rather than correlation (p33).

Breakspear (2014, p13) states,

'Too often policy makers fail to differentiate between correlation and causation.'
Blatchford (2016, p94) commenting on Hattie's class size research,
'Essentially the problem is the familiar one of mistaking correlation for causality. We cannot conclude that a relationship between class size and academic performance means that one is causally related to the other.'
We are constantly warned that correlation does not imply causation! Yet, Hattie confesses: 
'Often I may have slipped and made or inferred causality' (p237).
Prof Georg Lind (2013) also questions Hattie about his use of correlation and accuses Hattie of displaying the correlation rather than the effect size when it suits him. This makes the effect appear small, but when converted it is large. The example Lind gives is VL, p197, where Hattie cites r = 0.67 for kinesthetic learning, which when converted gives d = 1.81. This is a huge effect! But Hattie rejected this study,
'It is difficult to contemplate that some of these single influences... explain more of the variance of achievement that so many of the other influences in this book' (p197). 
The beneficial effects of ice-cream on intelligence - a delicious correlation (r = 0.7): 

Converting r to an effect size gives d = 1.96! Larger than Hattie’s best intervention so far!



From: https://www.economist.com/blogs/graphicdetail/2016/04/daily-chart

Problem 2. Student Achievement is measured in different ways or not at all - A VALIDITY problem!
The effect size should measure the change in student achievement, but this is measured in many different ways or often not at all. For example, one study measured IQ while another measured hyperactivity. So comparing these effect sizes is the classic 'apples versus oranges' problem.

Blichfeldt (2011),
'We also get no information about how "learning outcomes" are defined or measured in the studies at different levels, what tests are used, which subjects are tested and how.'
Prof Adrian Simpson (2017) details the problem of different measures of student achievement,
'Using unequal comparisons or using unspecified ones makes it impossible to compare or combine effect sizes meaningfully... 
the experimental condition in some studies and meta-analyses is the comparison condition in others' (p455).
Simpson also gives examples (p461) of specific tests designed for a particular influence, e.g., improving algebra skills, resulting in a 40% higher effect size than a standardised test on the same students!

Also, tests with more questions are more likely to result in higher effect sizes. Simpson gives examples of tests that give 400% higher effect sizes (p462).

Many of the scholars that Hattie used also comment on this problem;

DuPaul & Eckert (2012, p408),

'It is difficult to compare effect size estimates across research design types. Not only are effect size estimates calculated differently for each research design, but there appear to be differences in the types of outcome measures used across designs.'
Kelley & Camilli (2007, p7),
'[M]ethodological variations across the studies make it problematic to draw coherent generalisations. These summaries illustrate the diversity in study characteristics including child samples, research designs, measurement, independent and dependent variables, and modes of analysis.'
Wrigley (2018) also discusses the problem of control groups with regard to Hattie's work,
'should the control group experience the absence of the practice being trialled, or simply ‘business as usual’? 
This ambiguity concerning the control group can seriously distort attempts to calculate an ‘effect size’.
We do not learn whether teachers and teaching assistants in the control group had any access to training comparable to that of the treatment group, whether they also taught small classes, or what ‘business as usual’ actually involved' (p363-364).
Dr. Jonathan Becker in his critique of Marzano (but relevant for Hattie) states,
'Marzano and his research team had a dependent variable problem. That is, there was no single, comparable measure of 'student achievement' (his stated outcome of interest) that they could use as a dependent variable across all participants. I should note that they were forced into this problem by choosing a lazy research design [a meta-analysis]. A tighter, more focused design could have alleviated this problem.'
Problem 3. Invalid beginning and end of treatments (Method 2):

Hattie re-interprets many meta-analyses that don't use a beginning/end of treatment methodology. The behaviour influences contain a lot of examples:

Reid et al. (2004, p132) compared the achievement of students labelled with 'emotional/behavioural' disturbance (EBD) with a 'normative' group. They used a range of measures to determine EBD, e.g., students who are currently in programs for severe behaviour problems, such as psychiatric hospitals.

The negative effect size indicates the EBD group performed well below the normative group. The authors conclude:

'students with EBD performed at a significantly lower level than did students without those disabilities across academic subjects and settings' (p130).
Hattie interprets the EBD group as the end of the treatment group and the normative as the beginning of the treatment group. Hattie concludes that decreasing disruptive behaviour, with d = -0.69, decreases achievement significantly. This was NOT the researcher's interpretation (p133).

Yet, when Hattie uses Frazier et al (2007), the control and experimental groups are reversed: the ADHD group was the control group and the normative group was the experimental group. This then gives positive effect sizes (p51), which Hattie interprets as improving academic achievement!

Another example uses the influence of 'self-report grades'; Falchikov and Boud (1989, p416),

'Given that self-assessment studies are, in most cases, not “true” experiments and have no experimental or control groups… staff markers were designed as the control group and self-markers the experimental group.'
So in this instance, a large effect size means the students overestimate their ability compared to staff assessment - not, as Hattie interprets it, that self-assessment improves or influences achievement.

Problem 4. Controlling for other variables:

Related to Problem 1: research designers usually put a lot of thought into controlling for other variables. Random assignment and double blinding are the major strategies used. Unfortunately, most of the studies Hattie cites do not use these strategies, which introduces major moderating variables into the studies. Class size is a good example: many studies compare the achievement of small versus large classes in schools, but many schools assign lower-achieving students to smaller classes rather than using random assignment.

Thibault (2017), in Is John Hattie's Visible Learning so visible?, gives other examples (English translation),

'a goal of the mega-analyses is to relativise the factors of variation that have not been identified in a study, in a way balancing out the extreme data influenced by uncontrolled variables. But by combining all the data as well as the particular context that is associated with each study, we eliminate the specificities of each context, which for many give meaning to the study itself! We then lose the richness of the data and the meaning of what we try to measure.
It even happens that this brings together results that are deeply different, even contradictory in their nature.

For example, the source of the feedback remains risky, as explained by Proulx (2017), given that Hattie (2009) claims to have realised that the feedback comes from the student and not from the teacher, yet it is no less certain that his analysis focused on feedback from the teacher.
It is right to question this way of doing things, since quantitative studies seek to control variables to isolate the effect of each. When data from different studies are combined, this attempt to control the variables is annihilated. Indeed, these studies have not necessarily sought to control the same variables in the same way; they have probably used different instruments and been carried out with populations that are difficult to compare. So these combinations are not just uninformative, they significantly skew the meaning.'
Nielsen & Klitmøller (2017) in 'Blind spots in Visible Learning - Critical comments on "Hattie revolution"', discuss the problem of Hattie not addressing moderating factors, the interaction of factors and the disparate operational definitions of different studies, 
'it is our assessment that in four of the five "heaviest" surveys mentioned in connection with Hattie's coverage of feedback, it is conceptually unclear whether they operate with a feedback concept that is identical with Hattie's' (p11, translated from Danish).
Blichfeldt (2011),
'How vaguer variables can validly be put into precise calculations seems problematic...
...he allows a very low degree of precision as to which variables are included in the calculations, what may be expected and how results can be understood. At the same time, he uses calculations and statistics that should require a precision and control that it is hard to find support for. This does not prevent him from presenting results as very precise, with two decimal places...
What he studies are summarised statistical relationships between unclear variables and skill tests.'
Claes Nilholm (2013) in It's time to critically review John Hattie confirms this problem,
'Hattie's major failure is to report summary measures from meta-analyses without taking into account so-called moderating factors. Working methods can work better for a particular subject, a certain grade, some students and so on. Hattie believes that the significance of such moderating factors is less than one might think. I would argue that they are often very noticeable, as in the examples I reported [see problem-based learning and inductive teaching]. Unless such moderating factors are taken into account, direct generalisations will be misleading' (p3).
Allerup (2015) in 'Hattie's use of effect size as the ranking of educational efforts', calls for a more sophisticated multivariate analysis,
'it is well known that analyses in the educational world often require the involvement of multidimensional (multivariate) analyses' (p8).
Hattie rarely acknowledges this problem now, but in earlier work, Hattie & Clifton (2004, p320), Identifying Accomplished Teachers, stated:
'student test scores depend on multiple factors, many of which are out of the control of the teacher.'
Another pertinent example is from Kulik and Kulik (1992) - see ability grouping:

Two different methods produced distinctly different results. Each of the 11 studies with same-age control groups showed greater achievement for the accelerated students; the average effect size in these studies was 0.87.

However, if the (usually one year older) students they are accelerated into are used as the control group, the average effect size in the 12 studies was 0.02. Hattie uses this figure in the category 'ability grouping for gifted students'.

Hattie does not include the d = 0.87. I think a strong argument can be made that the result d = 0.87 should be reported instead of the d = 0.02 as the accelerated students should be compared to the student group they came from (same age students) rather than the older group they are accelerating into.

The Combination of Influences:

In addition, a study may be measuring the combination of many influences. For example, in a class size study, how do you remove other influences such as time on task, motivation, behaviour, teacher subject knowledge, feedback, home life, welfare, etc.?

Nielsen & Klitmøller (2017) discuss this problem in detail.

But, Hattie wavers on this major issue. In his commentary on 'within-class grouping' about Lou et al (1996, p94) Hattie does report some degree of additivity,
'this analysis shows that the effect of grouping depends on class size. In large classes (more than 35 students) the mean effect of grouping is d = 0.35, whereas in small classes (less than 26 students) the mean effect is d = 0.22.'
But in his summary, he states, 
'It is unlikely that many of the effects reported in this book are additive' (p256).
Problem 5. Effect Size Calculation Can Vary Significantly Depending on the Standard Deviation chosen.

Prof Gene Glass, the inventor of the meta-analysis, who Hattie quotes regularly, warned of this problem in his seminal paper, Integrating Findings: The Meta-Analysis of Research (1977).

Glass shows that since the effect size is calculated by dividing by the standard deviation (see formulas above) the standard deviation that is chosen can change the effect size in a significant way!

Glass gives this example (p370):
'The definition of ES appears uncomplicated, but heterogeneous group variances cause substantial difficulties. Suppose that experimental and control groups have means and standard deviations as follows:
The measure of experimental effect could be calculated either by use of Se or Sc or some combination of the two, such as an average or the square root of the average of their squares or whatever. The differences in effect sizes ensuing from such choices are huge:
The third basis of standardization—the average standard deviation—probably should be eliminated as merely a mindless statistical reaction to a perplexing choice. It must be acknowledged that both the remaining 1.00 and 0.20 are correct; neither can be ruled out as false... However, the control group mean is only one-fifth standard deviation below the mean of the experimental group when measured in control group standard deviations; thus, the average experimental group subject exceeds 58 percent of the subjects in the control group. These facts are neither contradictory nor inconsistent; rather they are two distinct features of a finding which cannot be captured by one number.'
Note: a few years after Gene Glass wrote this, Cohen (1988) added another method of calculating the standard deviation - the 'pooled standard deviation', which averages the variances first and then takes the square root. This seems to be the accepted method now, and using it here would give d = 0.39.

As can be seen in this example, the effect size can be 0.20, 0.33, 0.39 or 1.00 for the same data!
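To make the arithmetic concrete, here is a small sketch (Python) using hypothetical group statistics, not Glass's original numbers, showing how the same mean difference yields different effect sizes depending on which standard deviation is chosen as the denominator:

```python
from math import sqrt

# Hypothetical summary statistics (not Glass's original example)
mean_e, sd_e, n_e = 55.0, 4.0, 30   # experimental group
mean_c, sd_c, n_c = 50.0, 10.0, 30  # control group

diff = mean_e - mean_c

d_control = diff / sd_c                        # Glass's delta: control-group SD
d_exper   = diff / sd_e                        # experimental-group SD
d_average = diff / ((sd_e + sd_c) / 2)         # simple average of the SDs
d_pooled  = diff / sqrt(((n_e - 1) * sd_e**2 + (n_c - 1) * sd_c**2)
                        / (n_e + n_c - 2))     # Cohen's pooled SD

print(f"control SD: {d_control:.2f}, experimental SD: {d_exper:.2f}, "
      f"average SD: {d_average:.2f}, pooled SD: {d_pooled:.2f}")
# control SD: 0.50, experimental SD: 1.25, average SD: 0.71, pooled SD: 0.66
```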

If comparing effect sizes across studies, as Hattie does, then Gene Glass warns,
'If some attempt is not made to deal with this problem, a source of inexplicable and annoying variance will be left in a group of effect-size measures' (p372).
Hattie does not do this.

A subset of this problem is sampling students from small or abnormal populations:

This is a well-known issue for meta-analyses for a number of reasons: effect sizes are erroneously larger (due to a smaller standard deviation) and moderating variables are exacerbated. Using such samples makes it invalid to generalise influences to the broader student population.

Professor Dylan Wiliam explains:



Simpson (2017) details this problem,
'Researchers can make legitimate design decisions which alter the standard deviation and thus report very different effect sizes for identical interventions. One such design decision is range restriction' (p456).
Simpson then insightfully explains that sampling from smaller populations is a major reason why effects for influences such as feedback, meta-cognition, etc are high while effects for whole school influences - class size, summer school, etc are low.
'One cannot compare standardised mean differences between sets of studies which tend to use restricted ranges of participants with researcher designed, tightly focussed measures and sets of studies which tend to use a wide range of participants and use standardised tests as measures' (p463).
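A minimal simulation (Python, with invented numbers) of the range-restriction point: the same raw gain produces a much larger effect size when the sample is drawn from a narrow slice of the population, simply because the standard deviation shrinks:

```python
import random
import statistics

random.seed(1)

# A hypothetical 'population' of baseline scores
population = [random.gauss(100, 15) for _ in range(10_000)]

# A restricted sample: only students within a narrow ability band
restricted = [score for score in population if 95 <= score <= 105]

raw_gain = 5.0  # assume the intervention adds 5 points for every student

d_full       = raw_gain / statistics.stdev(population)   # roughly 5 / 15  = 0.33
d_restricted = raw_gain / statistics.stdev(restricted)   # roughly 5 / 2.9 = 1.7

print(f"effect size in the full population:   {d_full:.2f}")
print(f"effect size in the restricted sample: {d_restricted:.2f}")
```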
Allerup (2015), in 'Hattie's use of effect size as the ranking of educational efforts', also identifies this problem: if one distribution has very little spread and, moreover, lies entirely within the outer boundaries of the second distribution, then an effect size is almost impossible to calculate (p6).

Hattie ignores these issues and uses meta-analyses from abnormal student populations, e.g., ADHD, hyperactive, emotionally/behaviourally disturbed and English Second Language students. He also uses subjects from NON-student populations, e.g., doctors, tradesmen, nurses, athletes, sports teams and military groups.

Professor John O'Neill's detailed letter to the NZ Education Minister outlines major issues with Hattie's research; one of the issues he emphasises is Hattie's use of students from abnormal populations.

Problem 6. Use of the same data in different meta-analyses:
Shannahan (2017, p751) provides a detailed example,
'What Hattie seems to have done is just take an average of the original effects reported in the various meta-analyses. That sometimes is all right, but it can create a lot of double counting and weighting problems that play havoc with the results. 
For example, Hattie combined two meta-analyses of studies on repeated reading. He indicated that these meta-analyses together included 36 studies. I took a close look myself, and it appears that there were only 35 studies, not 36, but more importantly, four of these studies were double counted. Thus, we have two analyses of 31 studies, not 36, and the effects reported for repeated reading are based on counting four of the studies twice each!
Students who received this intervention outperform those who didn't by 25 percentiles, a sizable difference in learning. However, because of the double counting, I can't be sure whether this is an over- or underestimate of the actual effects of repeated reading that were found in the studies. Of course, the more meta-analyses that are combined, and the more studies that are double and triple and quadruple counted, the bigger the problem becomes.'
Shannahan (2017, p752) provides another detailed example,
'this is (also) evident with Hattie's combination of six vocabulary meta-analyses, each reporting positive learning outcomes from explicit vocabulary teaching. I couldn't find all of the original papers, so I couldn't thoroughly analyze the problems. However, my comparison of only two of the vocabulary meta-analyses revealed 18 studies that weren't there. Hattie claimed that one of the meta-analyses synthesized 33 studies, but it only included 15, and four of those 15 studies were also included in Stahl and Fairbanks's (1986) meta-analysis, whittling these 33 studies down to only 11. One wonders how many more double counts there were in the rest of the vocabulary meta-analyses. 
This problem gets especially egregious when the meta-analyses themselves are counted twice! The National Reading Panel (National Institute of Child Health and Human Development, 2000) reviewed research on several topics, including phonics teaching and phonemic awareness training, finding that teaching phonics and phonemic awareness was beneficial to young readers and to older struggling readers who lacked these particular skills. Later, some of these National Reading Panel meta-analyses were republished, with minor updating, in refereed journals (e.g., Ehri et al., 2001; Ehri, Nunes, Stahl, & Willows, 2002). Hattie managed to count both the originals and the republications and lump them all together under the label Phonics Instruction—ignoring the important distinction between phonemic awareness (children's ability to hear and manipulate the sounds within words) and phonics (children's ability to use letter–sound relationships and spelling patterns to read words). That error both double counted 86 studies in the phonics section of Visible Learning and overestimated the amount of research on phonics instruction by more than 100 studies, because the phonemic awareness research is another kettle of fish. Those kinds of errors can only lead educators to believe that there is more evidence than there is and may result in misleading effect estimates.'
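A toy illustration (Python, invented effect sizes) of how double-counted studies distort a simple average of effects:

```python
# Invented effect sizes for illustration only
meta_analysis_1 = {"study_A": 0.9, "study_B": 0.8, "study_C": 0.7}
meta_analysis_2 = {"study_C": 0.7, "study_D": 0.2, "study_E": 0.1}

# Naive pooling: study_C is counted twice, pulling the average upwards
pooled = list(meta_analysis_1.values()) + list(meta_analysis_2.values())
naive_mean = sum(pooled) / len(pooled)

# De-duplicated pooling: each study counted once
unique = {**meta_analysis_1, **meta_analysis_2}
deduplicated_mean = sum(unique.values()) / len(unique)

print(f"with double counting:    {naive_mean:.2f}")         # 0.57
print(f"without double counting: {deduplicated_mean:.2f}")  # 0.54
```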
Wecker et al (2016, p30) also detail examples,
'In the case of papers summarising the results of several reviews on the same topic, the problem usually arises that a large part of the primary studies has been included in several of the reviews to be summarised (see Cooper and Koenka 2012, p. 450 ff.). In the few meta-meta-analyses available so far, complete first-stage meta-analyses have often been excluded because of overlaps in the primary studies involved (Lipsey and Wilson 1993, p. 1197; Peterson 2001, p. 454), sometimes at overlaps of as little as 25% (Wilson and Lipsey 2001, p. 416) or three or more shared primary studies (Sipe and Curlette 1997, p. 624).
Hattie, on the other hand, completely ignores the duplicates problem despite sometimes significantly greater overlaps.
For example, on the subject of web-based learning, 14 of the 15 primary studies from the meta-analysis by Olson and Wisher (2002, p. 11), whose mean effect size of 0.24 differs significantly from the results of the other two meta-analyses on the same topic (0.14 and 0.15), were already covered by one of the two other meta-analyses (Sitzmann et al., 2006, pp. 654 ff.)'
Kelley & Camilli (2007, p25) - Teacher Training. Many studies use the same data sets. To maintain the statistical independence of the data, only one set of data points from each data set should be included in the meta-analysis.

Hacke (2010, p83),
'Independence is the statistical assumption that groups, samples, or other studies in the meta-analyses are unaffected by each other.'
This is a major problem in Hattie's synthesis as many of the meta-analyses that Hattie averages use the same datasets - e.g., much of the same data is used in Teacher Training as is used in Teacher Subject Knowledge.


Problem 7. Inappropriate Averaging:
Hattie's averaging hides much of the complexity. For example, Professor Ivan Snook et al. on homework:
'There is also the difficulty which arises amalgamating a large number of disparate studies. When results of many studies are averaged, the complexity of education is ignored: variables such as age, ability, gender, and subject studied are set aside. An example of this problem can be seen in Hattie’s treatment of homework: does homework improve learning or not?

Overall, Hattie finds that the effect size of homework is 0.29. Thus a media commentator, reading a summary might justifiably report: “Hattie finds that homework does not make a difference.” When, however, we turn to the section on homework we find that, for example, the effect sizes for elementary (primary in our terms) and high schools students are 0.15 and 0.64 respectively.

Putting it crudely, the figures suggest that homework is very important for high school students but relatively unimportant for primary school students.

There were also significant differences in the effects of homework in mathematics (high effects) and science and social studies (both low effects). Results were high for low ability students and low for high ability students. The nature of the homework set was also influential. (pp 234-236). All these complexities are lost in an average effect size of 0.29' (p4).
Schulmeister & Loviscach (2014) Errors in John Hattie’s “Visible Learning”.
'The effect size given per influence is the mean value of a very broad distribution. For instance, in “Inductive Teaching” Hattie combines two meta-analyses with effect sizes of d = 0.06 and d = 0.59 to a mean effect size of d = 0.33 with a standard error of 0.035. This is like saying ”this six-sided dice does not produce numbers from 1 to 6; rather, it produces the number 3.5 in the mean, and we are pretty sure about the first decimal place of this mean value.”'
Dr. Jim Thornton, Professor of Obstetrics and Gynaecology at Nottingham University, said,
'To a medical researcher, it seems bonkers that Hattie combines all studies of the same intervention into a single effect size... In medicine it would be like combining trials of steroids to treat rheumatoid arthritis, effective, with trials of steroids to treat pneumonia, harmful, and concluding that steroids have no effect! I keep expecting someone to tell me I’ve misread Hattie.'
Another example from Nilholm (2013) It's time to critically review John Hattie on Inductive Teaching,


'Hattie reports two meta-analyses. One is from 2008 and includes 73 studies related to "inductive teaching"; it shows that the work method generally gives a relatively strong effect. According to a meta-analysis from 1983, which includes 24 studies of inductive teaching in the natural sciences, the work method gives a weak effect.
Hattie simply takes the mean of these two meta-analyses and thus "inductive teaching" can be dismissed. A more reasonable conclusion would be that "inductive teaching" in science subjects has weak support but that generally it seems to be a good way of working. Alternatively, it did not appear to work before, but later research gives a much more positive picture' (p2).
Nilholm (2013) details another example using "problem-based learning" see the detail here.

This problem is widespread in Hattie's work; other examples include class size, feedback and ability grouping. Also, many of the researchers Hattie cites warn about averaging:

Mabe and West (1982) 
'considerable information would be lost by averaging the often widely discrepant correlations within studies' (p291).
Wrigley (2018) also discusses inappropriate averaging by Hattie and the EEF, 
'... quite dissimilar studies are thrown together and an aggregate mean of effect sizes calculated. Although some tolerance is acceptable in meta-analysis, since no two research studies are exactly alike, serious problems can arise from aggregating and averaging studies using different definitions of an issue, and based on different curriculum areas, ages and attainment levels of students, types of school, education systems, and so on... 
Indeed, Gene Glass, who originated the idea of meta-analysis, issued this sharp warning about heterogeneity: "Our biggest challenge is to tame the wild variation in our findings not by decreeing this or that set of standard protocols but by describing and accounting for the variability in our findings. The result of a meta-analysis should never be an average; it should be a graph."(Robinson, 2004: 29, my italics)' (p367).
Wrigley (2018) then quotes Coe,
'One final caveat should be made here about the danger of combining incommensurable results. Given two (or more) numbers, one can always calculate an average. However, if they are effect sizes from experiments that differ significantly in terms of the outcome measures used, then the result may be totally meaningless. . .
In comparing (or combining) effect sizes, one should therefore consider carefully whether they relate to the same outcomes. . . One should also consider whether those outcome measures are derived from the same (or sufficiently similar) instruments and the same (or sufficiently similar) populations. . . It is also important to compare only like with like in terms of the treatments used to create the differences being measured. In the education literature, the same name is often given to interventions that are actually very different. It could also be that. . . the actual implementation differed, or that the same treatment may have had different levels of intensity in different studies. In any of these cases, it makes no sense to average out their effects. (Coe, 2002, my italics)' (p367).
Inconsistent use of means or medians:

Slavin (1990), 
'In pooling findings across studies, medians rather than means were used, principally to avoid giving too much weight to outliers' (p477).
Professor Maureen Hallinan (1990),
'The fact that the studies Slavin examines show no direct effect of ability grouping on student achievement is not surprising. The studies compare mean achievement scores of classes that are ability grouped to those that are not. Since means are averages, they reveal nothing about the distribution of scores in the two kinds of classes. Ability grouping may increase the spread of test scores while leaving the mean unchanged' (p501).
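A quick sketch (Python, invented values) of Slavin's point above: the median of a set of effect sizes is far less sensitive to a single extreme study than the mean:

```python
import statistics

# Invented effect sizes; the last study is an outlier
effects = [0.15, 0.20, 0.22, 0.25, 2.40]

print(f"mean:   {statistics.mean(effects):.2f}")    # 0.64 - dragged up by the outlier
print(f"median: {statistics.median(effects):.2f}")  # 0.22
```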
Problem 8. Equal Weightings:

Prof Gene Glass, the inventor of the meta-analysis, who Hattie quotes regularly, warned of this problem in his seminal paper, Integrating Findings: The Meta-Analysis of Research (1977).
'Precisely what weight to assign to each study in an aggregation is an extremely complex question, one that is not answered adequately by suggestions to pool the raw data (which are rarely available) or to give each study equal weight, regardless of sample size. If one is aggregating arithmetic means, a weighting of results from each study according to √N might make sense' (p358).
Scholars of the fixed-effect method recommend weighting (Pigott, 2010, p9), so that larger studies are weighted more heavily. If this were done, it would affect all of Hattie's reported effect sizes and his rankings would totally change.

The range of student numbers in the studies Hattie used is enormous. In the influence 'Comprehensive teaching reforms', Hattie cites Borman & D'Agostino (1996), which covers nearly 42 million students, while in the 'gender - attitudes' influence he cites Cooper, Burger & Good (1980) with 219 students. These have equal weight in Hattie's work.

Shannahan (2017, p752) gives more detailed examples,
'when meta-analyses of very different scopes are combined - what if one of the meta-analyses being averaged has many more studies than the others? Simply averaging the results of a meta-analysis based on 1,077 studies with a meta-analysis based on six studies would be very misleading. Hattie combined data from 17 meta-analyses of studies that looked at the effects of students’ prior knowledge or prior achievement levels on later learning. Two of these meta-analyses focused on more than a thousand studies each; others focused on fewer than 50 studies, and one as few as six. Hattie treated them all as equal. Again, potentially misleading.'
Pant (2014, p95) verifies Shannahan's analysis and provides another detailed example:
'Hattie (2009) aggregates the mean effect sizes of the original meta-analyses without weighting them by the number of studies included. Meta-analyses based on many hundreds of individual studies enter the d-barometer with the same weight as meta-analyses with only five primary studies. The consequences of this approach for the substantive conclusions can be briefly demonstrated with a numerical example from Hattie's (2009) data. The effect of the teaching method of direct instruction, determined from four meta-analyses, is, according to Hattie (2009, p205), d = 0.59 and thus falls into the "desired zone" (d > 0.4).
Direct instruction is a highly structured, teacher-centred form of teaching that is by no means undisputed. Looking at the included meta-analyses one by one, it is striking that by far the largest, with 232 primary studies (Borman et al., 2003), is the one with the smallest effect size (d = 0.21). If the three meta-analyses for which information on the standard error was presented were weighted according to their number of primary studies (Hill et al. 2007; Shadish and Haddock 2009), the resulting effect size would be d = 0.39 and thus no longer in the "desired" zone of action defined by Hattie.'
Wecker et al (2016, p31) give an example of using weighted averages:
'This would mean a descent from 26th place to 98th in his ranking.'
Professor Peter Blatchford in the AEU News (Vol 22 - 7, Dec 2016) also warns of this problem,
'unfortunately many reviews and meta-analyses have given them equal weighting' (p15).
Beng Huat See (2017) emphasises the issue of quality of evidence & averaging by Hattie, Marzano, and others, 
'there are studies which involved only one participant, some had no comparator groups and some involved children with specific learning difficulties or had huge attrition as large as 70%. These may form the majority of studies reporting huge positive effects. On the other hand, the few good quality studies may report small effects. 
Averaging effect sizes from across studies of different quality giving equal weights to all can lead to misleading conclusions' (p10). 
Proulx (2017) and Thibault (2017) also question Hattie's averaging.

Example - Visual Perception Programs

Authors             Year   Students   Effect size    Share of students   Weighted effect
Kavale              1980      4,400      0.70              0.01               0.01
Kavale              1981      4,400      0.77              0.01               0.01
Kavale              1982    325,000      0.81              0.83               0.67
Kavale              1984      4,400      0.09              0.01               0.00
Kavale              1984      4,400      0.18              0.01               0.00
Kavale & Forness    2000     50,000      0.76              0.13               0.10
Total                       392,600      0.55 (mean)                          0.79 (weighted)


Hattie's effect size is d = 0.55. But if we weight according to the number of students (with the assumption that studies reporting no student numbers are assigned the lowest number of students, 4,400), we get a weighted effect size of d = 0.79, shooting this influence up from #35 to #7.
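The weighted figure can be reproduced directly from the table (a sketch in Python; the 4,400-student counts are the assumption noted above for the studies that report no student numbers):

```python
# (students, effect size) pairs from the table above; 4,400 is the assumed
# count for the studies that report no student numbers
studies = [
    (4_400,   0.70),  # Kavale 1980
    (4_400,   0.77),  # Kavale 1981
    (325_000, 0.81),  # Kavale 1982
    (4_400,   0.09),  # Kavale 1984
    (4_400,   0.18),  # Kavale 1984
    (50_000,  0.76),  # Kavale & Forness 2000
]

total_students = sum(n for n, _ in studies)
unweighted = sum(d for _, d in studies) / len(studies)
weighted   = sum(n * d for n, d in studies) / total_students

print(f"unweighted mean (Hattie's figure): {unweighted:.2f}")  # 0.55
print(f"student-weighted mean:             {weighted:.2f}")    # 0.79
```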

Nielsen & Klitmøller (2017) in 'Blind spots in Visible Learning - Critical comments on "Hattie revolution"', also show this problem in their detailed analysis of Hattie's use of feedback studies- see feedback.


Problem 9. Different Definition of Variables:
Hattie's synthesis abounds with this problem. 

Yelle et al (2016), in What is visible from learning by problematization: a critical reading of John Hattie's work, explain the problem,
'In education, if a researcher distinguishes, for example, project-based teaching, co-operative work and teamwork, while other researchers do not distinguish or delimit them otherwise, comparing these results will be difficult. It will also be difficult to locate and rigorously filter the results that must be included (or not included) in the meta-analysis. Finally, it will be impossible to know what the averages would be. 
It is therefore necessary to define theoretically the main concepts under study and to ensure that precise and unambiguous criteria for inclusion and exclusion are established. The same thing happens when you try to understand how the author chose the studies on, e.g., problem-based learning. The term used is general, because it compiles a large number of research studies dealing with different school subjects. It should be noted that Hattie notes variances between the different school subjects, which calls for even greater circumspection in the evaluation of the indicators attributed to the different approaches.
This is why it is crucial to know from which criteria Hattie chose and classified the meta-analyses retained and how they were constituted. How do the authors of the 800 meta-analyses compiled in Hattie (2009) define, for example, the different approaches by problem? In other words, what are the labels that they attach to the concepts they mobilize?
As for the concepts of desirability and efficiency from which these approaches must be located, they themselves are marked by epistemological and ideological issues. What do they mean? According to what types of knowledge is a method desirable? In what way is it effective? What does it achieve?
Hattie's book does not contain information on these important factors, or when it does, it does so too broadly. This vagueness prevents readers from judging for themselves the stability of so-called important variables, their variance or the criteria and methods of their selection. The lack of clarity in the criteria used for the selection of studies is therefore a problem.'
Pant (2014, p85) is also critical of Hattie aggregating a wide variety of interventions under one label - 
'which calls into question the theoretical relevance of the analysis.'
A great example of this is in the studies on class size.

A comparison of the studies shows different definitions for small and normal classes, e.g. one study defines 23 as a small class but another study defines 23 as a normal class. So comparing the effect size is not comparing the same thing!

Schulmeister & Loviscach (2014) Errors in John Hattie’s “Visible Learning”.
'Even where he has grouped meta-analyses correctly by their independent variables such as instructional interventions, Hattie has in many cases mixed apples and oranges concerning the dependent variables. In some groupings, however, both the independent and the dependent variables do not match easily. For instance, in the group “feedback”, a meta-analysis using music to reinforce behavior is grouped with other studies using instructional interventions that are intended to elicit effects on cognitive processes.'
'Many of the meta-analyses do not really match the same effect group (i.e., the influence) in which Hattie refers to them. For instance, in the group “feedback”, studies investigating the effect of student feedback on teachers are mixed with studies that examine the effect of teacher feedback on students.'
Nielsen & Klitmøller (2017) in 'Blind spots in Visible Learning - Critical comments on "Hattie revolution"', discuss in detail the many problems of different definitions of feedback and large versus small class sizes - see feedback.


Problem 10. Quality of Studies:
'Extraordinary claims require extraordinary evidence.' Carl Sagan
Hattie's constant proclamation (VL 2012 summary, p3),
'it is the interpretations that are critical, rather than data itself.'
is worrying, as it is the opposite of the scientific method paradigm, as Professor Ivan Snook et al (2009, p2) explain:
'Hattie says that he is not concerned with the quality of the research... of course, quality is everything. Any meta-analysis that does not exclude poor or inadequate studies is misleading, and potentially damaging if it leads to ill-advised policy developments. He also needs to be sure that restricting his data base to meta-analyses did not lead to the omission of significant studies of the variables he is interested in.'
Professor John O'Neill writes a significant letter to the NZ Education Minister regarding the poor quality of Hattie's research, in particular the overuse of studies about university, graduate or pre-school students and the danger of making classroom policy decisions without consulting other forms of evidence, e.g., case and naturalistic studies.
'The method of the synthesis and, consequently, the rank ordering are highly problematic' (p7).
Beng Huat See (2017), emphasises the lack of quality in the evidence by Hattie, Marzano and others, 
'there are several problems with relying on such evidence taken from meta-analyses of meta-analyses for policy and practice. 
First, much of it is not particularly robust (small-scale, involving non-randomisation of participants, based on summaries of effects across a wide range of subjects and age groups). 
Second, no consideration was taken of the quality of research in the synthesis of existing evidence. For example, there are studies which involved only one participant, some had no comparator groups and some involved children with specific learning difficulties or had huge attrition as large as 70%. These may form the majority of studies reporting huge positive effects. On the other hand, the few good quality studies may report small effects. Averaging effect sizes from across studies of different quality giving equal weights to all can lead to misleading conclusions' (p10). 
Schulmeister & Loviscach (2014) Errors in John Hattie’s “Visible Learning”.
'Many of the meta-analyses used by Hattie are dubious in terms of methodology. Hattie obviously did not look into the individual empirical studies that form the bases of the meta-analyses, but used the latter in good faith.'
Nielsen & Klitmøller (2017) in 'Blind spots in Visible Learning - Critical comments on "Hattie revolution"', also discuss the problems of quality using examples from VL, p75 and 196 - see feedback.
'Hattie does not deal with the potential problems in his own investigation but instead refers to others who have to deal with problems in connection with meta-analyses generally. In other words, Hattie is not directly concerned about the quality of his own investigation. 
In some selected contexts, nevertheless, Hattie does throw out studies based on quality, but this is neither consistent nor systematic' (p10, translated from Danish).
Prof Georg Lind (2013) confirms this and also uses the example from VL, p196ff. He accuses Hattie of disregarding studies that do not suit him.

The Encyclopedia of Measurement and Statistics outlines the problem of quality: 
'many experts agree that a useful research synthesis should be based on findings from high-quality studies with methodological rigour. Relaxed inclusion standards for studies in a meta-analysis may lead to a problem that Hans J. Eysenck in 1978 labelled as garbage in, garbage out.'
Or, in modern terms, Dr. Gary Smith (2014, p25),
'garbage in, gospel out.' 
Many of the researchers that Hattie uses warn about the quality of studies, e.g., Slavin (1990, p477)
'any measure of central tendency in a meta-analysis... should be interpreted in light of the quality and consistency of the studies from which it was derived, not as a finding in its own right. 
"best evidence synthesis” of any education policy should encourage decision makers to favour results from studies with high internal and external validity—that is, randomised field trials involving large numbers of students, schools, and districts.' Slavin (1986)
Newman (2004, p200) re-emphasises the need for quality
'it could also be argued that the important thing is how the effect size is derived. If the effect size is derived from a high quality randomised experiment then a difference of any size could be considered important.'
Hacke (2010, p56), Higgins (2017) and Bergeron (2017) all state the research design can also be a major source of variance in studies.

The U.S. Department of Education has set up the National Center for Education Research, whose focus is to investigate the quality of educational research. Their results are published in the What Works Clearinghouse. They also publish a Teacher Practice Guide which differs markedly from Hattie's results - see Other Researchers.

Importantly, they focus on the QUALITY of the research and reserve their highest rating for studies that use randomised division of students into a control and an experimental group. Where students are non-randomly divided into a control and experimental group, for what they term a quasi-experiment, a moderate rating is used; however, the two groups must have some measure of equivalence before the intervention. A low rating is used for other research designs - e.g., correlation studies.

However, once again, Hattie ignores these issues and makes an astonishing caveat, there is, 
'no reason to throw out studies automatically because of lower quality' (p11).
Problem 11. Time over which each study ran:
Given that Hattie interprets an effect size of 0.40 as equivalent to one year of schooling, and given his polemic related to this figure:
'I would go further and claim that those students who do not achieve at least a 0.40 improvement in a year are going backwards...' (p250).
In terms of teacher performance, he takes this one step further by declaring teachers who don't attain an effect size of 0.40 are 'below average' (Hattie, 2010, p87).

This means, as Professor Dylan Wiliam points out, that studies need to be controlled for the time over which they run, otherwise legitimate comparisons cannot be made.

Professor Wiliam, who produced the seminal research 'Inside the Black Box', reflects on his own work and cautions (click here for the full quote):
'it is only within the last few years that I have become aware of just how many problems there are. Many published studies on feedback, for example, are conducted by psychology professors, on their own students, in experimental sessions that last a single day. The generalizability of such studies to school classrooms is highly questionable. 
In retrospect, therefore, it may well have been a mistake to use effect sizes in our booklet 'Inside the black box' to indicate the sorts of impact that formative assessment might have.

I do still think that effect sizes are useful... If the effect sizes are based on experiments of similar duration, on similar populations, using outcome measures that are similar in their sensitivity to the effects of teaching, then I think comparisons are reasonable. Otherwise, I think effect sizes are extremely difficult to interpret.'
Hattie (2015) finally admitted this was an issue:
'Yes, the time over which any intervention is conducted can matter (we find that calculations over less than 10-12 weeks can be unstable, the time is too short to engender change, and you end up doing too much assessment relative to teaching). These are critical moderators of the overall effect-sizes and any use of hinge=.4 should, of course, take these into account.'
Yet this has not affected his public pronouncements, nor the addition or removal of studies from his database. He has not made any adjustment to his section on feedback, even though Professor Wiliam states many of the studies are on university students over 1 DAY. Hattie does not appear to take TIME into account!

The section A YEAR'S PROGRESS? goes into more detail about this issue.

Problem 12. The assumption of Normality:

Allerup (2015), in 'Hattie's use of effect size as the ranking of educational efforts', shows that deviations from this assumption, in the form of skewed or Cauchy distributions (which have wider tails than normal distributions), give very different effect size measures, and therefore appropriate interpretation of the effect size becomes difficult (p10).

Allerup gives the example that international evaluations under the OECD (PISA) and the IEA (TIMSS) are not normally distributed (p7).
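A small simulation (Python, illustrative only) of Allerup's point: the usual interpretation of d assumes roughly normal score distributions, and under a heavy-tailed (Cauchy-like) distribution the same raw improvement produces a wildly different, and unstable, effect size:

```python
import random
import statistics

random.seed(42)

def effect_of_shift(shift, scores):
    """Effect size of adding a constant improvement to every score."""
    return shift / statistics.stdev(scores)

# Normally distributed scores versus heavy-tailed (Cauchy-like) scores,
# generated as the ratio of two standard normal draws
normal_scores = [random.gauss(0, 1) for _ in range(10_000)]
cauchy_scores = [random.gauss(0, 1) / random.gauss(0, 1) for _ in range(10_000)]

shift = 0.5  # the same raw improvement applied to both score scales

print(f"d with normal scores:       {effect_of_shift(shift, normal_scores):.2f}")
print(f"d with heavy-tailed scores: {effect_of_shift(shift, cauchy_scores):.4f}")
# The second figure is tiny and changes drastically from seed to seed,
# because a few extreme scores dominate the standard deviation.
```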


This all leads to significant criticism of VL:

Emeritus Professor Ivan Snook et al
'Any meta-analysis that does not exclude poor or inadequate studies is misleading and potentially damaging' (p2).
Professor Ewald Terhart:
'It is striking that Hattie does not supply the reader with exact information on the issue of the quality standards he uses when he has to decide whether a certain research study meta-analysis is integrated into his meta-meta-analysis or not. Usually, the authors of meta-analyses devote much energy and effort to discussing this problem because the value or persuasiveness of the results obtained are dependent on the strictness of the eligibility criteria' (p429).
'...I have demonstrated that there are problems with the dependent variable, the learning yield, ie the effect of the intervention. It is weakly understood and there is an unpredictable contradiction between the theory of learning theory and the theory of education theory' (p15, translated from Danish).
Kelvin Smythe:  
'I keep stressing the research design and lack of control of variables as central to the problem of Hattie’s research.'
David Didau gives an excellent overview of Hattie's effect sizes, cleverly using the classic clip from the movie Spinal Tap, where Nigel tries to explain why his guitar amp goes up to 11.

Dr. Neil Hooley, in his review of Hattie - talks about the complexity of classrooms and the difficulty of controlling variables, 
'Under these circumstances, the measure of effect size is highly dubious' (p44).
Neil Brown
'My criticisms in the rest of the review relate to inappropriate averaging and comparison of effect sizes across quite different studies and interventions.'
The USA Government-funded study on Educational Effect Size Benchmarks -
'The usefulness of these empirical benchmarks depends on the degree to which they are drawn from high-quality studies and the degree to which they summarise effect sizes with regard to similar types of interventions, target populations, and outcome measures.'
and also defined the criteria for accepting a research study, i.e., the quality needed (p33):
Search for published and unpublished research dated 1995 or later. 
Specialised groups such as special education students, etc. were not included. 
Studies were restricted to those using random assignment designs (that is, method 1) with practice-as-usual control groups and attrition rates no higher than 20%.
NOTE: using these criteria virtually NONE of the 800+ meta-analyses in VL would pass the quality test!

The U.S. Department of Education standards:

The intervention must be systematically manipulated by the researcher, not passively observed.

The dependent variable must be measured repeatedly over a series of assessment points and demonstrate high reliability.


Method 1 (random allocation) is the gold standard

Method 2 is accepted but with a number of caveats. They use the phrase quasi-experimental design, which compares outcomes for students, classrooms, or schools who had access to the intervention with those who did not but were similar in observable characteristics. In this design, the study MUST demonstrate baseline equivalence.

In other words, the students can be broken into a control and experimental group (without randomization), but the two groups must display equivalence at the beginning of the study. They go into great detail about this here.


However, the rating of these types of studies is 'Meets WWC Group Design Standards with Reservations.'

So at BEST most of the studies used by Hattie would be classified by The U.S. Department of Education as 'Meets WWC Group Design Standards with Reservations.'



But Hattie uses Millions of students!

The large number of students used in the synthesis seems to excuse Hattie from the usual validity and reliability requirements. For example, Kuncel (2005) has over 56,000 students and reports the highest effect size of d = 3.10, but it does not measure what Hattie says - a self-reported grade predicting future achievement - but rather student honesty with regard to their GPA a year ago. So this meta-analysis is not a valid or reliable study of the influence of self-report grades, and the 56,000 students are totally irrelevant.

Note, many of the controversial influences have only 1 or 2 meta-analyses as evidence.

Professor Pierre-Jérôme Bergeron - on Hattie's huge numbers: 
'We cannot allow ourselves to simply be impressed by the quantity of numbers and the sample sizes; we must be concerned with the quality of the study plan and the validity of collected data.'
Nepper Larsen (2014) Know thy impact – blind spots in John Hattie’s evidence credo.
'the megalomaniac additive annexation of all sorts of meta-analyses is not concerned with methodologically critical self-reflections, nor with validity claims, i.e., it does not specify the limits to what can be said and made commensurable. The risk is that knowledge in the collected empirical data piles disappears when it is formalised in a second-, third-, and-fourth-order perspective' (p6).
Professor Svein Sjøberg (2012) also argues that Hattie uses the overwhelming number of meta-analyses as a rhetorical strategy to heighten their public impact, rather than as genuine support for a hypothesis.

Prof Terry Wrigley (2015), in Bullying by Numbers, gives a detailed analysis of this problem.

David Weston gives a good summary of issues with Effect Sizes:
2min - contradictory results of studies are lost by averaging
4min 30sec - Reports of studies are too simplified and detail lost
5min - What does effect size mean?
6min 15 sec - Hattie's use of effect size
7min - Issues with effect size
8min 40sec - problems with spread of scores (standard deviation)
9min 30sec - need to check details of Hattie's studies
10min 30sec - problem with Hattie's hinge point d=0.40 (see A Year's Progress)
16min 50secs - Prof Dylan Wiliam's seminal work - 'Inside the Black Box', is an example of research that has been oversimplified by Educationalists - e.g., 'writing objectives on the board' but other more important findings have been lost.
18min - Context is king

David Weston uses a great analogy of a chef with teaching (5min onwards).

John Oliver gives a funny overview of the problems with Scientific Studies:

Another overview of the issues with published studies -


A short video on the issues with Social Science Research

