The 23 meta-analyses Hattie cites are:
"A common criticism is that it combines 'apples with oranges' and such combining of many seemingly disparate studies is fraught with difficulties. It is the case, however, that in the study of fruit nothing else is sensible" (VL, p. 10).
"appears to be a chimera; a monster stitched together from quite disparate things."
"If learning strategies included in the meta-analysis are not consistent and logical, then beware! For example, if you find a meta-analysis that groups together “feedback” strategies including teacher praise, computer instruction, oral negative feedback, timing of feedback, and music as reinforcement, does that sound consistent to you?" (p. 6)
Dylan Wiliam had a humorous response to this issue,
"This underscores the importance of adequate theorization, identified by Kelley (1927). The "jingle fallacy" is assuming that two things with the same name are in fact the same, while the "jangle fallacy" is assuming that things with different names, are, in fact different."
"nearly all of it is based on what happens in regular classrooms by regular teachers... 99.+% is based on classrooms run by ordinary teachers, not like in Psychology where they use under-graduate students, they bring in outsiders and this kinda stuff." (2017, ResearchEd)
"Only in a very broad sense has this study something to do with feedback; it is behavioristic reinforcement" (p. 9).
"A meta-analysis by Rummel and Feinberg (1988), with a reported effect size of +0.60, is perhaps the most humorous inclusion in the Hattie & Timperley (2007) meta-meta-analysis. It consists entirely of brief lab studies of the degree to which being paid or otherwise reinforced for engaging in an activity that was already intrinsically motivating would reduce subjects’ later participation in that activity. Rummel & Feinberg (1988) reported a positive effect size if subjects later did less of the activity they were paid to do. The reviewers decided to code studies positively if their findings corresponded to the theory (i.e., that feedback and reinforcement reduce later participation in previously favored activities), but in fact their “positive” effect size of +0.60 indicates a negative effect of feedback on performance.
I could go on (and on), but I think you get the point. Hattie’s meta-meta-analyses grab big numbers from meta-analyses of all kinds with little regard to the meaning or quality of the original studies, or of the meta-analyses."Janson (2018) also confirms this,
"the positive effect size found indicates a negative effect of that feedback on the performance of that activity."Some studies that Rummel and Feinberg (1988) used are shown below and Slavin is correct,
The regulatory body in Hattie's jurisdiction and mine, the Victorian Institute of Teaching (2018), publishes dismissal proceedings for teachers it deems unfit to teach.
"The evidence of possible serious misconduct or lack of fitness to teach ... was:
Inappropriate gift buying, described by students as bribery, such as sweets and other items."
"that although reinforcement strategies are generally highly effective, no single strategy is uniformly effective across all subjects or behaviors." (p. 474)
"All the meta-analyses on the relation of the quality of teaching to learning come from student ratings of teachers by college and university students. It appears that student rating of the quality of teachers and teaching is related to learning outcomes, although the feedback that is provided to teachers rarely leads to improvements in their teaching or the effectiveness of the courses." (VL, p. 115)
"I'm not even sure there is a concept such as formative or summative assessment." (@ 30mins)
Yet, in Visible Learning he published "providing formative evaluation" with one of the largest effect sizes, ES = 0.90 (VL, p. 162).
It is ironic that one of the studies that Hattie used, Rummel & Feinberg (1988), warned of the major problem of combining disparate studies as Hattie has done,
"It is argued that by including studies that claim to examine this theory but are in reality, not adequately operationalizing the theoretical propositions, this only serves to obscure the true nature of this area" (p. 160).Prof Terry Wrigley (2015) in Bullying by Numbers, gave an English humorist critique of Hattie's method,
"Its method is based on stirring together hundreds of meta-analyses reporting on many thousands of pieces of research to measure the effectiveness of interventions.
This is like claiming that a hammer is the best way to crack a nut, but without distinguishing between coconuts and peanuts, or saying whether the experiment used a sledgehammer or the inflatable plastic one that you won at the fair" (p. 5).
"Any literature review involves making balanced judgements about diverse studies. A major reason for the development of meta- analysis was to find a more systematic way to join studies, in a similar way that apples and oranges can make fruit salad. Meta-analysis can be considered to ask about “fruit” and then assess the implications of combining apples and oranges, and the appropriate weighting of this combination." (p. 3)
Hattie & Hamilton (2020) continue,
"Unlike traditional reviews, meta-analyses provide systematic methods to evaluate the quality of combinations, allow for evaluation of various moderators, and provide excellent data for others to replicate or recombine the results. The key in all cases is the quality of the interpretation of the combined analyses. Further, as noted above, the individual studies can be evaluated for methodological quality." (p. 4)
"...the significant heterogeneity in the data shows that feedback cannot be understood as a single consistent form of treatment." (p. 1)
While not directly acknowledging the wide variety of issues in VL, Hattie and his co-authors completely excluded 8 of the original 23 meta-analyses and partially excluded a further 11 meta-analyses, resulting in a substantially reduced ES of 0.48 - details in Feedback Revisited.
This is an amazing turnaround given Hattie's consistent claims that all of the studies he used were about regular students in regular classrooms, e.g., Hattie in his 2017 Melbourne ResearchEd address,
"nearly all of it is based on what happens in regular classrooms by regular teachers... 99.+% is based on classrooms run by ordinary teachers, not like in Psychology where they use under-graduate students, they bring in outsiders and this kinda stuff." (@9mins)
"only 131 studies, or 4%, were considered appropriate for reaching some type of valid conclusion based on their selection criteria" (p. 218).They detail that studies differed in many respects (p. 217):
"Clearly, the range of feedback definitions is wide... how is it possible to argue for feedback effects without considering the nuances and differences among the studies?" (p. 217).
Proulx (2017), in "Critical Essay on the Work of John Hattie for the Teaching of Mathematics", observes that Hattie's definition of feedback is not consistent with the collection of feedback studies he cites and therefore has nothing to do with his aim of "what works best".
Nielsen & Klitmøller (2017) in "Blind spots in Visible Learning - Critical comments on the 'Hattie revolution'", discuss in detail the many problems of Hattie's synthesis of feedback studies. They start with Hattie's definition of feedback,
"... feedback is information provided by an agent (e.g., teacher, peer, book, parent, or one’s own experience) about aspects of one’s performance or understanding. For example, a teacher or parent can provide corrective information, a peer can provide an alternative strategy, a book can provide information to clarify ideas, a parent can provide encouragement, and a learner can look up the answer to evaluate the correctness of a response. Feedback is a 'consequence' of performance" (VL, p. 174).
"In summary, feedback is what happens second, is one of the most powerful influences on learning, occurs too rarely..." (VL, p. 178).
"it is our assessment that in four of the five 'heaviest' surveys that mentioned in connection with Hattie's cover of feedback, it is conceptually unclear whether they are operates with a feedback term that is identical with Hattie's" (p. 11, translated from Danish).Furthermore, they state (p. 10),
"The breadth of the phenomenon of feedback varies clearly in the meta-analyses used."
"... we will come closer to look at five of the meta-analyses that Hattie builds his calculation on... Hattie's feedback area consists of 23 meta-analyses including 67,931 people, 5 of them are a special heavy because they include 62,761 people corresponding to 92% of the total sample" (p. 11).
They define their criteria for examination of the studies (p. 11):
"...our analysis shows that the transparency is low in two out of five studies, also only one of the five studies is consistently working with a control group design.
...the study by Kluger and DeNisi (1996), that Hattie (VL, p175) denotes 'the most systematic study addressing the effects of various types of feedback' has an effect size of d = 0.38 - i.e., a much lower impact assessment than the 0.73 ...other than that 38 percent of the surveys that are included in Kluger and DeNisi's study, a negative effect on the learning process - which is moreover contrary to Hattie's assumption that 'almost everything works'.
Kluger and DeNisi (1996) therefore denote feedback as a two-fold sword that both can lead to the student either learning significantly more or significantly less" (p. 11).
"If one would have to choose between Hattie's average measure of the effect size of a certain influencing factor or the measure that a high-quality meta-analysis presents, we recommend the later. For example, Hattie takes up a systematic, high-quality meta-study of Kluger & DeNisi (1996) which deals with feedback and where the effect size is = 0.38 (p. 175).
Maybe you can have more confidence in that study and its value than the average Hattie produces (= 0.73)." (p. 27).
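How can an average of d = 0.38 sit alongside 38 percent of results being negative? A minimal sketch, assuming (purely for illustration) that the individual study effects are roughly normally distributed; the spread value is invented to match the reported proportions:

```python
# Minimal sketch: a positive AVERAGE effect is consistent with a large
# minority of NEGATIVE results if the individual effects vary widely.
# The normality assumption and sd = 1.2 are illustrative, not from
# Kluger & DeNisi; only the mean of 0.38 is their reported figure.

from statistics import NormalDist

effects = NormalDist(mu=0.38, sigma=1.2)
print(f"share of effects below zero: {effects.cdf(0.0):.0%}")  # ~38%
```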
Professor Dylan Wiliam confirms 32% of studies show feedback has a negative effect (see video below).
Prof Richard E. Clark goes through what we can use from the Kluger & DeNisi study here @ 49 minutes.
Busch & Watson (2019). The Science of Learning: 77 Studies That Every Teacher Needs to Know. Also detail that, the negative result of feedback, in this study, is one of the most important findings in the research (go to 1hr, 35 mins).
So there is a strong counter to Hattie's disparaging claim that,
"When teachers claim that they are having a positive effect on achievement or when a policy improves achievement, this is almost always a trivial claim: Virtually everything works.
One only needs a pulse and we can improve achievement" (VL, p. 16).
Prof Wiliam details more problems with the feedback research - most of the studies are on university students and 85% of the feedback is ONE event lasting minutes! He then goes further and says that if you compare these different studies on feedback (as Hattie does),
"Your results are meaningless."
"First, Kluger and DeNisi focused on the way feedback affects behaviour – not how it affects learning. Later authors extended Kluger and DeNisi’s conclusions to argue that feedback has powerful effects on learning – but this isn’t fully justified by the original research.Second, Kluger and DeNisi included a range of studies – including those testing the effect of feedback on workers’ use of ear protection, hockey players’ body checks, and people’s extra-sensory perception (apparently feedback helps). Only nineteen of the 131 studies included were in schools and most focused on changing classroom behaviour – not learning."
"Specifically, on Feedback, the Toolkit provides some more specific references to back up its very general claims, but many of these are over 20 years old and currently unobtainable. Seven more detailed references are given, each with an ‘effect size’, but these range from .97 to .20. Which is to be believed? Summaries follow, in highly technical language, mostly without indicating which stage or subject, what kind of learning, what kind of feedback, which countries the research took place in, and so on. Some of the sources are very critical of particular types of feedback...
Meta-analyses are used in Medicine to enable researchers to complement the reading of other research, though not to substitute for it; for example, if experiments have been based on small samples, averaging the results can suggest a general trend.
But the medical literature contains serious warnings against the misuse of meta-analysis. Statisticians are warned not to mix together different treatments, types of patient or outcome measures – the ‘apples and pears’ problem. If the original results differ strongly, they are advised to highlight the difference, not provide a misleading average. This is exactly what has not happened in the Toolkit, which should never have provided an average score for “Feedback” since the word has so many meanings" (p. 6).
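Wrigley's point about misleading averages is easy to demonstrate. In the sketch below, the seven effect sizes are invented, chosen only to span the .20-to-.97 range he reports for the Toolkit's Feedback references:

```python
# Minimal sketch of the 'misleading average' problem. The seven values
# are invented, spanning the .20-to-.97 range Wrigley cites.

effects = [0.97, 0.81, 0.62, 0.48, 0.38, 0.28, 0.20]

mean = sum(effects) / len(effects)
print(f"average d = {mean:.2f}")                        # one tidy number...
print(f"range     = {min(effects)} to {max(effects)}")  # ...hiding a spread
# larger than the average itself. Reporting the single mean answers
# 'which is to be believed?' by, in effect, believing none of them.
```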
The Problem With Most Feedback Studies
Ruiz-Primo & Li (2013) detail the quality issues which we consistently see in Hattie's synthesis,
"A high percentage of papers investigating the impact of feedback did so without using a control group...
Confounded effects, rarely mentioned in the synthesis and meta-analyses, pose another threat to validity when interpreting results of feedback studies" (p. 218).
"most of the studies do not provide information about the reliability and validity of the instruments used to measure the effects of feedback on the selected outcomes. The validity of feedback studies is threatened by a failure to attend to the technical characteristics of the instruments used to measure learning outcomes... Given these measures with ambiguity in technical soundness, can we fully trust results reported in synthesis and meta-analysis studies?
... there is an issue of ecological validity. For a research study to possess ecological validity and its results to be generalizable, the methods, materials, and setting of the study must sufficiently approximate the real-life situation that is under investigation. Most of the studies reported are laboratory-based or are conducted in classrooms but under artificial conditions (e.g., students were asked to identify unfamiliar uses of familiar objects)...
Furthermore, a high percentage of the studies focus on written feedback, and only a few on oral or other types of feedback, although oral feedback is more frequently observed in teachers’ daily assessment practices (see Hargreaves et al., 2000).
...we argue that formative feedback, when studied in the classroom context, is far more complex than it tends to appear in most studies, syntheses, or meta-analyses. Feedback practice is more than simply giving students feedback orally or in written form with externally or self-generated information and descriptive comments. We argue that feedback that is not used by students to move their learning forward is not formative feedback. We thus suggest that feedback needs to be examined more closely in the classroom setting, which should ultimately contribute to an expanded and more accurate and precise definition" (p. 219).
"Research has made clear that students hardly read teachers’ written feedback or know how to interpret it (Cowie, 2005a, 2005b)" (p. 225).
"most of the publications on formative assessment and feedback include examples of strategies and techniques that teachers can use. Most of them, however, do not provide empirical evidence of the impact of these strategies on student learning; nor do they link them to contextual issues that may affect the effectiveness of the strategies...
there is a lack of studies conducted in real classrooms—the natural setting—where it would be important to see evidence that feedback strategies have substantive impact. Moreover, few studies have focused on feedback over extended periods or on factors that can moderate or mediate the effectiveness of feedback. Therefore, we cannot generalize what we know from the literature to classroom practices...
Rather than persisting with our common belief that feedback is something doable for teachers, we should strive to study formative assessment practices in the classroom, including feedback, to help teachers and students to do better. Given these unanswered questions, we need different and more trustworthy strategies of inquiry to acquire firsthand knowledge about feedback in the classroom context and to systematically study its effects on student learning" (p. 226).