Feedback

Effect Size d = 0.73 (Hattie's Rank = 10).
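For readers unfamiliar with the metric, d is a standardized mean difference (Cohen's d). A minimal statement of the textbook two-group formula (note: Hattie also converts correlational and pre/post designs into d, so this is only the simplest case):

$$d = \frac{\bar{X}_{\text{treatment}} - \bar{X}_{\text{control}}}{s_{\text{pooled}}}, \qquad s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}$$

So d = 0.73 would mean the average student receiving feedback scores 0.73 pooled standard deviations higher than the average control student.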

A short video summary of the arguments on this page - here.


The studies Hattie cites:


These studies are an example of a major criticism of meta-analyses. Hattie acknowledges this and his defense is,
'A common criticism is that it combines "apples with oranges" and such combining of many seemingly disparate studies is fraught with difficulties. It is the case, however, that in the study of fruit nothing else is sensible' (VL, p. 10).
Greg Ashman details similar problems with the EEF's top strategy, meta-cognition, which Ashman says,
"appears to be a chimera; a monster stitched together from quite disparate things."
Rummel & Feinberg (1988),
"It is argued that by including studies that claim to examine this theory but are in reality, not adequately operationalizing the theoretical propositions, this only serves to obscure the true nature of this area" (p. 160).
Prof Terry Wrigley (2015), in Bullying by Numbers, gives a humorous English critique of Hattie's method,
"Its method is based on stirring together hundreds of meta-analyses reporting on many thousands of pieces of research to measure the effectiveness of interventions.  
This is like claiming that a hammer is the best way to crack a nut, but without distinguishing between coconuts and peanuts, or saying whether the experiment used a sledgehammer or the inflatable plastic one that you won at the fair" (p. 5).
Ruiz-Primo & Li (2013), in Examining formative feedback in the classroom context: New research perspectives, reviewed over 9,000 studies on feedback (most of the studies Hattie used) and decided only 238 were of high enough quality to use (p. 217). But they also warn that,
"only 131 studies, or 4%, were considered appropriate for reaching some type of valid conclusion based on their selection criteria" (p. 218).
They detail that studies differed in many respects (p. 217):



They conclude,
"Clearly, the range of feedback definitions is wide... how is it possible to argue for feedback effects without considering the nuances and differences among the studies?" (p. 217).
Many other scholars also show that Hattie simply combines all these different studies without regard for their differences, e.g.,

Nielsen & Klitmøller (2017), in Blind spots in Visible Learning - Critical comments on the "Hattie revolution", discuss in detail the many problems with Hattie's synthesis of feedback studies. They start with Hattie's definition of feedback,
"... feedback is information provided by an agent (e.g., teacher, peer, book, parent, or one’s own experience) about aspects of one’s performance or understanding. For example, a teacher or parent can provide corrective information, a peer can provide an alternative strategy, a book can provide information to clarify ideas, a parent can provide encouragement, and a learner can look up the answer to evaluate the correctness of a response. Feedback is a 'consequence' of performance" (VL, p. 174).
"In summary, feedback is what happens second, is one of the most powerful influences on learning, occurs too rarely..." (VL, p. 178). 
They then detail a significant problem with Hattie's work in general, and with the influence of feedback in particular: the different definitions of variables,
"it is our assessment that in four of the five 'heaviest' surveys mentioned in connection with Hattie's coverage of feedback, it is conceptually unclear whether they operate with a concept of feedback identical to Hattie's" (p. 11, translated from Danish).
Furthermore, they state (p. 10),
"The breadth of the phenomenon of feedback varies clearly in the meta-analyses used."
They then go into more detail,
"... we will come closer to look at five of the meta-analyses that Hattie builds his calculation on... Hattie's feedback area consists of 23 meta-analyses including 67,931 people, 5 of them are a special heavy because they include 62,761 people corresponding to 92% of the total sample" (p. 11).
This also shows the major issue of how to weight studies: different weightings yield totally different effect sizes (see Effect Size).
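To make the weighting issue concrete, here is a minimal sketch. The (d, n) pairs are invented for illustration, not the actual values from the five meta-analyses; they simply mimic the situation Nielsen & Klitmøller describe - several small meta-analyses with large effects and one very "heavy" one with a small effect:

```python
# Illustrative only: invented (effect size d, sample size n) pairs.
studies = [
    (0.95, 1200),
    (0.90, 1500),
    (0.85, 1300),
    (0.80, 1200),
    (0.38, 62000),  # a dominant study, akin to Kluger & DeNisi's d = 0.38
]

# Simple mean treats every meta-analysis equally, however small.
simple_mean = sum(d for d, n in studies) / len(studies)

# Sample-size weighting lets the "heavy" study dominate the result.
weighted_mean = sum(d * n for d, n in studies) / sum(n for d, n in studies)

print(f"unweighted mean d: {simple_mean:.2f}")    # ~0.78
print(f"n-weighted mean d: {weighted_mean:.2f}")  # ~0.42
```

Same studies, two defensible averaging rules, and the headline effect size nearly halves.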

They define their criteria for examination of the studies (p. 11):

1. Are the surveys valid - do they measure what Hattie says they measure, i.e., feedback?

2. Are the meta-analyses transparent, so it is possible to examine the quality of the individual studies in the meta-analyses?

3. Do the studies use randomized control groups? Studies which use control groups are of higher quality.


| Author | Validity | Transparency | Control groups? |
| --- | --- | --- | --- |
| Lysakowski & Walberg, 1980 | Focus on reinforcement techniques, which is not clearly defined. Unclear relation to feedback. Low validity. | Low | No |
| Lysakowski & Walberg, 1982 | Focus on corrective feedback. Unclear relation to feedback in educational situations; feedback in connection with test situations but not with instruction and teaching. Low validity. | High | Yes |
| Kluger & DeNisi, 1996 | Feedback intervention consistent with Hattie's definition. Focus on feedback in educational situations. High validity. | High | Yes |
| Witt, Wheeless & Allen, 2006 | Feedback for the teacher, not students. Does the student benefit from the teacher receiving feedback? Low validity. | Low | No |
| Swanson & Lussier, 2001 | Focus on dynamic assessment. Unclear relation to feedback in educational situations. Low validity. | High | Not consistent |

They conclude,
"...our analysis shows that the transparency is low in two out of five studies, also only one of the five studies is consistently working with a control group design.  
... the study by Kluger and DeNisi (1996), that Hattie (VL, p175) denotes 'the most systematic study addressing the effects of various types of feedback' has an effect size of d = 0.38 - i.e., a much lower impact assessment than the 0.73 ... other than that 32 percent of the surveys that are included in Kluger and DeNisi's study, a negative effect on the learning process - which is moreover contrary to Hattie's assumption that 'almost everything works'. 
Kluger and DeNisi (1996) therefore denote feedback as a two-fold sword that both can lead to the student either learning significantly more or significantly less" (p. 11).
One of their pertinent observations is that many of the studies produce negative effects. They quote Shute (2008): "Within this large body of feedback research, there are many conflicting findings and no consistent pattern of results."

David Didau briefly summarises the Kluger and DeNisi study here.

Professor Dylan Wiliam confirms 40% of studies show feedback has a negative effect.
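A hedged sketch of how these two numbers can coexist: a positive average effect does not mean feedback reliably works. Assuming, purely for illustration, that individual study effects are roughly normally distributed around Kluger and DeNisi's pooled d = 0.38 with a standard deviation of 1.0 (the sd is an assumption, not taken from the papers), the expected share of negative studies lands close to the figure Wiliam cites:

```python
import math

# Illustrative assumption: study-level effects ~ Normal(mean=0.38, sd=1.0).
mean_d, sd_d = 0.38, 1.0

# P(d < 0) via the normal CDF, written with math.erf to avoid dependencies.
p_negative = 0.5 * (1.0 + math.erf((0.0 - mean_d) / (sd_d * math.sqrt(2.0))))
print(f"expected share of negative studies: {p_negative:.0%}")  # ~35%
```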

So there is a strong counter to Hattie's dismissive claim that,
“When teachers claim that they are having a positive effect on achievement or when a policy improves achievement, this is almost always a trivial claim: Virtually everything works.  
One only needs a pulse and we can improve achievement” (VL, p. 16).
Prof Wiliam details more problems with the feedback research - most of the studies are of university students, and in 85% of them the feedback is ONE event lasting minutes! He then goes further and says that if you compare these different studies on feedback (as Hattie does),
"Your results are meaningless."



Professor Robert Slavin in his blog John Hattie is Wrong, gives a pertinent example of Hattie's use of feedback studies,
"A meta-analysis by Rummel and Feinberg (1988), with a reported effect size of +0.60, is perhaps the most humorous inclusion in the Hattie & Timperley (2007) meta-meta-analysis. It consists entirely of brief lab studies of the degree to which being paid or otherwise reinforced for engaging in an activity that was already intrinsically motivating would reduce subjects’ later participation in that activity. Rummel & Feinberg (1988) reported a positive effect size if subjects later did less of the activity they were paid to do. The reviewers decided to code studies positively if their findings corresponded to the theory (i.e., that feedback and reinforcement reduce later participation in previously favored activities), but in fact their “positive” effect size of +0.60 indicates a negative effect of feedback on performance. 
I could go on (and on), but I think you get the point. Hattie’s meta-meta-analyses grab big numbers from meta-analyses of all kinds with little regard to the meaning or quality of the original studies, or of the meta-analyses."
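Slavin's point about directional coding can be made concrete with a small sketch (the numbers are invented, not Rummel & Feinberg's data): if a reviewer codes a study as "positive" whenever the result matches the theory that rewards reduce later engagement, the pooled "effect" comes out positive even though the underlying change in behaviour is negative:

```python
# Invented study-level results: change in later engagement after being
# rewarded for an already-enjoyable activity (negative = did less of it).
raw_effects = [-0.7, -0.6, -0.5]

# Theory-confirming coding: a REDUCTION counts as a positive result.
theory_coded = [abs(d) for d in raw_effects]

print(sum(theory_coded) / len(theory_coded))  # +0.60 "support for the theory"
print(sum(raw_effects) / len(raw_effects))    # -0.60 actual effect on behaviour
```

Averaged into a meta-meta-analysis of "feedback", that +0.60 reads as feedback helping, when the studies actually found the opposite.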
I checked Rummel and Feinberg (1988) and Prof Slavin is correct; here's an example of some of the studies they used,



The regulatory body in my (and Hattie's) jurisdiction, the Victorian Institute of Teaching, publishes dismissal proceedings for teachers it deems unfit to teach.
"The evidence of possible serious misconduct or lack of fitness to teach ... was: 
 Inappropriate gift buying, described by students as bribery, such as sweets and other items."
Schulmeister & Loviscach (2014), in Critical comments on the study "Making learning visible" (Visible Learning), also detail many problems with Hattie's analysis of feedback, in particular the problem of averaging many different studies, on different target groups (teachers and students), using very different feedback mechanisms. For example, the Standley (1996) study is about the impact of music on behavioral interventions. They conclude,
"Only in a very broad sense has this study something to do with feedback; it is behavioristic reinforcement" (p. 9).
I checked Standley (1996), and the results were interesting to me, as I managed a band that played in pubs in Melbourne in the 1990s, when the debate was about live music versus DJs or recorded music. But Standley (1996, p. 108) answered that question for us,
"Live music (ES = 1.13) was more effective than recorded music (ES = 0.86)."

Prof Terry Wrigley (2015), in Bullying by Numbers, critiques the EEF in particular but also Hattie,
"Specifically, on Feedback, the Toolkit provides some more specific references to back up its very general claims, but many of these are over 20 years old and currently unobtainable. Seven more detailed references are given, each with an ‘effect size’, but these range from .97 to .20. Which is to be believed? Summaries follow, in highly technical language, mostly without indicating which stage or subject, what kind of learning, what kind of feedback, which countries the research took place in, and so on. Some of the sources are very critical of particular types of feedback... 
Meta-analyses are used in Medicine to enable researchers to complement the reading of other research, though not to substitute for it; for example, if experiments have been based on small samples, averaging the results can suggest a general trend. 
But the medical literature contains serious warnings against the misuse of meta-analysis. Statisticians are warned not to mix together different treatments, types of patient or outcome measures – the ‘apples and pears’ problem. If the original results differ strongly, they are advised to highlight the difference, not provide a misleading average. This is exactly what has not happened in the Toolkit, which should never have provided an average score for “Feedback” since the word has so many meanings" (p. 6).
Ruiz-Primo & Li (2013) detail the quality issues which we consistently see in Hattie's synthesis,
"A high percentage of papers investigating the impact of feedback did so without using a control group... 
Confounded effects, rarely mentioned in the synthesis and meta-analyses, pose another threat to validity when interpreting results of feedback studies" (p. 218).
"most of the studies do not provide information about the reliability and validity of the instruments used to measure the effects of feedback on the selected outcomes. The validity of feedback studies is threatened by a failure to attend to the technical characteristics of the instruments used to measure learning outcomes... Given these measures with ambiguity in technical soundness, can we fully trust results reported in synthesis and meta-analysis studies?
... there is an issue of ecological validity. For a research study to possess ecological validity and its results to be generalizable, the methods, materials, and setting of the study must sufficiently approximate the real-life situation that is under investigation. Most of the studies reported are laboratory-based or are conducted in classrooms but under artificial conditions (e.g., students were asked to identify unfamiliar uses of familiar objects)...
Furthermore, a high percentage of the studies focus on written feedback, and only a few on oral or other types of feedback, although oral feedback is more frequently observed in teachers’ daily assessment practices (see Hargreaves et al., 2000).
...we argue that formative feedback, when studied in the classroom context, is far more complex than it tends to appear in most studies, syntheses, or meta-analyses. Feedback practice is more than simply giving students feedback orally or in written form with externally or self-generated information and descriptive comments. We argue that feedback that is not used by students to move their learning forward is not formative feedback. We thus suggest that feedback needs to be examined more closely in the classroom setting, which should ultimately contribute to an expanded and more accurate and precise definition" (p. 219).
Some interesting findings,
"Research has made clear that students hardly read teachers’ written feedback or know how to interpret it (Cowie, 2005a, 2005b)" (p. 225).
"most of the publications on formative assessment and feedback include examples of strategies and techniques that teachers can use. Most of them, however, do not provide empirical evidence of the impact of these strategies on student learning; nor do they link them to contextual issues that may affect the effectiveness of the strategies...
there is a lack of studies conducted in real classrooms—the natural setting—where it would be important to see evidence that feedback strategies have substantive impact. Moreover, few studies have focused on feedback over extended periods or on factors that can moderate or mediate the effectiveness of feedback. Therefore, we cannot generalize what we know from the literature to classroom practices...
Rather than persisting with our common belief that feedback is something doable for teachers, we should strive to study formative assessment practices in the classroom, including feedback, to help teachers and students to do better. Given these unanswered questions, we need different and more trustworthy strategies of inquiry to acquire firsthand knowledge about feedback in the classroom context and to systematically study its effects on student learning" (p. 226).
The Focus of Feedback in English Schools:

The poor research studies used as the basis for deciding that "Feedback" is a high-impact strategy may account for the lack of success when schools focus on it as an initiative.

Wiliam (2019) notes,
"the EEF’s emphasis on feedback as the single most cost-effective intervention justified 'additional pressures on teachers from inspectors that are ultimately not productive' even though few, if any, of the studies that the EEF included in its review looked at the effects of marking in school."
Cohen (2019) also comments on this lack of success, writing that Christodoulou (2016),
"drawing on similar foundational assumptions, takes on the issue of feedback in the context of the English school system. She deals with the puzzling failure of Assessment for Learning. AfL was a government programme for rolling out feedback strategy based on strong evidence from a range of scholarly sources, including experimental evidence. It commanded strong support among policymakers and a great deal of the teaching profession. It was successfully implemented at least to the extent that teachers in England now provide a great deal more feedback than before, and more than teachers in other countries. Yet, the theorised improved student outcomes did not materialise. 
Christodoulou (2016) uses a range of evidence to argue that part of the reason for this is a failure to differentiate formative and summative assessment."
Workload & Feedback:

Glenn Pearsall, one of the most popular teaching experts in Australia, links teacher workload with inefficient feedback practices. In his TER podcast, he said,
"Great feedback which the kid does not act on is a waste of both the kid and the teacher's time!"
David Didau in his excellent blog on feedback looks at the key studies used by Hattie and the EEF and confirms Wiliam's analysis and also shows feedback is complicated and nuanced.

He also points out that even though most researchers regard feedback as an important teaching strategy, PISA found feedback NEGATIVELY correlated with science performance.



[Chart: "The Negative Influences" - from PISA 2015, Volume 2, p. 228.]

Price, Handley, Millar & O'Donovan (2010), in Feedback: all that effort, but what is the effect?, confirm that feedback is complex and add that relationships are important, e.g.,
"Measuring ‘effectiveness’ requires clarity about the purpose of feedback. Unless it is clear what feedback is trying to achieve, its success cannot be judged... 
Although a frequently used term, feedback does not have clarity of meaning. It is a generic term which disguises multiple purposes which are often not explicitly acknowledged. The roles attributed to feedback fall broadly into five, but not entirely delineated discrete, categories: correction, reinforcement, forensic diagnosis, bench-marking and longitudinal development (feed-forward)... 
Accurate measurement of feedback effectiveness is difficult and perhaps impossible. Furthermore, the attempt to measure effectiveness using simple indicators – such as input measures or levels-of-service – runs the risk of producing information which is misleading or invalid and which may lead to inappropriate policy recommendations."
Prof Paul Kirschner details the problems with feedback here.

Michael Pershan writes an excellent and insightful blog on feedback. His basic argument is that the evidence is poor and the notion of feedback is too general to be of any help to teachers.

The case against Feedback - here.
