Decreasing Disruptive Behavior - ES = 0.34 (Rank = 80)
Classroom Management - ES = 0.52 (Rank = 42)
Classroom Cohesion - ES = 0.53 (Rank = 39)
Classroom Behavioral - ES = 0.80 (Rank = 6)
There is considerable overlap among these categories, which creates a problem of confounding variables. Teacher experience suggests that improving behavior should improve achievement, and some of these results appear to verify this. However, the result for 'decreasing disruptive behavior' is below Hattie's hinge point of d = 0.40, and Hattie often used polemical labels like "Disasters" to describe influences in that range.
Decreasing Disruptive Behavior:
Hattie used 3 meta-analyses to get his average ES = 0.34.
Note: the negative probability value (CLE = -49%) is a major mistake, now admitted by Hattie; a probability can never be negative.
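As a quick check: the CLE is defined as a probability, so -49% is impossible on its face. Below is a minimal sketch (not Hattie's code) of the standard McGraw & Wong (1992) conversion from Cohen's d to CLE; assuming this is the conversion Hattie intended, his own effect sizes give perfectly ordinary probabilities:

```python
# A minimal sketch: converting Cohen's d to the common language effect
# size (CLE) via McGraw & Wong (1992), CLE = Phi(d / sqrt(2)),
# where Phi is the standard normal CDF.
from statistics import NormalDist

def cle_from_d(d: float) -> float:
    """Probability that a random 'treated' student outscores a random
    'control' student, given effect size d."""
    return NormalDist().cdf(d / 2 ** 0.5)

print(f"d = 0.34  -> CLE = {cle_from_d(0.34):.1%}")   # ~59.5%
print(f"d = -0.69 -> CLE = {cle_from_d(-0.69):.1%}")  # ~31.3%, still positive
# A CLE of -49% is impossible: probabilities lie between 0% and 100%.
```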
Reid et al. (2004) seem to contradict the other 2 studies, reporting an ES of -0.69, which implies that decreasing disruptive behavior decreased student achievement!
How can 'reducing disruptive behavior' decrease achievement to that extent?
Reid, et al., compared the achievement of students labeled with 'emotional/behavioral' disturbance (EBD) with a 'normative' group. They used a range of measures to identify EBD, e.g., students currently in programs for severe behavior problems, such as psychiatric hospitals (p. 132).
The effect size was calculated as [(EBD achievement) − (Normative achievement)] / SD (p. 133).
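In symbols:

$$d = \frac{\bar{X}_{\mathrm{EBD}} - \bar{X}_{\mathrm{Normative}}}{SD}$$

Since the EBD group scores lower, the numerator, and hence d, comes out negative by construction.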
The negative effect size indicates the EBD group performed well below the normative group. The authors conclude:
"students with EBD performed at a significantly lower level than did students without those disabilities across academic subjects and settings" (p. 130).Hattie misrepresents this study, as it does not investigate 'decreasing disruptive behavior' as a teaching strategy. Additionally, the sample consists of 'abnormal' students. Consequently, this meta-analysis should not be included in Hattie's work.
Around 2020, Hattie removed this study and recalculated the average of the 2 remaining studies as ES = 0.86.
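The recalculation is easy to verify, assuming the original 0.34 was a simple unweighted mean of the three effect sizes:

$$\bar{d}_{\text{remaining}} = \frac{3(0.34) - (-0.69)}{2} = \frac{1.02 + 0.69}{2} \approx 0.86$$

Removing one negative outlier more than doubles the headline figure.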
Hattie's lack of consistency in interpreting effect sizes is a major problem. For example, in Frazier et al. (2007) (see below), students were identified as having ADHD, and proxies for achievement were used: GPA, class and parent ratings, etc. (p. 51).
The ADHD group was the control group and the normative group was the experimental group, so the effect size was calculated using (Normative − ADHD), giving a large positive result.
This is the REVERSE of the Reid et al. (2004) study, which calculated the effect size using (EBD − Normative), giving a large negative result.
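The reversal is pure arithmetic, as this minimal sketch shows (the group means and SD are hypothetical, chosen only to reproduce a |d| of 0.69):

```python
# A minimal sketch of the sign-convention problem: the same two groups
# yield effect sizes of opposite sign depending on which mean is
# subtracted. All numbers below are hypothetical.

def cohens_d(mean_a: float, mean_b: float, pooled_sd: float) -> float:
    return (mean_a - mean_b) / pooled_sd

ebd_mean, normative_mean, sd = 40.0, 50.0, 14.5  # hypothetical test scores

# Reid et al. (2004) convention: (EBD - Normative) -> negative
print(f"{cohens_d(ebd_mean, normative_mean, sd):+.2f}")  # -0.69

# Frazier et al. (2007) convention: (Normative - ADHD/EBD) -> positive
print(f"{cohens_d(normative_mean, ebd_mean, sd):+.2f}")  # +0.69
```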
Simpson identifies this problem in 'The misdirection of public policy: comparing and combining standardised effect sizes',
"the experimental condition in some studies and meta-analyses is the comparison condition in others" (p. 455).Skiba, et al. (1985) investigated reinforcement and feedback as strategies for reducing behavioral problems. They measured indicators of behavior, like noncompliance, off-task, and withdrawal; they did not measure achievement.
They concluded,
"Results indicated that both reinforcement and feedback type procedures are highly effective in the remediation of classroom behavior problems across a variety of behaviors, settings, and administrative arrangements" (p. 472).So the effect size is an indicator of behavior NOT achievement.
Stage et al. (1997) also measured disruptive behavior, not achievement,
"the treated students reduced their disruptive behavior compared to nontreated students" (from abstract).
A key tenet of the scientific method is reliability; this simple analysis demonstrates how unreliable Hattie's rankings are.
Simpson summarises,
"Using unequal comparisons or using unspecified ones makes it impossible to compare or combine effect sizes meaningfully" (p. 455).
"As such, using these ranked meta-meta-analyses to drive educational policy is misguided" (p. 451).
Classroom Management:
Hattie used one study, Marzano's (2003) "Classroom Management That Works," to obtain an effect size of 0.52.
The study does measure achievement and aims to use randomized control and experimental groups, so the result seems worthwhile.
Classroom Cohesion:
Hattie used 3 meta-analyses to get his average ES = 0.53.
Haertel et al. (1980) stated,
'The socio-psychological environment is typically measured by asking students to agree on a three-to-five point scale to such items as "The students enjoy their work in the class" and "The goals of the class are clear." The purpose of the study is to estimate the magnitude of this relationship between learning and the environment of the classroom, and the relation of its variability across grades, subject areas and aspects of the learning environment' (p. 113).
'Learning outcomes and gains, including student achievement, performance and self-concept, were found to be positively associated with student perceived Cohesiveness, Satisfaction, Task Difficulty, Formality, Goal Direction, Democracy and Material Environment. Negative associations were found with Friction, Cliqueness, Apathy and Dis-organisation' (p. 114).
They warn however that, 'given the correlational basis of much of the research, however rigorously controlled by conventional statistical methods, the next steps in the research should emphasise continued true experimentation' (p. 114).
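Since much of this evidence is correlational, it only reaches Hattie's d scale through a conversion. A common formula is d = 2r / √(1 − r²); whether Hattie's sources used exactly this conversion is an assumption, but it shows how modest correlations become headline effect sizes:

```python
# A minimal sketch of the standard correlation-to-d conversion,
# d = 2r / sqrt(1 - r^2). Whether Hattie's sources used exactly this
# formula is an assumption; the point is that correlational findings
# end up on the same scale as experimental ones.
import math

def d_from_r(r: float) -> float:
    return 2 * r / math.sqrt(1 - r ** 2)

for r in (0.10, 0.25, 0.40):
    print(f"r = {r:.2f} -> d = {d_from_r(r):.2f}")
# r = 0.10 -> d = 0.20
# r = 0.25 -> d = 0.52
# r = 0.40 -> d = 0.87
```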
Evans & Dion (1991) found a large effect size relating cohesion to group performance. However, the studies were mostly on small sports teams and military units. Also, they state performance criteria for these types of groups are simple, e.g., win/loss record of a sports team. In contrast, the performance criteria for normal work groups are not easily identified (p. 696).
They summarise,
"Given the nature of the studies used here, caution is suggested in generalising these results to 'real' work groups" (p. 690).
Mullen & Copper (1994) Once again studies were mostly on small military groups and sports teams. They used many of the studies that Evans & Dion used, thus introducing bias into Hattie's work. They isolated the factor of 'commitment to task' as the most influential aspect of cohesion rather than interpersonal attraction or group pride (p. 210).
They conclude, that the effect was larger in small groups compared to large groups and was larger in correlation studies compared to experimental studies (p. 210).
The relevance of these studies to classrooms is questionable given they are about small sports and military groups.
Classroom Behavioral:
Hattie used 3 meta-analyses to get his average ES = 0.80.
Mullen & Copper (1994) from 'Classroom Cohesion' state the general rule for meta-analyses - subjects should not be sampled from abnormal populations (p. 215). Yet, all of the following studies are from abnormal populations -students diagnosed with ADHD or a learning disability.
"Results showed that both methodologically strong and weak studies demonstrated significant behavioural deficits of children with learning disabilities compared to their non-disabled peers in each of five overall areas: on-task behaviour, off-task behaviour, conduct disorders, distractibility, and shy/withdrawn behaviour. Both observational and teacher rating data demonstrated these differences. Effect sizes for both groups of studies seemed to cluster around 1 standard deviation, suggesting noticeable and educationally significant impairment in the behaviour of children with disabilities" (p. 298).They caution,
"that drastic increases of handicapped students in certain mainstream classes may result in a more negative classroom climate, as mainstream teachers attempt to deal with increased behaviour problems" (p. 305).Below is the summary table of their results (p. 301), I can not find any achievement measures:
DuPaul & Eckert (2012) [note: I have not been able to find their original 1997 study but have found this updated study] The authors state,
"The purpose of the present meta-analysis is to provide a quantitative review of school-based intervention studies for students with ADHD" (p. 389).They conclude,
"The results of this meta-analysis indicate that school-based interventions for students with ADHD yield moderate to large effects for both behavioural and academic outcomes" (p. 401).Frazier et al. (2007) again identified participants as having ADHD and proxies for achievement were used: GPA, class ranking, parent/teacher rating, etc (p. 51).
The ADHD group was the control group and the normative group was the experimental group; this was done to obtain positive effect sizes (p. 51). So the result d = 0.71 means the ADHD group underperformed the normative group.
Note: Reid et al. (2004) above reversed the calculation, getting a negative result.
They summarise that there is a moderate to large discrepancy in academic achievement between individuals with ADHD and those without ADHD (p. 59).
These 3 studies simply compared students with ADHD or a learning disability with normal students and found that they performed worse in achievement.
Comment (Derek): Hattie's entire approach of creating a league table of effect sizes is totally flawed. There seems to be a growing acceptance of this in education: effect sizes for different types of studies are simply not comparable, and they cannot be ranked in the way Hattie would like us to believe. Typically, the higher the quality of the study, the smaller the effect size; the poorer the quality, the larger the effect size. Good on you for dissecting this lot though.
Reply: Thanks Derek, I totally agree with you. The more I read the studies, the more misrepresentation I see. It is amazing this research is used to decide educational policy.