Behavior

Decreasing Disruptive Behavior:

Hattie averaged 3 meta-analyses to get his Effect Size (ES) = 0.34.


Note: The Common Language Effect size (CLE) of -49% reported for this influence is impossible: a CLE is a probability, so it must lie between 0% and 100%. This error recurs throughout the book, and Hattie has admitted to making it.
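The standard conversion from an effect size d to a CLE (McGraw and Wong's formula, CLE = Φ(d/√2)) shows why a negative percentage cannot occur: the output is a normal-distribution probability. A minimal sketch:

```python
from statistics import NormalDist

def cle(d):
    """Common Language Effect size: the probability that a randomly chosen
    treatment-group score exceeds a randomly chosen control-group score,
    assuming normal distributions with equal variance."""
    return NormalDist().cdf(d / 2 ** 0.5)

# Even for Hattie's small ES of 0.34, the CLE is a probability near 0.6;
# even a negative ES (e.g. Reid's -0.69) still maps inside (0, 1).
print(cle(0.34))
print(cle(-0.69))
```

Whatever the sign of d, the result stays between 0 and 1, so a CLE of -49% can only be a calculation error.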

This small ES is below Hattie's hinge point of 0.40. Hattie often described influences with small effect sizes using derogatory terms like "disasters" and "also-rans".

Reid et al (2004) appear to contradict the other two studies by showing that decreasing disruptive behavior actually reduced student achievement, with an ES = -0.69.

How can 'reducing disruptive behavior' decrease achievement to that extent?

Reid et al compared the achievement of students labeled with 'emotional/behavioral disturbance' (EBD) with a 'normative' group. They used a range of measures to identify EBD, e.g., students currently enrolled in programs for severe behavior problems, such as psychiatric hospitals (p. 132).

The ES was calculated as [(EBD achievement) - (Normative achievement)] / SD (p. 133).

The negative ES indicates the EBD group performed well below the normative group. The authors conclude: 
"students with EBD performed at a significantly lower level than did students without those disabilities across academic subjects and settings" (p. 130).
Hattie misrepresents this study: it does not investigate the strategy of 'decreasing disruptive behavior', and its sample is a clinical population rather than typical students. Consequently, this meta-analysis should not be included in Hattie's work.

Hattie's lack of consistency in interpreting ES is a major problem. For example, in Frazier et al (2007) (see Behavior - Other Categories), students were identified as having ADHD, and proxies for achievement were used: GPA, class and parent ratings, etc. (p. 51).

The ADHD group was the control group and the normative group was the experimental group, so the ES was calculated as (Normative - ADHD) / SD, yielding a large positive result.


This is the REVERSE of the Reid et al (2004) study, which calculated the ES as (EBD - Normative) / SD, yielding a large negative result.


Simpson (2017) identified this problem,
"the experimental condition in some studies and meta-analyses is the comparison condition in others" (p. 455).
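Simpson's point can be illustrated with a toy calculation (the group means and SD below are hypothetical, not data from either study): the same two groups produce effect sizes of opposite sign depending on which one is treated as the comparison condition.

```python
def effect_size(treatment_mean, comparison_mean, pooled_sd):
    """Standardized mean difference: (treatment - comparison) / SD."""
    return (treatment_mean - comparison_mean) / pooled_sd

# Hypothetical scores: a clinical group averaging 85, a normative group 100, SD 15.
clinical, normative, sd = 85.0, 100.0, 15.0

# Reid et al (2004) direction: (EBD - Normative) / SD
print(effect_size(clinical, normative, sd))   # -1.0

# Frazier et al (2007) direction: (Normative - ADHD) / SD
print(effect_size(normative, clinical, sd))   # 1.0
```

Same data, same achievement gap, opposite signs, so averaging or ranking such effect sizes together is meaningless.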
Skiba et al. (1985) investigated reinforcement and feedback as strategies for reducing behavioral problems. They measured indicators of behavior, like noncompliance, off-task, and withdrawal; they did not measure achievement.

They concluded,
"Results indicated that both reinforcement and feedback type procedures are highly effective in the remediation of classroom behavior problems across a variety of behaviors, settings, and administrative arrangements" (p. 472).
So the ES is an indicator of behavior, NOT achievement.

Stage et al (1997) also measured disruptive behavior, not achievement,
"the treated students reduced their disruptive behavior compared to nontreated students" (from abstract).
Update:

Hattie consistently referred to these three meta-analyses up until February 2023.


Also, Hattie changed the numerical ranking to a color-coded category (e.g., in June 2019).


However, at some point in 2023, Hattie removed Reid et al.


As a result, the ES skyrocketed from 0.34 to 0.86.
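The jump is simple arithmetic: if three meta-analyses average 0.34 and one of them is Reid's -0.69, the other two (Skiba and Stage) must average about 0.86, which is exactly what remains once Reid is dropped. A sketch of the back-calculation (the individual Skiba and Stage values are inferred only as a combined total, not taken from the book):

```python
reid = -0.69        # Reid et al (2004) effect size
avg_three = 0.34    # Hattie's published average over the 3 meta-analyses

# Back out what the other two meta-analyses (Skiba, Stage) must sum to:
other_two_sum = 3 * avg_three - reid   # 1.02 + 0.69 = 1.71

# The average once Reid et al is removed:
avg_without_reid = other_two_sum / 2
print(avg_without_reid)  # approximately 0.855, i.e. the reported 0.86
```

One study's removal more than doubles the headline figure, which is the reliability problem in miniature.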

A key tenet of the scientific method is reliability; this simple analysis demonstrates how unreliable Hattie's rankings are.

2 comments:

  1. Hattie's entire approach of creating a league table of effect sizes is totally flawed. There seems to be a growing acceptance of this in education - effect sizes for different types of studies are simply not comparable, and they cannot be ranked in the way Hattie would like us to believe. Typically, the higher the quality of the study, the smaller the effect size; the poorer the quality, the larger the effect size. Good on you for dissecting this lot though.

    Replies
    1. Thanks Derek, I totally agree with you. The more I read the studies, the more misrepresentation I see. It is amazing this research is used to decide educational policy.
