Other Issues

Conflict of Interest:

'It is difficult to get a man to understand something, when his salary depends on his not understanding it.' Upton Sinclair
Professor Ewald Terhardt (2011)
'A part of the criticism on Hattie condemns his close links to the New Zealand Government and is suspicious of his own economic interests in the spread of his assessment and training programme (asTTle). Similarly, he is accused of advertising elements of performance-related pay of teachers and he is being criticised for the use of asTTle as the administrative tool for scaling teacher performance. His neglect of social backgrounds, inequality, racism, etc., and issues of school structure is also held against him. This criticism is part of a negative attitude towards standards-based school reform in general. However, there is also criticism concerning Hattie’s conception of teaching and teachers. Hattie is accused of propagating a teacher-centred, highly directive form of classroom teaching, which is characterised essentially by constant performance assessments directed to the students and to teachers' (p434).
Hattie is once again promoting asTTle in his collaboration with Pearson - What Works In Education - but fails to divulge his financial interest in the program (p13).

Professor John O'Neill wrote a timely warning, in 2012, about Hattie's influence on Education policy and his financial interest in the solutions proposed:
The 'discourse seeks to portray the public sector as ‘ineffective, unresponsive, sloppy, risk-averse and innovation-resistant’ yet at the same time it promotes celebration of public sector 'heroes' of reform and new kinds of public sector 'excellence'. Relatedly, Mintrom (2000) has written persuasively in the American context, of the way in which ‘policy entrepreneurs’ position themselves politically to champion, shape and benefit from school reform discourses' (p2).
Poulsen (2014) is similarly suspicious of Hattie's economic interests,
'.... Hattie has become so sure of his conclusions that he has set up a (worldwide?) course and consulting company that will spread the research results and train teachers to teach more effectively... Here he joins a well-known American tradition of promptly converting new knowledge into new business.

But there is, and will always be, a difference between an open research enterprise and a profit-oriented course and consultancy project: the latter must work every time, while research constantly strives to falsify its own former truths. The transformation of knowledge into commercial projects, of course, also concerns my own work as a former researcher and, since then, consultant and model developer. I maintain a continuous, self-critical inner reflection on how reasonably I accept and incorporate the thinking and research of others, e.g. Hattie's' (p5, translated from Danish).
As is Sjøberg (2012) and Rømer (2016),
'... Hattie wants to build an educational position and practice; a project that is enhanced by the fact that Hattie and his consultants are very active in developing and selling educational concepts, for example, Visible Learning+' (p1).
McKnight & Whitburn (2018) in Seven reasons to question the hegemony of Visible Learning continue the debate on conflict of interest,
'The Visible Learning cult is not about teachers and students, but the Visible Learning brand. It is not dialogic, it brooks no argument and is sedimented and corralled by its trademarks and proprietary symbols. As we comply, we wonder if we should; assent and unease intertwined are the defining reactions to neoliberalism’s imperatives. Educators need to be alert to affect here, and to what it may mean... 
Teacher professionalism requires “working with colleagues in collaborative cultures of help and support as a way of using shared expertise to solve the on-going problems of professional practice, rather than engaging in joint work as a motivational device to implement the external mandates of others” (Hargreaves & Goodson, 1996, p. 20). Visible Learning demands the latter, and not coincidentally, also requires the consumption of the artefacts, including the professional development sessions, the books, the websites, and the videos. These almanacs, with their tips for teachers, offer quick fix solutions without addressing the crises of education in an increasingly complex world (Slee, 2011). In the case of the Visible Learning brand, certain teachers become licenced as fans, for example in Hattie’s collected case studies of impact (2016). 
Hattie has effected the shift from intrinsic accountability to extrinsic accountability, negating teachers’ awareness that “professional knowledge is provisional, unfinalizable, culturally produced and historically situated” (Locke, 2015, p. 82). It cannot meaningfully be reduced to a list of strategies' (p12-13).
Professor Gene Glass, together with 20 other distinguished academics, also concurs with John O'Neill in 50 Myths and Lies That Threaten America's Public Schools: The Real Crisis in Education.
'The mythical failure of public education has been created and perpetuated in large part by political and economic interests that stand to gain from the destruction of the traditional system. There is an intentional misrepresentation of facts through a rapidly expanding variety of organizations and media that reach deep into the psyche of the nation's citizenry. These myths must be debunked. Our method of debunking these myths and lies is to argue against their logic, or to criticize the data supporting the myth, or to present more credible contradictory data' (p4).
Professor Pierre-Jérôme Bergeron, in his voicEd interview, also talks about Hattie's conflict of interest and Hattie's reluctance to address the details of his critics' arguments. Listen here - at 17min 46sec.

Hattie quotes Cohen (1985) 
'New and revolutionary ideas in teaching will tend to be resisted rather than welcomed with open arms, because every successful teacher has a vested intellectual, social, and even financial interest in maintaining the status quo' (p252).
Given Hattie's collaboration with Pearson, who paid Hattie for the intellectual rights to Visual Laboratories, and Hattie's financial interest in the solutions provided to schools, this is a remarkable double standard!

It is disappointing that Hattie once again criticises the easy target - the teacher.

Dr Jonathan Becker similarly criticises Marzano for his lack of independence, due to Marzano's financial arrangement with Promethean in his research.

Nick Rose goes into more detail regarding financial conflicts of interest and research.

Joshua Katz's YouTube presentation went viral regarding financial conflict of interest in Education.



Contradictions & Inconsistencies:


Hattie defines 'influence' as any effect on student achievement. But this is too vague and leads to many contradictions & inconsistencies. Hattie states (preface) 
'The book is not about classroom life, and does not speak of its nuances.'
However, his influences consist of a large number of classroom nuances: behaviour, feedback, motivation, ability grouping, worked examples, problem-solving, micro teaching, teacher-student relationships, direct instruction, vocabulary programs, concept mapping, peer tutoring, play programs, time on task, simulations, calculators, computer-assisted instruction, etc.

Also in his preface he states, 

'It is not a book about what cannot be influenced in schools - thus critical discussions about class, poverty, resources in families, health in families, and nutrition are not included.'
Yet he has included these in his rankings:

Home environment, d = 0.57 rank 31
Socioeconomic status, d = 0.57 rank 32
Pre-term birth weight, d = 0.54 rank 38
Parental involvement, d = 0.51 rank 45
Drugs, d = 0.33 rank 81
Positive view of own ethnicity, d = 0.32 rank 82
Family structure, d = 0.17 rank 113
Diet, d = 0.12 rank 123
Welfare policies, d = -0.12 rank 135

In Hattie's 2012 update of VL he contradicts his 2009 preface,
'I could have written a book about school leaders, about society influences, about policies – and all are worthwhile – but my most immediate attention is more related to teachers and students: the daily life of teachers in preparing, starting, conducting, and evaluating lessons, and the daily life of students involved in learning' (preface).
Blichfeldt (2011) also discusses these contradictions,
'Possible interaction effects between variables such as subjects, age, class, economy, family resources, health and nutrition are not part of the study, and the variables are in any case only narrowly represented.'
Hattie also promotes Bereiter’s model of learning, 
'Knowledge building includes thinking of alternatives, thinking of criticisms, proposing experimental tests, deriving one object from another, proposing a problem, proposing a solution, and criticising the solution' (VL p27). 
'There needs to be a major shift, therefore, from an over-reliance on surface information (the first world) and a misplaced assumption that the goal of education is deep understanding or development of thinking skills (the second world), towards a balance of surface and deep learning leading to students more successfully constructing defensible theories of knowing and reality (the third world)' (p28).
Prof Jérôme Proulx, in his Critical essay on the work of John Hattie for teaching mathematics: an entrance from mathematics education (translated from French), explains the contradiction,
'Ironically, Hattie implicitly criticises himself if we rely on the affirmations at the beginning of his book, since there he affirms the importance of all three types of learning in education... 
So with this comment, Hattie discredits the very work on which he bases his judgments about what represents good ways to teach. Indeed, since the studies he has synthesised to draw his conclusions do not align with what he himself says represents good teaching, how can he rely on them to draw conclusions about teaching itself?'
Nilholm, Claes (2017), in Is John Hattie in Trouble? (translated from Swedish),
'Early in his book, he points out that the school has a broad assignment, but then he builds a model for teaching and learning that deals only with the knowledge mission (or rather on how knowledge performance can be improved)' (p3).

Teacher Training and Experienced Teachers: 


Hattie uses his own research on 65 teachers comparing National Board Certified (NBC) with Non-NBC teachers and reports this in the last chapter of VL. But Hattie uses the research for a very different purpose: to demonstrate the difference between expert and experienced teachers. Hattie makes the arbitrary judgment that NBC-certified teachers are 'Experienced Experts' while Non-NBC teachers are merely 'Experienced'. He does not use student achievement but rather the arbitrary criteria displayed in the graph below.

Podgursky (2001) in his critique, describes them as 'nebulous standards'. Podgursky is also rather suspicious of Hattie's rationale for not using student achievement,

'It is not too much of an exaggeration to state that such measures have been cited as a cause of all of the nation’s considerable problems in educating our youth. . . . It is in their uses as measures of individual teacher effectiveness and quality that such measures are particularly inappropriate' (p2).
Hattie concludes that expert teachers (NBC) outperform Non-NBC teachers on almost every criterion (p260).



Harris and Sass (2009) report that the National Board for Professional Teaching Standards (NBPTS), who administer the NBC, generate around $600 million in fees each year (p4). Harris and Sass's much larger study, 'covering the universe of teachers and students in Florida for a four-year span' (p1), contradicts Hattie's conclusion: 'we find relatively little support for NBC as a signal of teacher effectiveness' (p25).

It is interesting that much of Hattie's consulting work for schools involves measuring teachers on the arbitrary categories listed on the graph; a significant omission is Teacher Subject Knowledge.

Yet, using the same type of research, e.g., Hacke (2010) comparing NBC with Non-NBC teachers, he uses the low effect size to conclude that Teacher Education is a DISASTER. See Hattie's slides from his 2008 Nuthall lecture.




The whole may be more than the sum of the parts:


Bruce Springsteen says in his tribute to U2, using their song Vertigo - 'Uno, dos, tres, catorce', translated 1, 2, 3, 14! - that this is the correct maths of rock-n-roll, and maybe for classrooms too.

Also, Hattie's rankings distract us from some of the more useful teaching initiatives. For example, Professor Jo Boaler and Charles Lovitt focus on combining a number of influences: problem-based learning, simulations, time on task, inquiry, and visual methods. Yet Hattie rates these individual influences very low and ignores the major effect, and usual classroom dynamic, of combining a number of influences together.


Hattie's Aim:


Hattie uses the REDUCTIONIST approach by attempting to break down the complexity of teaching into simple discrete categories or influences.

However, Nick Rose has alerted me to another form of reductionism defined by Daniel Dennett, 'Greedy Reductionism', which occurs when,
'in their eagerness for a bargain, in their zeal to explain too much too fast, scientists and philosophers ... underestimate the complexities, trying to skip whole layers or levels of theory in their rush to fasten everything securely and neatly to the foundation.'
I think this latter definition better describes Hattie's methodology.

Additionally, Hattie only uses univariate analysis, but complex systems require multivariate analysis. As Prof Jordan Peterson states,

'No social scientist worth their salt uses univariate analysis.'




Allerup (2015) in 'Hattie's use of effect size as the ranking of educational efforts',
'it is well known that analyses in the educational world often require the involvement of more dimensional (multivariate) analyses' (p8).
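The univariate/multivariate point can be illustrated with a small sketch (entirely synthetic, illustrative data - nothing here comes from Hattie's datasets): when an 'influence' is correlated with a confounder such as prior ability, a univariate regression will credit the influence with an effect that actually belongs to the confounder.

```python
import random

random.seed(0)

# Synthetic data: 'achievement' depends only on prior ability (a confounder),
# not on the 'influence' itself; the influence merely correlates with ability.
n = 1000
ability = [random.gauss(0, 1) for _ in range(n)]
influence = [a + random.gauss(0, 1) for a in ability]        # correlated with ability
achievement = [2 * a + random.gauss(0, 1) for a in ability]  # driven by ability alone

def slope(x, y):
    """Univariate least-squares slope of y on x."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# A univariate analysis finds a substantial 'effect' of the influence on
# achievement, even though its true direct effect is zero by construction.
print(slope(influence, achievement))
```

A multivariate model that also included ability would separate the two; the univariate one cannot, which is exactly why ranking influences one at a time is so fragile.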
Hattie states: 
'The model I will present ... may well be speculative, but it aims to provide high levels of explanation for the many influences on student achievement as well as offer a platform to compare these influences in a meaningful way... I must emphasise that these ideas are clearly speculative' (p4).
Hattie uses the Effect Size (d) statistic to interpret, compare and rank educational influences.

The effect size is supposed to measure the change in student achievement; a controversial topic in and of itself (there are many totally different concepts of what achievement is - see here). 
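For readers unfamiliar with the statistic: the effect size Hattie uses is essentially Cohen's d, the difference between two group means divided by a pooled standard deviation. A minimal sketch (the score lists below are purely illustrative):

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardised mean difference (Cohen's d) using a pooled SD."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Illustrative scores: mean difference of 1, pooled SD of 2, so d = 0.5.
print(cohens_d([2, 4, 6], [1, 3, 5]))  # 0.5
```

Note that d says nothing about what was measured: averaging d values computed from tests of entirely different constructs is precisely the apples-to-oranges problem.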


In addition, surprisingly, Hattie includes many studies that did not measure achievement at all, but rather something else e.g., IQ, hyperactivity, behavior, and engagement.

Also, Hattie claims the studies used were of robust experimental design (p8). However, a number of peer reviews have shown that he used studies with the much poorer design of simple correlation, which he then converts into an effect size (often incorrectly - see Wecker et al (2016, p27)). Hattie then ranks these effect sizes from largest to smallest.
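The conversion the reviewers refer to is the standard algebraic one found in meta-analysis texts, d = 2r/√(1 − r²). A sketch (my own illustration, not Hattie's code) - note the conversion changes only the arithmetic, not the design, so a correlation from an uncontrolled study remains correlational evidence:

```python
import math

def r_to_d(r):
    """Convert a Pearson correlation r into Cohen's d via d = 2r / sqrt(1 - r^2).
    This algebraic step does NOT turn a correlational finding into
    experimental, causal evidence about an intervention."""
    return 2 * r / math.sqrt(1 - r**2)

# Even modest correlations yield impressive-looking effect sizes:
print(r_to_d(0.3))  # about 0.63
print(r_to_d(0.5))  # about 1.15
```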

The disparate measures of student achievement lead to the classic problem of comparing apples to oranges and have caused many scholars to question the validity and reliability of Hattie's effect sizes and rankings, e.g., Higgins and Simpson (2011, p199):

'We argue the process by which this number has been derived has rendered it practically meaningless.'
Wecker et al (2016)
'The reconstruction of Hattie's approach in detail using examples thus shows that the methodological standards to be applied are violated at all levels of the analysis. As some of the examples given here show, Hattie's values are sometimes many times too high or low. In order to be able to estimate the impact of these deficiencies on the analysis results, the full analyzes would have to be carried out correctly, but for which, as already stated, often necessary information is missing. However, the amount and scope of these shortcomings alone give cause for justified doubts about the resilience of Hattie's results' (p31).
'the methodological claims arising from Hattie's approach, and the overall appropriateness of this approach suggest a fairly clear conclusion: a large proportion of the findings are subject to reasonable doubt' (p35).
The Common Language Effect Size (CLE) is a probability statistic often used to interpret the effect size: the probability that a randomly chosen score from the treatment group exceeds a randomly chosen score from the control group. However, three peer reviews showed Hattie calculated all CLEs incorrectly (he calculated probabilities that were negative or greater than 1!). As a result, he now claims the CLE statistic is not important and focuses instead on the interpretation that an effect size of d = 0.4 is the hinge point, claiming this is equivalent to a year's progress. There are, however, significant problems with this interpretation.
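Under the usual assumptions (two normal distributions with equal variance), the CLE is Φ(d/√2) (McGraw & Wong, 1992), and being a probability it must lie strictly between 0 and 1 - negative values, or values above 1, are impossible. A sketch:

```python
import math

def common_language_effect_size(d):
    """CLE (McGraw & Wong, 1992): the probability that a random score from
    the treatment group exceeds a random score from the control group,
    assuming normal distributions with equal variance. Always in (0, 1)."""
    z = d / math.sqrt(2)
    # Standard normal CDF via the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(common_language_effect_size(0.0))  # 0.5 - no effect means a coin flip
print(common_language_effect_size(0.4))  # about 0.61 for Hattie's hinge point
```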

Hattie offers another highly doubtful interpretation of probability in a recent interview with Hanne Knudsen (2017), John Hattie: I'm a statistician, I'm not a theoretician. Hattie states,
'The research studies in VL offer probability statements – there are higher probabilities of success when implementing the influences nearer the top than bottom of the chart' (p7).

Hattie's Interpretation of the Meta-analyses:

'No methodology of science is considered respectable in the research community when experiment design is unable to control a rampant blooming of subjectivity' Myburgh (2016, p10).
Meta-analysis, as a methodology, has been widely criticised, for not representing the original studies faithfully.

Yet, Hattie takes this interpretation problem to another level, as his methodology is a META-meta-analysis, or MEGA-analysis (Snook et al, 2009).
'the methodology used (by Hattie), neglects the original theory that drives the primary studies it seeks to review' Myburgh (2016, p4).
Wecker et al (2016, p35) are also critical of this META - meta-analysis methodology,
'Hattie's work makes clear, a single meta-analysis cannot conclusively answer the question of the effectiveness of an influencing factor anyway. Therefore, meta-analyses should be updated when a significant number of additional primary studies have been added, but not in a second-stage meta-analysis, as in Hattie's work, but as a first-stage meta-analysis based on all existing primary studies.'
Myburgh (2016, p19) notes,
'Hattie assures the research community that he has arrived at sound conclusions based on his confidence that his mega-analysis of meta-analyses consists of quality studies, that the effect sizes faithfully represent a review of the original data and that he adequately explores moderators.'
Yet Hattie uses a wide range of meta-analyses which use TOTALLY different experimental designs, on different groups of people (university students, doctors, nurses, tradesmen, and sometimes high school students!), with vastly different measures of student achievement or often no measure of achievement at all!

As Professor Peter Blatchford points out about Hattie's VL,
'it is odd that so much weight is attached to studies that don't directly address the topic on which the conclusions are made' (p13).
Wecker et al (2016, p28) show that Hattie mistakenly includes studies that do not measure academic performance.

DuPaul & Eckert (2012) comment on the broader issues of meta-analysis that have been covered in other sections. Regarding proper experiments:  
'randomised control trials are considered the scientific gold standard for evaluating treatment effects ... the lack of such studies in the school-based intervention literature is a significant concern' (p408).
Regarding comparing different studies: 
'It is difficult to compare effect size estimates across research design types. Not only are effect size estimates calculated differently for each research design, but there appear to be differences in the types of outcome measures used across designs' (p408).

The Problem of Breaking Down the Complexity of Teaching into Simple Categories, Influences or 'Buckets':

'The partitioning of teaching into smallest measurable units, a piecemeal articulation of how to improve student learning, is not too removed from the work of Taylor over 100 years ago. Despite its voluminous and fast expanding literatures, educational administration remains rooted to the same problems of last century' Eacott (2017, p10).
McKnight & Whitburn (2018) in Seven reasons to question the hegemony of Visible Learning,
'Visible Learning, despite all its complicated effect sizes, equations and figures, is ultimately too simple a mantra. To assume that teachers can see what students see is both ableist and sexist (predicated on masculinist vision), and also arrogant. To reduce teaching to what works is to misunderstand it. To define growth only as getting “to the next level” (Hattie, 2017) is to narrow the potential meanings of success' (p17-18).
Professor Robert Sapolsky in his course 'Introduction to Human Behavioral Biology' talks of complexity (see 47:20 - 48:30).


What about the sum of the Parts?:


Bruce Springsteen inducts U2 in the hall of fame:
'Uno, dos, tres, catorce. That translates as one, two, three, fourteen. That is the correct math for a rock and roll band. For in art and love and rock and roll, the whole had better equal much more than the sum of its parts.'


I think it highly likely teaching is also more than the sum of its parts!

Prof John O'Neill agrees, in Material fallacies of education research evidence and public policy advice (p5),
'real classrooms are all about interactions among variables, and their effects. The author implicitly recognises this when he states that ‘a review of non-metaanalytic studies could lead to a richer and more nuanced statement of the evidence’ (p. 255). Similarly, he acknowledges that when different teaching methods or strategies are used together their combined effects may be much greater than their comparatively small effect measured in isolation (p. 245).'


'Garbage in, Gospel out' Dr Gary Smith (2014)

What has often been missed is that Hattie prefaced his book with significant doubt:
'I must emphasise these are clearly speculative' (p4).
Yet his rankings have taken on 'gospel' status due to: the major promotion by politicians, administrators and principals (it's in their interest, e.g., class size); very little contesting by teachers (they don't have the time, or who is going to challenge the principal?); and limited access to scholarly critiques - see Gary Davies' excellent blog on this.

'Materialists and madmen never have doubts' G. K. Chesterton

Interestingly, his reservation has changed to an authority and certainty that is at odds with the caution that ALL of the authors of his studies recommend, e.g., on class size and ability grouping. Caution is due to the lack of quality studies, the inability to control variables, major differences in how achievement is measured, and the many confounding variables. Also, there is significant critique by scholars who identify the many errors that Hattie makes, from major calculation errors and excessive inference to misrepresenting studies, e.g., Higgins and Simpson (2011) and Wecker et al (2016).

Also, in his presentations, he describes many of these low ranked influences as DISASTERS! 

This seems to DEFY widespread teacher experience.


PEDAGOGY

Nepper Larsen, Steen (2014) Know thy impact – blind spots in John Hattie’s evidence credo.
'future expectations that Hattie evidence-based credos and meta-studies can solve all the problems in learning institutions forget that not very many decades ago the absolute buzz-words (at least in major parts of Denmark and Germany) were "experiential learning," "experiential pedagogy," "critical reform pedagogy," and "pedagogy of resistance." Why do we ‘forget’ to talk with dignity and curiosity about the teachers’ and students’ experience (i.e. Erfahrung in German, erfaring in Danish), meaning "elaborated experience," a differentiation one cannot make in English? What has become of the enlightened citizen? Why and how is evidence ‘imperialising’ the right to define without any attempt to inherit and renew the past’s educational vocabulary? 
We have come to live in a time without profound historical awareness in which we must acknowledge that Hattie, as the world’s most influential and successful educational thinker, does not contribute to the renewal of the pedagogical vocabulary' (p5).
Is Hattie’s Evidence Stronger than Other Researchers' or Widespread Teacher Experience?

A summary of the major issues scholars have found with Hattie's work (details on the page links on the right):
  • Hattie misrepresents studies e.g. peer evaluation in 'self-report' and studies on emotionally disturbed students are included in 'reducing disruptive behavior'.
  • Hattie often reports the opposite conclusion to that of the actual authors of the studies he reports on, e.g. 'class-size', 'teacher training', 'diet' and 'reducing disruptive behavior'.
  • Hattie jumbled together and averaged the effect sizes of different measurements of student achievement: teacher tests, IQ, standardised tests, and physical tests like rallying a tennis ball against a wall.
  • Hattie jumbled together and averaged effect sizes for studies that do not use achievement but something else, e.g. hyperactivity in the Diet study, i.e., he uses these as proxies for achievement, which he advised us NOT to do in his 2005 ACER presentation.
  • The studies are mostly about non-school or abnormal populations, e.g., doctors, nurses, university students, tradesmen, pre-school children, and 'emotionally/behaviorally' disturbed students.
  • The US Education Dept benchmark effect sizes per year level indicate another layer of complexity in interpreting effect sizes: studies need to control for the age of students as well as the time over which the study runs. Hattie does not do this.
  • Related to the US benchmarks is Hattie's use of d = 0.40 as the hinge point of judgments about what is a 'good' or 'bad' influence. The U.S. benchmarks show this is misleading.
  • Most of the studies Hattie uses are not high quality randomised controlled studies but the much, much poorer quality correlation studies.
  • Most scholars are cautious/doubtful in attributing causation to separate influences in the precise surgical way in which Hattie infers. This is because of the unknown effect of outside influences or confounds.
  • Hattie makes a number of major calculation errors, e.g., negative probabilities.
