Other Issues

Hattie's Aim:

Hattie takes a REDUCTIONIST approach, attempting to break down the complexity of teaching into simple, discrete categories or influences.

However, Nick Rose has alerted me to another form of reductionism defined by Daniel Dennett - 'Greedy Reductionism' - which occurs when,
'in their eagerness for a bargain, in their zeal to explain too much too fast, scientists and philosophers ... underestimate the complexities, trying to skip whole layers or levels of theory in their rush to fasten everything securely and neatly to the foundation.'
I think this latter definition better describes Hattie's methodology.

Additionally, Hattie uses only univariate analysis, but complex systems require multivariate analysis. As Prof Jordan Peterson states,

'No social scientist worth their salt uses univariate analysis.'




Allerup (2015) notes in 'Hattie's use of effect size as the ranking of educational efforts',
'it is well known that analyses in the educational world often require the involvement of more dimensional (multivariate) analyses' (p8).
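To illustrate why this matters, here is a minimal simulation in Python (my own illustrative sketch, not an analysis from Hattie or Allerup; the confounding 'ses' variable and the true effect of 0.2 standard deviations are invented assumptions). A univariate comparison of treated and untreated students inflates the effect size, while a multivariate regression that includes the confounder recovers something close to the true effect:

import numpy as np

# Simulated data: a confounder ('ses') drives both selection into an
# intervention and achievement. True intervention effect = 0.2 SD.
rng = np.random.default_rng(0)
n = 10_000
ses = rng.normal(size=n)                      # hypothetical confounder
treated = (ses + rng.normal(size=n)) > 0      # higher-ses pupils more often treated
score = 0.2 * treated + 0.5 * ses + rng.normal(size=n)

# Univariate view: Cohen's d ignoring the confounder (inflated well above 0.2)
pooled_sd = np.sqrt((score[treated].var(ddof=1) + score[~treated].var(ddof=1)) / 2)
d_univariate = (score[treated].mean() - score[~treated].mean()) / pooled_sd

# Multivariate view: OLS with the confounder included recovers roughly 0.2
X = np.column_stack([np.ones(n), treated, ses])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

print(f"univariate d: {d_univariate:.2f}, adjusted effect: {beta[1]:.2f}")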
Hattie states: 
'The model I will present ... may well be speculative, but it aims to provide high levels of explanation for the many influences on student achievement as well as offer a platform to compare these influences in a meaningful way... I must emphasise that these ideas are clearly speculative' (p4).
Hattie uses the Effect Size (d) statistic to interpret, compare and rank educational influences.

The effect size is supposed to measure the change in student achievement, itself a controversial topic (there are many totally different concepts of what achievement is - see here).
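For readers unfamiliar with the statistic, here is a minimal sketch in Python of the standard pooled-standard-deviation calculation of Cohen's d (the test scores are invented for illustration and are not taken from Visible Learning). The point to hold onto is that d inherits all the ambiguity of whatever 'achievement' measure is fed into it:

import numpy as np

def cohens_d(treatment, control):
    """Standardised mean difference using the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    pooled_var = ((n1 - 1) * np.var(treatment, ddof=1) +
                  (n2 - 1) * np.var(control, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(treatment) - np.mean(control)) / np.sqrt(pooled_var)

# Post-test scores for two hypothetical classes
treatment = np.array([72, 75, 78, 80, 83, 85])
control   = np.array([70, 72, 74, 76, 78, 80])
print(f"d = {cohens_d(treatment, control):.2f}")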


In addition, surprisingly, Hattie includes many studies that did not measure achievement at all, but rather something else, e.g., IQ, hyperactivity, behaviour, and engagement.

Also, Hattie claims the studies used were of robust experimental design (p8). However, a number of peer reviews have shown that he used studies with the much poorer design of simple correlation, which he then converts into an effect size (often incorrectly! see Wecker et al (2016, p27)). Hattie then ranks these effect sizes from largest to smallest.
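The textbook conversion the reviewers refer to is d = 2r / sqrt(1 - r^2). A minimal sketch (my own illustration of the standard formula, not a reconstruction of Hattie's actual calculations) shows how a correlation from a non-experimental study is turned into a league-table 'effect size':

import math

def r_to_d(r):
    """Standard conversion of a Pearson correlation into Cohen's d."""
    return 2 * r / math.sqrt(1 - r ** 2)

for r in (0.10, 0.24, 0.50):
    print(f"r = {r:.2f}  ->  d = {r_to_d(r):.2f}")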


The disparate measures of student achievement lead to the classic problem of comparing apples to oranges and have caused many scholars to question the validity and reliability of Hattie's effect sizes and rankings, e.g., Wecker et al (2016),
'The reconstruction of Hattie's approach in detail using examples thus shows that the methodological standards to be applied are violated at all levels of the analysis. As some of the examples given here show, Hattie's values are sometimes many times too high or too low. In order to estimate the impact of these deficiencies on the results, the full analyses would have to be carried out correctly, but, as already stated, the necessary information for this is often missing. However, the number and scope of these shortcomings alone give cause for justified doubts about the robustness of Hattie's results' (p31).
'the methodological claims arising from Hattie's approach, and the overall appropriateness of this approach suggest a fairly clear conclusion: a large proportion of the findings are subject to reasonable doubt' (p35).
The Problem of Breaking Down the Complexity of Teaching into Simple Categories, Influences or 'Buckets':
'The partitioning of teaching into smallest measurable units, a piecemeal articulation of how to improve student learning, is not too removed from the work of Taylor over 100 years ago. Despite its voluminous and fast expanding literatures, educational administration remains rooted to the same problems of last century' Eacott (2017, p10).
Eacott (2018) goes further,
'using a piecemeal reduction in tasks to their smallest components as Taylor did – and the execution of that practice with the greatest impact. Herein is the difference between Hattie’s work and other reform work in Australia in the previous decades. While initiatives such as Productive Pedagogies and Quality Teaching sought to provide teachers and school leaders with resources to reflect on and develop their craft, Hattie’s list of practices by effect size tells educators what to do to get maximum return. The difference is subtle, but important. 
Hattie seeks to tell teachers what to do on the basis of what is presented as robust scientific evidence (irrespective of its critiques and failure to systemically refute counter claims). 
To that end, the proliferation of brand Hattie throughout the Australian education system is akin to the spread of Taylorism across the U.S.A. school system in the early twentieth century and therefore consistent with Callahan’s claim of a tragedy. This is not to deny the absence of critique, but the at-scale adoption of professional associations (e.g. through keynotes, on-selling of books/products and promotion of workshops), policymakers (e.g. mobilizing the language of ‘Hattie says’ or ‘research from Hattie says’), and school systems (e.g. funding professional learning based on Visible Learning, and the identification of Hattie schools) is evidence of a largely uncritical adoption of the work when no dissenters or counter arguments are included' (p6).
McKnight & Whitburn (2018) in 'Seven reasons to question the hegemony of Visible Learning',
'Visible Learning, despite all its complicated effect sizes, equations and figures, is ultimately too simple a mantra. To assume that teachers can see what students see is both ableist and sexist (predicated on masculinist vision), and also arrogant. To reduce teaching to what works is to misunderstand it. To define growth only as getting “to the next level” (Hattie, 2017) is to narrow the potential meanings of success' (p17-18).
Professor Robert Sapolsky in his course 'Introduction to Human Behavioral Biology' talks of complexity (see 47:20 - 48:30).




The Sum of the Parts?

Bruce Springsteen inducts U2 in the hall of fame:
'Uno, dos, tres, catorce. That translates as one, two, three, fourteen. That is the correct math for a rock and roll band. For in art and love and rock and roll, the whole had better equal much more than the sum of its parts.'


I think it highly likely that teaching, too, is much more than the sum of its parts!

Prof John O'Neill agrees, in 'Material fallacies of education research evidence and public policy advice' (p5),
'real classrooms are all about interactions among variables, and their effects. The author implicitly recognises this when he states that ‘a review of non-metaanalytic studies could lead to a richer and more nuanced statement of the evidence’ (p. 255). Similarly, he acknowledges that when different teaching methods or strategies are used together their combined effects may be much greater than their comparatively small effect measured in isolation' (p245).
Terry Wrigley (2018) in his critique of Hattie and the EEF,
'These forces and structures sit in various strata of reality, and through their energy, interactions and engagements with the environment, new possibilities of emergence arise: unlike the multiple ‘effect sizes’ of meta-meta-analysis, the sum can be more than the parts and qualitative change can arise' (p373).
Hattie's Interpretation of the Meta-analyses:
'No methodology of science is considered respectable in the research community when experiment design is unable to control a rampant blooming of subjectivity' Myburgh (2016, p10).
Meta-analysis, as a methodology, has been widely criticised for not representing the original studies faithfully.

Yet Hattie takes this interpretation problem to another level, as his methodology is META-meta-analysis, or MEGA-analysis (Snook et al, 2009).
'the methodology used (by Hattie), neglects the original theory that drives the primary studies it seeks to review' Myburgh (2016, p4).
Wecker et al (2016, p35) are also critical of this META-meta-analysis methodology,
'Hattie's work makes clear, a single meta-analysis cannot conclusively answer the question of the effectiveness of an influencing factor anyway. Therefore, meta-analyses should be updated when a significant number of additional primary studies have been added, but not in a second-stage meta-analysis, as in Hattie's work, but as a first-stage meta-analysis based on all existing primary studies.'
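A minimal sketch (my own illustration, not Wecker et al's data) of the difference they describe: averaging the summary effects of separate meta-analyses (second stage) ignores how many primary studies, and of what precision, lie behind each summary, whereas pooling all the primary studies with inverse-variance weights (first stage) does not:

import numpy as np

# Hypothetical primary studies: (effect size d, variance of d)
meta_A = [(0.10, 0.02), (0.15, 0.03), (0.12, 0.02), (0.08, 0.04)]  # four precise studies
meta_B = [(0.90, 0.20)]                                            # one imprecise study

def pooled(studies):
    """Fixed-effect (inverse-variance weighted) mean effect size."""
    d = np.array([s[0] for s in studies])
    w = 1 / np.array([s[1] for s in studies])
    return float((w * d).sum() / w.sum())

second_stage = np.mean([pooled(meta_A), pooled(meta_B)])  # average of averages, ~0.51
first_stage = pooled(meta_A + meta_B)                     # pool all primary studies, ~0.14
print(f"second-stage: {second_stage:.2f}, first-stage: {first_stage:.2f}")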
Myburgh (2016, p19) observes,
'Hattie assures the research community that he has arrived at sound conclusions based on his confidence that his mega-analysis of meta-analyses consists of quality studies, that the effect sizes faithfully represent a review of the original data and that he adequately explores moderators.'
Yet Hattie uses a wide range of meta-analyses which use TOTALLY different experimental designs, on different groups of people (university students, doctors, nurses, tradesmen, and sometimes high school students!), with vastly different measures of student achievement or often no measure of achievement at all!

As Professor Peter Blatchford points out about Hattie's VL,
'it is odd that so much weight is attached to studies that don't directly address the topic on which the conclusions are made' (p13).
Wecker et al (2016, p28) also note that Hattie mistakenly includes studies that do not measure academic performance.

DuPaul & Eckert (2012) comment on the broader issues of meta-analysis that have been covered in other sections. Regarding proper experiments:  
'randomised control trials are considered the scientific gold standard for evaluating treatment effects ... the lack of such studies in the school-based intervention literature is a significant concern' (p408).
Regarding comparing different studies: 
'It is difficult to compare effect size estimates across research design types. Not only are effect size estimates calculated differently for each research design, but there appear to be differences in the types of outcome measures used across designs' (p408).
Conflict of Interest:
'It is difficult to get a man to understand something, when his salary depends on his not understanding it.' Upton Sinclair
Professor Ewald Terhardt (2011) writes,
'A part of the criticism on Hattie condemns his close links to the New Zealand Government and is suspicious of his own economic interests in the spread of his assessment and training programme (asTTle). Similarly, he is accused of advertising elements of performance-related pay of teachers and he is being criticised for the use of asTTle as the administrative tool for scaling teacher performance. His neglect of social backgrounds, inequality, racism, etc., and issues of school structure is also held against him. This criticism is part of a negative attitude towards standards-based school reform in general. However, there is also criticism concerning Hattie’s conception of teaching and teachers. Hattie is accused of propagating a teacher-centred, highly directive form of classroom teaching, which is characterised essentially by constant performance assessments directed to the students and to teachers' (p434).
Hattie is once again promoting asTTle in his collaboration with Pearson, 'What Works In Education', but fails to divulge his financial interest in the program (p13).

Professor John O'Neill wrote a timely warning, in 2012, about Hattie's influence on Education policy and his financial interest in the solutions proposed:
The 'discourse seeks to portray the public sector as ‘ineffective, unresponsive, sloppy, risk-averse and innovation-resistant’ yet at the same time it promotes celebration of public sector 'heroes' of reform and new kinds of public sector 'excellence'. Relatedly, Mintrom (2000) has written persuasively in the American context, of the way in which ‘policy entrepreneurs’ position themselves politically to champion, shape and benefit from school reform discourses' (p2).
Poulsen (2014) is similarly suspicious of Hattie's economic interests,
'... Hattie has become so sure of his conclusions that he has organised a (worldwide?) course and consulting company that will spread the research results and train teachers to teach more effectively... Here he joins a well-known American tradition of promptly converting new knowledge into new business.

But there is, and will be, a difference between an open research endeavour and a profit-oriented course and consultancy project: the latter must work every time, while research constantly strives to falsify its own former truths' (p5, translated from Danish).
As are Sjøberg (2012) and Rømer (2016),
'... Hattie wants to build an educational position and practice; a project that is enhanced by the fact that Hattie and his consultants are very active in developing and selling educational concepts, for example, Visible Learning plus' (p1).
McKnight & Whitburn (2018), in 'Seven reasons to question the hegemony of Visible Learning', continue the debate on conflict of interest,
'The Visible Learning cult is not about teachers and students, but the Visible Learning brand. It is not dialogic, it brooks no argument and is sedimented and corralled by its trademarks and proprietary symbols. As we comply, we wonder if we should; assent and unease intertwined are the defining reactions to neoliberalism’s imperatives. Educators need to be alert to affect here, and to what it may mean... 
Teacher professionalism requires “working with colleagues in collaborative cultures of help and support as a way of using shared expertise to solve the on-going problems of professional practice, rather than engaging in joint work as a motivational device to implement the external mandates of others” (Hargreaves & Goodson, 1996, p. 20). Visible Learning demands the latter, and not coincidentally, also requires the consumption of the artefacts, including the professional development sessions, the books, the websites, and the videos. These almanacs, with their tips for teachers, offer quick fix solutions without addressing the crises of education in an increasingly complex world (Slee, 2011). In the case of the Visible Learning brand, certain teachers become licenced as fans, for example in Hattie’s collected case studies of impact (2016). 
Hattie has effected the shift from intrinsic accountability to extrinsic accountability, negating teachers’ awareness that “professional knowledge is provisional, unfinalizable, culturally produced and historically situated” (Locke, 2015, p. 82). It cannot meaningfully be reduced to a list of strategies' (p12-13).
McKnight & Whitburn (2018) are also concerned about Hattie's portrayal in the TV series Revolution School as,
'the potential saviour of public education and redeemer of recalcitrant teachers' (p2-3).
They also question the financial conflict of interest of Visible Learning,
'Where are the flows of capital around Visible Learning? Where is capital and what kinds of capital are accruing for those producing “Visible Learning” as a brand? What material and financial benefits flow on to teachers and students?' (p6).
Professor Scott Eacott (2018) discusses Hattie's "complicity" in the expanding commercial arrangements he has acquired, even though Hattie has denied the magnitude of such arrangements (p4).

Eacott then details the "substantial commercial arrangements" Hattie has with Corwin and ACEL (p5).

Professor Gene Glass, along with 20 other distinguished academics, also concurs with John O'Neill in 50 Myths and Lies That Threaten America's Public Schools: The Real Crisis in Education.
'The mythical failure of public education has been created and perpetuated in large part by political and economic interests that stand to gain from the destruction of the traditional system. There is an intentional misrepresentation of facts through a rapidly expanding variety of organizations and media that reach deep into the psyche of the nation's citizenry. These myths must be debunked. Our method of debunking these myths and lies is to argue against their logic, or to criticize the data supporting the myth, or to present more credible contradictory data' (p4).
Professor Pierre-Jérôme Bergeron in his voicEd interview also talks about Hattie's conflict of interest and Hattie's reluctance to address the details of his critics. Listen here - at 17min 46sec.

Hattie quotes Cohen (1985):
'New and revolutionary ideas in teaching will tend to be resisted rather than welcomed with open arms, because every successful teacher has a vested intellectual, social, and even financial interest in maintaining the status quo' (p252).
Given Hattie's collaboration with Pearson and Corwin (who paid Hattie for the intellectual rights to Visual Laboratories), and his financial interest in the solutions provided to schools, this is a remarkable double standard!

It is disappointing that Hattie once again criticises the easy target - the teacher.

Dr Jonathan Becker similarly criticises Marzano for a lack of independence, due to his financial arrangement with Promethean in his research.

Nick Rose goes into more detail regarding financial conflicts of interest and research.

Joshua Katz's YouTube presentation went viral regarding financial conflict of interest in Education.


Contradictions & Inconsistencies:

Hattie defines 'influence' as any effect on student achievement. But this is too vague and leads to many contradictions & inconsistencies. Hattie states (preface):
'The book is not about classroom life, and does not speak of its nuances.'
However, his influences consist of a large number of classroom nuances: behaviour, feedback, motivation, ability grouping, worked examples, problem-solving, micro teaching, teacher-student relationships, direct instruction, vocabulary programs, concept mapping, peer tutoring, play programs, time on task, simulations, calculators, computer-assisted instruction, etc.

Also in his preface he states, 

'It is not a book about what cannot be influenced in schools - thus critical discussions about class, poverty, resources in families, health in families, and nutrition are not included.'
Yet he has included these in his rankings:

Home environment, d = 0.57 rank 31
Socioeconomic status, d = 0.57 rank 32
Pre-term birth weight, d = 0.54 rank 38
Parental involvement, d = 0.51 rank 45
Drugs, d = 0.33 rank 81
Positive view of own ethnicity, d = 0.32 rank 82
Family structure, d = 0.17 rank 113
Diet, d = 0.12 rank 123
Welfare policies, d = -0.12 rank 135

In Hattie's 2012 update of VL he contradicts his 2009 preface,
'I could have written a book about school leaders, about society influences, about policies – and all are worthwhile – but my most immediate attention is more related to teachers and students: the daily life of teachers in preparing, starting, conducting, and evaluating lessons, and the daily life of students involved in learning' (preface).
Blichfeldt (2011) also discusses these contradictions,
'Possible interaction effects between variables such as subjects, age, class, economy, family resources, health and nutrition are not part of the study, and the variables are in any case only narrowly represented.'
Hattie also promotes Bereiter’s model of learning, 
'Knowledge building includes thinking of alternatives, thinking of criticisms, proposing experimental tests, deriving one object from another, proposing a problem, proposing a solution, and criticising the solution' (VL p27). 
'There needs to be a major shift, therefore, from an over-reliance on surface information (the first world) and a misplaced assumption that the goal of education is deep understanding or development of thinking skills (the second world), towards a balance of surface and deep learning leading to students more successfully constructing defensible theories of knowing and reality (the third world)' (p28).
Prof Jérôme Proulx, in his critical essay on John Hattie's work for the teaching of mathematics (written from a mathematics education perspective), explains the contradiction,
'Ironically, Hattie implicitly criticises himself if we rely on the affirmations at the beginning of his book, where he affirms the importance of the three types of learning in education...
So with this comment, Hattie discredits his own work, on which he bases himself to decide what represents good ways of teaching. Indeed, since the studies he has synthesised to draw his conclusions do not go in the direction of what he himself says represents good teaching, how can he rely on them to draw conclusions about teaching itself?'
Nilholm, Claes (2017), in 'Is John Hattie in hot water?' (translated from Swedish),
'Early in his book, he points out that the school has a broad assignment, but then he builds a model for teaching and learning that deals only with the knowledge mission (or rather on how knowledge performance can be improved)' (p3).
Teacher Training and Experienced Teachers: 

Hattie uses his own research on 65 teachers, comparing National Board Certified (NBC) with non-NBC teachers, and reports this in the last chapter of VL. However, Hattie uses this research for a very different purpose: to demonstrate the difference between expert and experienced teachers. He makes the arbitrary judgment that NBC-certified teachers are 'Experienced Experts' while non-NBC teachers are 'Experienced'. He does not use student achievement but rather the arbitrary criteria displayed in the graph below.

Podgursky (2001), in his critique, describes these criteria as 'nebulous standards'. He is also rather suspicious of Hattie's rationale for not using student achievement,

'It is not too much of an exaggeration to state that such measures have been cited as a cause of all of the nation’s considerable problems in educating our youth. . . . It is in their uses as measures of individual teacher effectiveness and quality that such measures are particularly inappropriate' (p2).
Hattie concludes that expert teachers (NBC) outperform Non-NBC teachers on almost every criterion (p260).



Harris and Sass (2009) report that the National Board for Professional Teaching Standards (NBPTS), which administers the NBC, generates around $600 million in fees each year (p4). Harris and Sass's much larger study, 'covering the universe of teachers and students in Florida for a four-year span' (p1), contradicts Hattie's conclusion: 'we find relatively little support for NBC as a signal of teacher effectiveness' (p25).

It is interesting that much of Hattie's consulting work with schools involves measuring teachers on the arbitrary categories listed in the graph; a significant omission is Teacher Subject Knowledge.

Yet, using the same type of research comparing NBC with non-NBC teachers, e.g. Hacke (2010), he uses the low effect size to conclude that Teacher Education is a DISASTER. See Hattie's slides from his 2008 Nuthall lecture.



'Garbage in, Gospel out' Dr Gary Smith (2014)

What has often been missed is that Hattie prefaced his book with significant doubt:
'I must emphasise these are clearly speculative' (p4).
Yet his rankings have taken on 'gospel' status due to: the major promotion by politicians, administrators and principals (it's in their interest, e.g. class size); very little contesting by teachers (they don't have the time, and who is going to challenge the principal?); and limited access to scholarly critiques - see Gary Davies' excellent blog on this.

Probability Statements?

The Common Language Effect Size (CLE) is a probability statistic commonly used to interpret the effect size. However, three peer reviews showed that Hattie calculated all of his CLEs incorrectly (he calculated probabilities that were negative or greater than 1!). As a result, he now claims the CLE statistic is not important and instead focuses on the interpretation that an effect size of d = 0.4 is the 'hinge point', claiming this is equivalent to a year's progress. However, there are significant problems with this interpretation too.
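For context, the standard common language effect size of McGraw and Wong (1992) is CLE = Φ(d/√2): the probability that a randomly chosen student from the treatment group outscores a randomly chosen control student, assuming normal distributions. A minimal sketch in Python (my own illustration, not Hattie's calculation) shows it always lies strictly between 0 and 1 for any finite d, which is why negative values or values above 1 can only be calculation errors:

import math

def cle(d):
    """McGraw & Wong common language effect size: Phi(d / sqrt(2))."""
    z = d / math.sqrt(2)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))   # standard normal CDF at z

for d in (0.0, 0.4, 1.0):
    print(f"d = {d:.1f}  ->  CLE = {cle(d):.2f}")    # 0.50, 0.61, 0.76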

Recently, however, Hattie appears to have adopted another highly doubtful interpretation of probability. In an interview with Hanne Knudsen (2017), 'John Hattie: I'm a statistician, I'm not a theoretician', Hattie states,
'The research studies in VL offer probability statements – there are higher probabilities of success when implementing the influences nearer the top than bottom of the chart' (p7).

'Materialists and madmen never have doubts' G. K. Chesterton

Interestingly, his reservation has changed to an authority and certainty that is at odds with the caution that ALL of the authors of his source studies recommend, e.g., on class size and ability grouping. That caution is due to the lack of quality studies, the inability to control variables, major differences in how achievement is measured and the many confounding variables. Also, there is significant critique by more than 40 scholars who identify the many errors that Hattie makes, from major calculation errors and excessive inference to misrepresenting studies - see References.

PEDAGOGY:

Nepper Larsen, Steen (2014), in 'Know thy impact – blind spots in John Hattie's evidence credo',
'future expectations that Hattie evidence-based credos and meta-studies can solve all the problems in learning institutions forget that not very many decades ago the absolute buzz-words (at least in major parts of Denmark and Germany) were "experiential learning," "experiential pedagogy," "critical reform pedagogy," and "pedagogy of resistance." Why do we ‘forget’ to talk with dignity and curiosity about the teachers’ and students’ experience (i.e. Erfahrung in German, erfaring in Danish), meaning "elaborated experience," a differentiation one cannot make in English? What has become of the enlightened citizen? Why and how is evidence ‘imperialising’ the right to define without any attempt to inherit and renew the past’s educational vocabulary? 
We have come to live in a time without profound historical awareness in which we must acknowledge that Hattie, as the world’s most influential and successful educational thinker, does not contribute to the renewal of the pedagogical vocabulary' (p5).
Confounding Variables:

Wrigley (2018) details a good example of confounding variables,
'One of the lowest-rated categories in the Toolkit is ‘teaching assistants’. Different social contexts, age groups and pupil needs are merged, but the most significant source research was led by Peter Blatchford, who chose to speak back. His research, in fact, pointed to classroom assistants working in conditions where no time was given for guidance from the teacher or for evaluation afterwards. It complained of classroom assistants always being assigned to lower attainers, thus depriving these children of help from a qualified teacher. Blatchford was not suggesting that classroom assistants are ineffective, but pointing to ways in which they could bring greater benefit. We should also recognise that classroom assistants serve a range of purposes, not all of which are measured through attainment. 
Clearly, placing classroom assistants near the bottom of the Toolkit’s league table, with a label of ‘low impact for high cost’, could result in schools and academy chains terminating their employment, especially in times of budget cuts. Given these problems, it is only by chance if aggregation brings sound results. Whilst some conclusions may be tactically appealing, for example the low ratings for government-approved practices such as performance pay and streaming/setting, it can be extremely misleading. Admittedly the Toolkit’s authors urge caution: 
The evidence it contains is a supplement to rather than a substitute for professional judgement: it provides no guaranteed solutions or quick fixes. . . We think that average impact elsewhere will be useful to schools in making a good ‘bet’ on what might be valuable, or may strike a note of caution when trying out something which has not worked so well in the past. (Higgins et al., 2012)
However, many busy teachers and heads will inevitably take the league table at face value and remain unaware of its many problems' (p370).
What's the Alternative to these META-Meta-Analyses?

Wrigley goes on to discuss the alternatives, 
'In terms of providing guidance to practitioners and policymakers, we should note Pawson’s proposals for a ‘realist synthesis’ of research (2006: 78–94), which aims to develop convincing theories of causation. Rather than simplify the original research and turn out an average effect size, Pawson advocates an enriched understanding of the ‘subjects’ of each intervention, the causal theories proposed by the original researchers, the quality of outcomes and the adequacy of measures, processes and blockages. Chris Brown and colleagues examine diverse examples of interaction and participation involving researchers, teachers and policymakers (Brown, 2014, 2015). Similarly, Lingenfelter (2016: 118 ff) provides a useful summary of Bryk’s development of networked improvement communities and their collaborative development of knowledge, structures and action (Bryk et al., 2010)' (p372)...
'The problem comes from an inflated and generalised role for statistical studies, a lack of awareness and self-awareness, and the omissions and linearities that arise in order to create an aura of science, order and regularity. The attempt to make learning visible (as Hattie puts it) eclipses older understandings of education as Bildung and pedagogy (both words carrying the sense of human formation). It serves to make invisible the deep aims of education, in terms of what kind of human beings we are forming and what kind of future we hope for' (p374).
