High Impact Strategies

High Impact Teaching Strategies (HITs).

Gill Callister, of the Department of Education and Training in Victoria, said of the promotion of the 10 HITs,
"I wanted to embed across all areas of the Department a persistent sense of curiosity, an overwhelming desire to analyse, evaluate and seek continuous improvement. 
After all, if we're spending tax-payer dollars on policies, programs and initiatives that have the potential to make or break the future of young people in our state, the very least we can do is evaluate what's working ..." (Australian Teacher Magazine, Oct 2018, p. 22).
The HITs are mostly based on Hattie's book Visible Learning (2009)  – goal setting, feedback, worked examples, multiple exposures, explicit teaching, questioning, metacognitive strategies, structuring lessons, collaborative learning and differentiated teaching.

I'm not sure if the Education Dept architects of HITs have read the preface in Hattie's book,
"It is not a book about classroom life, and does not speak to the nuances and details of what happens within classrooms."
The Education Dept has, however, made some attempt to clarify what each strategy means, e.g., breaking feedback down into specific components. But these components are not consistent with the research that Hattie cites – see Feedback for more detail.

Also, no-one asked the 70,000+ teachers in the state for their opinion – if you want to give your view, take the survey here.

Whilst the Ed Dept caveats that there is still room for professional judgement, most schools have taken the easy path and mandated that Professional Development must now align with these strategies. Lesson observations must, in effect, 'tick off' these strategies, which means that using other, innovative strategies is not encouraged.

It is ironic that Callister quotes Michael Barber on this issue,
"...any decision requires more than the evidence. It requires judgement, analysis and ethics too".
A number of academics comment on the problem of this "evidence-based" agenda dominating teacher professional experience/judgement and call for the value of these to be reclaimed – Ladwig (2019), Qvortrup (2019), Eacott (2017) and McKnight & Whitburn (2018).

Also, the focus on these HITs means teachers will probably miss other important developments. E.g., in 2018, I was introduced to 'cognitive load theory'. Dylan Wiliam says this is 'the single most important thing for teachers to know'.

Also, in a 2019 interview, Hattie is now saying the "jigsaw" method is the most effective teaching method. Yet this did not appear in his 2009 book, the 2012 update, nor the HITs!

Whilst I can see the Education Dept is trying to improve teaching statewide, I don't think this will do that. Teaching is more complex and needs more thorough analysis and better research – see my personal anecdotes below.

There has been widespread criticism of similar programs in Denmark - Bjerre & Møller (2017),
"it is doubtful whether there is any evidence at all for the initiatives being taken...
...Hattie-based concepts and manuals indicate the path to learning. One should therefore discuss the use of Hattie critically and constructively in the efforts to develop the pedagogy and the school..."
Daniel Willingham, in his 2012 book "When Can You Trust the Experts?", recommends that research-oriented teachers are best suited to scrutinise such initiatives and that the Teacher Union should be their organising body.

Professor Gunn Imsen (2011),
'Hattie's results cannot serve as instructions telling teachers how they should teach. The results are syntheses of syntheses; they have lost all their nuances about context, age, student assumptions, culture and, not least, the didactic dimensions of the subjects. They are therefore difficult, not to say impossible, to generalise from. The results are not a GPS you can navigate by from high altitude down to ground level.
The consequence of a literal use of Hattie's results will be the standardization of teaching methods. It is science that will give the cookbook recipe for teaching, and everyone should do it the same way ... This is not only unrealistic, it is also a bad attack on teachers' professionalism and a mistrust of their judgment.'


For the last 15 years, Hattie used one PhD study for Worked Examples (effect size d = 0.57). In early 2018, he added a second study, which reported a low effect size of d = 0.16. The combination of the two studies now gives a much lower average of d = 0.37, which is below Hattie's threshold of d = 0.40.

So Worked Examples is not a HIT anymore!
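The arithmetic behind that drop can be sketched in a few lines of Python. Note the assumption that Hattie's summary figure is a simple unweighted mean of the two estimates; he does not publish his weighting:

```python
# Effect-size estimates for Worked Examples cited by Hattie:
# the original PhD study (d = 0.57) and the 2018 addition (d = 0.16).
worked_examples = [0.57, 0.16]

# Simple unweighted mean of the two estimates (assumed, not documented)
average_d = sum(worked_examples) / len(worked_examples)  # 0.365, reported as 0.37

# Hattie's "hinge point": influences below d = 0.40 are classed as low impact
print(f"average d = {average_d:.3f}, below hinge: {average_d < 0.40}")
# prints: average d = 0.365, below hinge: True
```

A single weak second study halving the headline figure illustrates how fragile a ranking built on so few underlying studies can be.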

Also, the ranking of these strategies is based on dodgy and flawed research. Apart from all the problems of comparing effect sizes from different studies, there is a significant misrepresentation of studies – for example,

The goal-setting effect is largely based on Hattie's 'self-report grades' influence – most of these studies do not measure goal setting at all – see Dr. Kristen Dicerbo's analysis here.

The feedback effect is based on Hattie's synthesis. But many academics have shown that the types of feedback are not consistent across these studies, nor is the person who gives the feedback (teachers, peers, parents, the students themselves). Hattie assumes all these major differences are the same and jumbles all of the effect sizes together.

Ruiz-Primo & Li (2013), in Examining formative feedback in the classroom context: New research perspectives, looked at most of the studies Hattie used (around 9,000) and found only 238 were of suitable quality. They conclude,
'Clearly, the range of feedback definitions is wide. This leads to the question of how it is possible to identify patterns of results from such a small number of studies clustered around each definition. Or how is it possible to argue for feedback effects without considering the nuances and differences among the studies?' (p. 217).
'... much less is known and understood about formative feedback practices in the classrooms than we had expected' (p. 229).
David Didau, in his excellent blog on feedback, looks at the key studies used by Hattie and the EEF, confirms Wiliam's analysis, and also shows feedback is complicated and nuanced.

Schulmeister & Loviscach (2014) Errors in John Hattie’s “Visible Learning”.
'Even where he has grouped meta-analyses correctly by their independent variables such as instructional interventions, Hattie has in many cases mixed apples and oranges concerning the dependent variables. In some groupings, however, both the independent and the dependent variables do not match easily. For instance, in the group “feedback”, a meta-analysis using music to reinforce behavior is grouped with other studies using instructional interventions that are intended to elicit effects on cognitive processes.'
'Many of the meta-analyses do not really match the same effect group (i.e., the influence) in which Hattie refers to them. For instance, in the group “feedback”, studies investigating the effect of student feedback on teachers are mixed with studies that examine the effect of teacher feedback on students.'
Even meta-cognition is not straightforward – see Greg Ashman’s analysis here.

In my experience, some of the most engaging, open-ended, motivating lessons I've used are from Maths300 or the Maths Task Centre.

They involve interesting investigations which lead on to mathematical applications and theory like algebra and probability.

Yet, according to Hattie, the strategies these activities use – problem-solving, inquiry, subject knowledge, student control, visual representations, and simulations – are low-impact strategies.

Then again there is a whole host of other aspects of schooling that are important, like behaviour management. Greg Ashman in his blog ‘Australia’s Secret Crisis’ argues that if there are not proper behaviour management plans in place then,
'none of these strategies are of any use unless teachers are supported by a robust whole-school behaviour policy with active leadership and graduated levels of intervention.'
With Evidence Like This, Who Needs Your Opinion?

In spite of these significant errors, Hattie uses trite slogans like, 
'know thy impact' or
'statements without evidence are just opinions'.
This belittles teacher experience and opinion and raises his so-called evidence and rankings above them.

Hattie continues with this type of polemic,
'When teachers claim that they are having a positive effect on achievement or when a policy improves achievement, this is almost always a trivial claim: Virtually everything works. One only needs a pulse and we can improve achievement.' (VL p. 16).
Nick Rose and Susanna Eriksson-Lee in their excellent paper 'Putting evidence to work', quote a more provocative slogan from Kevan Collins, Chief Executive of the Education Endowment Foundation (EEF),
'if you're not using evidence to inform your decisions, you must be using prejudice.'
McKnight & Whitburn (2018), in Seven reasons to question the hegemony of Visible Learning, warn that such slogans court fascism,
'We argue that potent and colonising metaphors such as Visible Learning should be accompanied by an openness to critique and to negotiation, so that they do not become tools of a fascistic education (Pinar, 2011), or sticks with which to beat teachers' (p. 5).
'Classic democratic professionalism (Sachs, 1997, Locke, 2015) is about expertise, autonomy and altruism. Visible Learning undermines these concepts. Under the surveillance of Visible Learning, teachers are not experts unless they subscribe to Visible Learning; they are no longer autonomous, but must comply rather than form their own judgements, and implementation is more important than concern. Visible Learning is about commands from on high, not the collegial dialogue that has underpinned these tenets of teacher professionalism for decades. Yet in classic neoliberal fashion, it sells teachers’ knowledge back to them along with the illusion that this will make them ever more professional' (p. 13).
To resist the hegemony of Visible Learning they propose a number of principles,
'We resist corporate branding in our school. 
We value teachers’ professional knowledge above anything. 
We have a critical approach to educational research. 
We use Visible Learning selectively and strategically; our teachers are engaging in the ongoing and dialogic process of determining “what works” for them and their students. 
Context is more important for us than compliance. 
We consider our understandings of learning to be much more sophisticated than those of Visible Learning' (p. 21).
Larsen (2014) Know thy impact – blind spots in John Hattie’s evidence credo concurs,
'... the advantage of John Hattie’s evidence credo is that is so banal, mundane and trivial that even educational planners and economists can understand it' (p. 11).
In his interview with Hanne Knudsen (2017) John Hattie: I’m a statistician, I’m not a theoretician, Hattie seems to have retreated from this polemic,
'Evidence can also be related to experience – and the extensive experience of many teachers is legitimate evidence – to be contested, to be examined, and to be evaluated – in terms of the best impact on the learning lives of students. 
When there are differences between the evidence from the research and from experience, then there is a need for examination, for reflection, for seeking more avenues of evidence – and I want this to be via the effects on the students' (p. 7).
Yet earlier in the same interview, he seems to contradict this,
'We hire people to deliver it and only one in five passes. That is because almost every teacher wants to get up and talk about their story, their anecdotes and their classrooms. We will not allow that, because as soon as you allow that, you legitimise every teacher in the room talking about their war stories, their views, their kids' (p. 3).
Then There is Teacher Passion:

In Hattie's 2012 update to VL he states, 

'Throughout Visible Learning, I constantly came across the importance of ‘passion’; as a measurement person, it bothered me that it was a difficult notion to measure – particularly when it was often so obvious' (preface).
Passion is not included in Hattie's list of influences, yet he raises it as one of the most important influences!

It was refreshing to read some peer reviews which address these issues –

Nielsen & Klitmøller (2017),
'Hattie's synthesis is problematic because it gives the impression that it says all there is to say about a particular educational phenomenon, but we try to show this is far from the case' (p. 10).
Blichfeldt (2011) referring to Visible Learning,
'Basically, evidence-based medicine was widely defined based on three foundations: research findings, clinical expertise and patient values and preferences. This broad understanding was embodied in professional definitions of the concept of evidence both within medicine (Sackett et al. 2000 ) and psychology ( Levant 2005)... 
This is an understanding of evidence that allows – and requires – different methodological perspectives to complement each other. 
Such a dynamic and complementary methodology as the basis for evidence seems to have been narrowed in recent years... 
Estimated numbers from large surveys and samples must be tested and discussed against the professional experiences that are done locally.'
Professor Dylan Wiliam explains the problem in 'Inside the Black Box' (2001),
'Teachers will not take up attractive sounding ideas, albeit based on extensive research, if these are presented as general principles which leave entirely to them the task of translating them into everyday practice - their classroom lives are too busy and too fragile for this to be possible for all but an outstanding few. What they need is a variety of living examples of implementation, by teachers with whom they can identify and from whom they can both derive conviction and confidence that they can do better, and see concrete examples of what doing better means in practice' (p. 10).
Then commenting on research again, Wiliam says,
'despite the many and varied reports of successful innovations, they fail to give clear accounts on one or other of the important details, for example about the actual classroom methods used, or about the motivation and experience of the teachers, or about the nature of the tests used as measures of success, or about the outlooks and expectations of the pupils involved' (p. 12).
Nilholm, Claes (2017), Is John Hattie in hot water?
'... he [Hattie] does not give reasonable answers as to how his theses are to be translated into practical work, and I have definitely not seen any study that critically examines what happens when municipalities and schools try to base their work on Hattie's work' (p. 3).

Hattie attacks the easy target - The Teacher:

Hattie summarises his book, 
'the devil in this story is not the negative, criminal, and incompetent teacher, but the average, let's get through the curricula… teacher' (p. 258).
This is an amazing critique and represents Hattie's focus throughout the book. He seems oblivious to systemic and political influences, and all too eager to focus the blame on the easy target – the teacher.

Professor Gunn Imsen is also troubled by these comments,
'These are strong and sensational words.'
Yet Hattie says, 
'Educating is more than teaching people to think – it is also teaching people things that are worth learning' (p. 27).
This is the realm of politicians and senior bureaucrats, who mostly decide what is worth learning by designing and enforcing a curriculum. 

So if following the curriculum is the issue, why not focus on those who decide it? They are most often not the teachers!

In Hattie's jurisdiction, the state of Victoria, Australia, teachers can be dismissed for not teaching the state-defined curriculum – click here for examples.

Larsen (2014) Know thy impact – blind spots in John Hattie’s evidence credo.
'the evidence concept and evidence claims are attuned to the dominating and overall neoliberal paradigm, craving value for money and effective production of the future labour force ...  
Surprisingly, the teachers’ union, the powerful state planners and the servile educationalists can all use (and abuse) Hattie’s books and messages. At least in Denmark the teachers use Hattie’s concepts and arguments to state that they are by far the most important and decisive learning factor (agent). At the very same time the educational planners and politicians state that it is the teachers’ fault if the learning results are not what they were expected to be, i.e., if they are unable to fit the labour market demands or produce scores as high as in other countries (Commensurability and competitiveness have established themselves as a profane c-tandem god.)
Teachers get identified as the primary and indispensable learning factor and thereby as a public, expensive, and untrustworthy potential enemy. This amounts to scapegoat projection par excellence... ' (p. 11).
Students' High Impact Strategies:

Azul Terronez surveyed 26,000 students and found great teachers:

1. build positive relationships.
2. are chilled.
3. are good listeners.
4. love to learn.
5. know kids have a life outside of school.
6. notice if kids struggle.
7. sing!
8. are humble and take risks.

See his TEDx talk here.

Teachers' Opinions:

A study on collecting teacher opinion on strategies - here.

Personal Anecdotes:

I, like most other teachers, am always looking for ways to improve my teaching. I was heavily influenced by Ian Lowe during my university training. He showed us how to teach maths using experiments, real-life examples, simulation and investigation. Ian still runs a great program at the Maths Association of Victoria called 'Teach maths for understanding.'

Our school was forced to focus on Hattie's work back in 2010. We were one of the highest-achieving government schools in the state, yet Hattie's work dominated the discussions. The excellent and experienced teachers who, over a period of 20 years, had raised the school to that level were never canvassed for their opinions.

The Maths faculty were accused of being 'old fashioned', as we mainly taught senior classes using worked examples. Back then, 'worked examples' was a middle-of-the-road strategy in Hattie's rankings (it still is!). 

Since the school was focussed on Hattie, our PD had to be linked to his influences.

I looked at Hattie's rankings and the top strategy was 'self-report grades' with a huge effect size of 1.44 while 'worked examples' was only 0.57.

I thought this would be pretty easy, so I tried it in my senior classes. It did not improve the kids' results!

I then went against Hattie's list and tried Ian Lowe's approach in my Year 11 and 12 classes. I created background videos explaining where each concept came from, e.g., calculus. I then gathered as many activities as I could that involved those concepts and used them. Whilst the kids seemed engaged, their results did not improve. I was puzzled as to why.

It is only recently that I think I've found an answer, via Ollie Lovell's blogs. In particular, I read about his approach with his Year 12 class. He used Sweller's theory of 'cognitive load' and the evidence for 'retrieval practice', pairing worked examples with retrieval practice. 

When I read this I realised my mistake: I was creating too much cognitive load for the kids with my background information and activities. I should have kept it simpler.

The problem with Ollie's approach, though, is that it is pretty boring for the kids. But with the focus on Year 12 exam results, I guess it has to be this way?

I still use Ian Lowe's approach in junior years as the most important thing, in my opinion, is to engage the kids and give them some practical feel, experience and confidence in the concepts.

As regards what I think are the important strategies in a school: I will list them, but the caveat is context – different contexts will require adjustments.

The kids have to bring to school some sort of motivation to learn themselves.

Their peer group has a major effect, so surrounding kids with a lot of other kids who have a positive outlook on learning is significant.

As mentioned by Greg Ashman above, a strong behavioural school policy that is consistently enforced.

As a maths teacher, create activities that build confidence, and be positive and encouraging.

Build respect by knowing your subject.

Practice, practice, practice at senior levels.

For maths use simulations, visualisations and activities at junior levels.

Give kids choices.

Many of these contradict Hattie's rankings, but I guess 'it's just my opinion'.

Mary Hudson's experience in a New York High School, Public Education’s Dirty Secret,
"Throughout Washington Irving there was an ethos of hostile resistance. Those who wanted to learn were prevented from doing so. Anyone who “cooperated with the system” was bullied. No homework was done. Students said they couldn’t do it because if textbooks were found in their backpacks, the offending students would be beaten up. This did not appear to be an idle threat. Too many students told their teachers the same thing. There were certainly precious few books being brought home... 
I tried everything imaginable to overcome student resistance. Nothing worked... 
Once I was even reprimanded for calmly taking my own cellphone from a girl who’d held on to it for half an hour, refusing all my requests to hand it back. The administration was consistently on the side of the student. The teacher was the fall guy, every time."
Victorian Education Dept promotion of HITs –

webinar on metacognitive strategies.

