Problem Based Learning

Effect Size d = 0.15 (Hattie's Rank = 118)
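For context, Hattie's effect size is Cohen's d, the difference between the means of an intervention group and a control group divided by their pooled standard deviation. A minimal sketch of the calculation (the test scores below are invented for illustration, not data from any study Hattie cites):

```python
import statistics

def cohens_d(treatment, control):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = statistics.stdev(treatment), statistics.stdev(control)
    # Pooled standard deviation across both groups
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled

# Hypothetical scores: a small d means the two groups barely differ
treatment = [52, 55, 58, 61, 64]
control = [50, 53, 57, 60, 63]
print(round(cohens_d(treatment, control), 2))
```

An effect of d = 0.15 therefore means the PBL and comparison groups were, on average, separated by only about a seventh of a standard deviation.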

At the 2005 ACER Conference (Hattie's lecture here and slides here), he called Problem Based Learning a disaster!

He repeated the claim in his 2008 Nuthall lecture.

It is ironic that Professor Gene Glass, who invented the meta-analysis method, co-wrote a book contradicting Hattie: '50 Myths and Lies That Threaten America's Public Schools: The Real Crisis in Education'.

Gene Glass's Myth #50: Schools are wasting their time trying to teach problem-solving, creativity, and general thinking skills; they would be better off teaching the facts students need to succeed in school and later in life.

Professor Glass says,
'It should come as no surprise that when teachers focus on multiple ways of knowing and celebrate the wealth of knowledge their students bring to the classroom, collaborative environments spring up. In these environments, students and teachers participate in meaningful conversation and dialogue that remain a necessary component in teaching creativity and problem-solving. It is through conversation, not didactic instruction, that students are able to articulate what they know and how they know it, while incorporating the knowledge of their peers and their teacher to further their own understanding.'
The high-performing Finnish system also seems to contradict Hattie's result here. Director General Pasi Sahlberg outlines in Finnish Lessons 2.0 a major reason Finland performs well in the PISA rankings:
'both teacher education and mathematics curriculum in Finland have a strong focus on problem solving, thereby linking mathematics to the real world. Mathematics tasks on PISA tests are based on problem solving and using mathematics in new situations rather than showing mastery of curriculum and syllabi' (p77).
Dr Mandy Lupton analysed in detail the research that Hattie used here.

Lupton concludes,
'As Hattie’s synthesis is aimed at informing the K-12 sector, it is hard to understand why Hattie included so many higher education studies in his synthesis. In particular, the medical PBL model is so distinct that it would be difficult to see how any K-12 teacher could draw conclusions from these studies for their own practice. 
The studies have different effect sizes for different contexts and different levels of schooling, thus averaging these into one metric is meaningless.
I was able to obtain all eight sources used by Hattie in his synthesis. The studies Hattie includes are almost all studies of medical education. Of the eight studies, five investigate higher education medical curriculum (i.e. training medical doctors) (Albanese & Mitchell, 1993; Dochy, Segers, Van den Bossche, & Gijbels, 2003; Gijbels, Dochy, Van den Bossche, & Segers, 2005; Vernon & Blake, 1993). Of the reminder one is in a higher education nursing context (Newman, 2004), and one is in a range of higher education disciplines (Walker & Leary, 2009).
Only one study (Haas, 2005) is in a school context (secondary school). Haas’ study examines a range of teaching methods for secondary algebra, where PBL is addressed along with other methods such as cooperative learning, communication and study skills, and direct instruction.'
Claes Nilholm (2013) in It's time to critically review John Hattie confirms Lupton's analysis,
'Hattie reports seven meta-analyses that together provide weak support for problem-based learning. Most of these are meta-analyses of studies of problem-based learning at university level. However, a meta-analysis of problem-based learning in mathematics (algebra) at high school shows good effects. Thus, if you go by the average value that Hattie reports, you may end up with completely incorrect conclusions and miss that problem-based learning can be effective in the right context. 
Hattie's major failure is to report summary measures from meta-analyses without taking into account so-called moderating factors. Working methods can work better for a particular subject, a certain grade, some students and so on. Hattie believes that the significance of such moderating factors is less than one might think. I would argue that they are often very noticeable, as in the examples I reported. 
Unfortunately, it is my experience that compilations of research intended to be useful in schools are often more confusing than informative. It is common practice to draw far-reaching (and sometimes wrong) conclusions from the research and to underestimate the difficulty of transferring research results to teaching practice. Who is responsible for the consequences of that?' (p3).
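Nilholm's point about averages masking moderating factors can be shown with a toy calculation. The effect sizes below are invented for illustration (they are not the actual values from the studies Hattie used); the sketch only shows how a single pooled mean can bury a strong effect in one context:

```python
# Hypothetical per-context effect sizes (invented for illustration)
studies = {
    "higher-ed medical (1)": 0.08,
    "higher-ed medical (2)": 0.05,
    "higher-ed medical (3)": 0.10,
    "higher-ed nursing": 0.12,
    "secondary algebra": 0.52,  # the lone school-level result
}

# A single pooled average hides the strong school-level effect
average = sum(studies.values()) / len(studies)
print(f"pooled average d = {average:.2f}")
for context, d in studies.items():
    print(f"{context}: d = {d:.2f}")
```

On these made-up numbers the pooled average sits well under Hattie's 0.40 "hinge point" even though the one K-12 study clears it comfortably, which is exactly the kind of conclusion-by-average that Lupton and Nilholm warn against.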

But How is Achievement Measured?

Hanne Knudsen's interview with Hattie (John Hattie: I'm a statistician, I'm not a theoretician) is interesting on this point. Hattie says:
'If you are doing surface learning it works quite differently than if you are doing deep learning. One example is problem-based learning, which comes out with a very low effect size. The reason for that is that problem-based learning only works for deep learning; it doesn’t work for surface learning. And 90% of the schools introduce problem-based learning for surface learning, so of course it doesn’t work. Learning means moving from surface to deep to transfer' (p6).
Proulx (2017), Critical essay on the work of John Hattie for teaching mathematics: an entry from mathematics education, identifies the inherent problem here:
'... ironically, Hattie implicitly criticizes himself if we rely on the affirmations at the beginning of his book, since there he affirms the importance of all three types of learning in education:'
He quotes Hattie from VL,
'But the task of teaching and learning best comes together when we attend to all three levels: ideas, thinking, and constructing' (VL, p. 26).
'It is critical to note that the claim is not that surface knowledge is necessarily bad and that deep knowledge is essentially good. Instead, the claim is that it is important to have the right balance: you need surface to have deep; and you need to have surface and deep knowledge and understanding in a context or set of domain knowledge. The process of learning is a journey from ideas to understanding to constructing and onwards' (VL, p. 29).
From this quote, Proulx goes on to say,
'So with this comment, Hattie discredits the very work on which he bases his decisions about what represents good ways to teach. Indeed, since the studies he synthesized to draw his conclusions do not align with what he himself says represents good teaching, how can he rely on them to draw conclusions about teaching itself?'
Thibault (2017), Is John Hattie's Visible Learning so visible?, states (in translation):
'some comparisons seem hazardous to me: contributions by the pupil, by the home, by the school, by the teacher, by the curriculum and by teaching approaches. Indeed, it may seem strange to compare the effect of teaching strategies in problem solving with the number of students per class, or with the number of hours of television watched by students.'
Yelle et al. (2016), What is visible from learning by problematization: a critical reading of John Hattie's work, analyse problem-based learning in more detail:
'It is therefore necessary to define theoretically the main concepts under study and to ensure that precise and unambiguous criteria for inclusion and exclusion are established ... 
It should be remembered that meta-analyses aggregate multidisciplinary research, in addition to confounding methodologies of all kinds. We also want to come back to the confusion about the labels Hattie uses. The tripartite division of teaching and learning methods by problematization seems ill-justified, and therefore artificial. The interpretation of these indices therefore requires caution, but the reading that we have made allows us to favor a teaching-learning approach focused on solving problems to achieve our educational goals.
In education, if a researcher distinguishes, for example, project-based teaching, co-operative work and teamwork, while other researchers do not distinguish or delimit them otherwise, comparing these results will be difficult. It will also be difficult to locate and rigorously filter the results that must be included (or not included) in the meta-analysis. Finally, it will be impossible to know what the averages would be.
It should also be noted that Hattie analyzes these techniques from a closed perspective. Thus, he deals with problem-based learning, for example, without taking into account the other methods used at the same time to support learning: the use of one method does not exclude the use of others in all circumstances. 
At the end of the day, it seems to us that Hattie's aggregated statistics can broaden perspectives and contribute to the synthesis of research in education, but do not invalidate approaches below the 0.40 indicator. 
Therefore, can we describe Hattie's work as a Holy Grail for education? Do these data offer easy, objective and absolute answers because they rely on numbers? 
The answer is no, of course.'
Bianca Hewes gives detailed reflections on Problem and Project Based Learning and Hattie's model here.
