What works best

This page has now been revised (May 2010) in the light of John Hattie's recent, apparently definitive, work Visible Learning: a synthesis of over 800 meta-analyses relating to achievement (London: Routledge, 2009). The first thing to change has been the title, which used to be "What works and what doesn't". Hattie points out that in education most things work, more or less. The real questions are about which approaches work best, and therefore best repay the effort invested.

This site is mainly about your own individual practice as a teacher, and as such it tries to take into account your particular circumstances, such as the students you teach (assumed largely to be over school-age), your subject, and your setting (school, college, university, work-based or informal adult education).

It recognises that it is difficult and even unreasonable to generalise, but we ought to set alongside this the results of very generalised research in the form of meta-analyses. Meta-analysis is more commonly found in medicine and epidemiology than in education, and it has its limitations, but it can also make very strong points.

It is simply the technique of searching for all the existing research reports on a particular issue, and combining them to get an overall result. A moment's thought, particularly if you know anything about research methods, will tell you that this is fraught with problems. Has the issue been defined in exactly the same way by all researchers? If not, how do you adjust the results? If the research is on the interaction between two variables (say, the use of ICT with disaffected learners), what category do you put it in? Do you rate the validity and reliability of the findings, or just assume that if it has been published, it must be right? And what about all the unpublished research which did not make it because it questioned the conventional wisdom of the day? And what about the research which produced negative results which no-one ever bothered to publish, despite the inestimable value of knowing what is not the case as much as knowing what is? And so on.

However, its proponents argue that many of these problems cancel each other out when you take a large enough research base, and that others can be mitigated by the choice of the meta-assessment tool.
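
As a minimal sketch of what "combining them" can mean in practice, here is the textbook fixed-effect calculation in Python: each study's effect size is weighted by the inverse of its variance, so that larger, more precise studies count for more. The study figures are invented purely for illustration, and real procedures (including Hattie's) involve considerably more than this.

    import math

    # Each study reports an effect size d and the variance of that estimate.
    # These figures are invented purely for illustration.
    studies = [
        {"d": 0.55, "var": 0.04},
        {"d": 0.30, "var": 0.02},
        {"d": 0.72, "var": 0.09},
    ]

    # Fixed-effect pooling: weight each study by the inverse of its variance,
    # so more precise studies pull the pooled estimate harder.
    weights = [1.0 / s["var"] for s in studies]
    pooled_d = sum(w * s["d"] for w, s in zip(weights, studies)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))

    print(f"pooled d = {pooled_d:.2f} (standard error {pooled_se:.2f})")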

The most prominent meta-meta-analyst in education is probably John Hattie, whose work draws on "a total of about 800 meta-analyses, which encompassed 52,637 studies, and provided 146,142 effect sizes [...] these studies are based on many millions of students" (Hattie, 2009: 15). Note, however, that the evidence is collected across all phases of education (primary and secondary, and some post-compulsory) but dominated by children in the school sectors. Some of the issues, therefore, pose more questions than answers for those of us more interested in the post-16 sectors. In the discussion of low "developmental effects" below, for example, they are attributed simply to a child growing up over the course of a year. Clearly that accounts for far more change between the ages of 6 and 7 than between 16 and 17, or 26 and 27.

Hattie's common denominator

In common with standard meta-analysis practice, Hattie's bottom line is the "effect size". An effect size of "1" indicates that a particular approach to teaching or technique advanced the learning of the students in the study by one standard deviation above the mean. OK, that's rather technical:

    An effect-size of d=1.0 indicates an increase of one standard deviation... A one standard deviation increase is typically associated with advancing children's achievement by two to three years*, improving the rate of learning by 50%, or a correlation between some variable (e.g., amount of homework) and achievement of approximately r=0.50. When implementing a new program, an effect-size of 1.0 would mean that, on average, students receiving that treatment would exceed 84% of students not receiving that treatment.

    Cohen (1988) argued that an effect size of d=1.0 should be regarded as a large, blatantly obvious, and grossly perceptible difference [such as] the difference between a person at 5'3" (160 cm) and 6'0" (183 cm), which would be a difference visible to the naked eye.

Hattie, 2009: 7-8 (my emphasis)

Notes: Effect-size is commonly expressed as d. Correlation is commonly expressed as r.
*  In 1999 (p.4), Hattie only claimed achievement was advanced by one year. I have no idea why he changed his mind.
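
For readers who want to see the arithmetic, here is a small worked example in Python. The scores are invented; d is computed as Cohen's d with a pooled standard deviation, and the 84% figure simply falls out of the normal distribution.

    import math
    import statistics

    def normal_cdf(x):
        """Standard normal cumulative distribution function."""
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    # Invented test scores for a treated and an untreated group.
    treated   = [72, 75, 80, 78, 74, 77, 81, 79]
    untreated = [69, 72, 77, 75, 71, 74, 78, 76]

    # Cohen's d: difference in means divided by the pooled standard deviation
    # (a simple average of the variances, since the groups are the same size).
    s1, s2 = statistics.stdev(treated), statistics.stdev(untreated)
    pooled_sd = math.sqrt((s1 ** 2 + s2 ** 2) / 2)
    d = (statistics.mean(treated) - statistics.mean(untreated)) / pooled_sd
    print(f"d = {d:.2f}")                           # ~0.96, close to 1

    # With d=1.0, the average treated student sits one standard deviation above
    # the untreated mean, i.e. above about 84% of untreated students:
    print(f"{normal_cdf(1.0):.0%}")                 # 84%

    # The link to correlation uses the standard conversion d = 2r/sqrt(1-r^2):
    r = 0.5
    print(f"{2 * r / math.sqrt(1 - r ** 2):.2f}")   # 1.15, i.e. roughly d=1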

So an effect size of "1" is very good indeed, and correspondingly rare: the chart below reports that only about 75 individual studies reached that level. By my counting, Hattie's more recent table of the expanded set of meta-analyses themselves shows only 21 (out of 800+) with mean effect-sizes over 1 (see Hattie, 2009: fig. 2.2, p.16).

[Chart based on Hattie, 2003]

The hinge point, for Hattie, is the average effect size of d=0.40. He uses a "barometer" chart or gauge on which he can impose a needle in the appropriate position (diagram based on Hattie, 2009: fig. 2.4, p.19 et passim):

Reverse effects are self-explanatory: below 0.0, the intervention actually sets learning back.

Developmental effects are 0.0 to 0.15: the improvement a child may be expected to show in a year simply through growing up, without any schooling. (These levels are determined with reference to countries with little or no schooling.)

Teacher effects: "Teachers typically can attain d=0.20 to d=0.40 growth per year, and this can be considered average" (p.17), although subject to a lot of variation.

Desired effects are those above d=0.40 which are attributable to the specific interventions or methods being researched.
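
For the record, the zones reduce to a simple threshold check; here is a trivial Python sketch (the function is mine, and I have taken 0.15 as the boundary between developmental and teacher effects, although the source leaves the 0.15-0.20 band slightly fuzzy):

    def barometer_zone(d):
        """Place an effect size in one of Hattie's 'barometer' zones."""
        if d < 0.0:
            return "reverse"
        if d < 0.15:
            return "developmental"
        if d < 0.40:
            return "teacher"
        return "desired"

    # Effect sizes taken from the table further down this page:
    for label, d in [("Feedback", 1.13), ("Homework", 0.43), ("Team teaching", 0.06)]:
        print(f"{label}: d = {d:.2f} ({barometer_zone(d)} zone)")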

Anything much less than that deserves less effort: it is marginal. On the other hand, some simple interventions, such as advance organisers, pay off because, although not terrifically effective, they yield a worthwhile return on a very small investment. "Problem-solving teaching" has an effect-size of 0.61 (2009 figures), and comes fairly naturally in most disciplines. But "problem-based learning" overall had d=0.15, and developing it requires a very substantial investment in time and resources. So that's a non-starter, isn't it? Not necessarily; like a potent drug, it needs to be correctly prescribed. For acquisition of basic knowledge it actually had a negative effect, but for consolidation and application, and work at the level of principles and skills, it could go up to d=0.75. Not much use in primary schools, but a different matter on professional courses at university (which is where it is generally found) (Hattie, 2009: 210-212).

Given that we can't do everything, where should we concentrate our efforts?

The majority of innovations and methods "work", according to the meta-analysis (bearing in mind the point above, that unless substantial funding and contractual obligations to publish are involved, most researchers are not inclined to publish negative findings).

But which work really well, and which have such a marginal effect that they are not worth the bother? That is the critical question. Here is part of the answer! (Follow the links for comment; and please note that the comments are my spin on the topics, in relation to post-compulsory education, not Hattie's.)

Influence                                 Effect Size   Source of Influence

Feedback                                  1.13          Teacher
Students' prior cognitive ability         1.04          Student
Instructional quality                     1.00          Teacher
Direct instruction                         .82          Teacher
Remediation/feedback                       .65          Teacher
Students' disposition to learn             .61          Student
Class environment                          .56          Teacher
Challenge of Goals                         .52          Teacher
Peer tutoring                              .50          Teacher
Mastery learning                           .50          Teacher
Homework                                   .43          Teacher
Teacher Style                              .42          Teacher
Questioning                                .41          Teacher
Peer effects                               .38          Peers
Advance organisers                         .37          Teacher
Simulation & games                         .34          Teacher
Computer-assisted instruction              .31          Teacher
Testing                                    .30          Teacher
Instructional media                        .30          Teacher
Affective attributes of students           .24          Student
Physical attributes of students            .21          Student
Programmed instruction                     .18          Teacher
Audio-visual aids                          .16          Teacher
Individualisation                          .14          Teacher
Finances/money                             .12          School
Behavioural objectives                     .12          Teacher
Team teaching                              .06          Teacher
Physical attributes (e.g., class size)    -.05          School

The table above is edited from Hattie, 2003. Note that I am not using the 2009 figures, because there is now so much information that it is very difficult to digest in a form like this, and the broad categories have since been sub-divided. Do read the primary source [insofar as a meta-analysis can be a primary source].

Feedback

(almost three times the average effect size)

...the most powerful single moderator that enhances achievement is feedback. The most simple prescription for improving education must be "dollops of feedback". The effect-sizes for reinforcement is 1.13, remediation and feedback .65, mastery learning (which is based on feedback) .50; more specifically, homework with feedback is much more effective than homework without feedback, and recent reviews point to the power of feedback as a discriminator between more and less effective uses of computers in classrooms. This does not mean using many tests and providing over-prescriptive directions, it means providing information how and why the child understands and misunderstands, and what directions the student must take to improve.

Hattie (1992:4) [my emphasis]

It can well be argued that the older students get, the more cognitive feedback matters. As the quotation above indicates, there is a reinforcement component to feedback which helps to shape behaviour, but cognitive feedback also has clear information in it, which helps the student to correct errors and polish performance.

Given that Hattie's meta-analyses are very general, it might be expected that this influence is greatest in skill-based learning, not only in the psycho-motor domain, but also in other convergent areas such as language learning and mathematics. While still important, its influence on learning in the humanities and creative subjects may be less direct.   

But! Hattie has since revised and refined his view of feedback:

The mistake I was making was seeing feedback as something teachers provided to students. They typically did not, although they made claims that they did it all the time, and most of the feedback they did provide was social and behavioral. It was only when I discovered that feedback was most powerful when it is from the student to the teacher that I started to understand it better. When teachers seek, or at least are open to, feedback from students as to what students know, what they understand, where they make errors, when they have misconceptions, when they are not engaged, then teaching and learning can be synchronized and powerful. Feedback to teachers helps make learning visible.

Hattie, 2009: 173 [my emphasis]

In this blog-post I link to and discuss an interesting alternative view, applied to students undertaking complex learning.

Students' prior cognitive ability

(Two and a half times the average effect size)

This is of course largely beyond our control. According to Hattie what students bring to their learning accounts for 50% of the variation of achievement; but even so, 30% of the variation is still down to teaching variables. On this topic, see "Intelligence" and "Student baggage".  

In 2009, Hattie reported (p.43) that "Students have reasonably accurate understanding of their levels of achievement", and their "Self-reported grades" had the number-one-ranked effect-size, at d=1.44.

Instructional quality

(Two and a half times the average effect size)

So 30% of what makes a difference is in the hands of teachers. Hattie emphasises that teachers make a difference, and also goes on to point out the differences between expert teachers and merely experienced ones, based on empirical research, for once:

    We identified five major dimensions of excellent teachers. Expert teachers

    • can identify essential representations of their subject,
    • can guide learning through classroom interactions,
    • can monitor learning and provide feedback,
    • can attend to affective attributes, and
    • can influence student outcomes

Hattie, 2003:5

He goes into much more detail than it is possible to pursue here, but it is well worth reading. For my own take on the nature of expertise, which is I think reasonably consistent with Hattie and Jaeger's work, see this paper. 

Direct instruction

(Twice the average effect size)

This influence has an effect-size of less than one, but still double the average; it is also more controversial. "Direct instruction" is what we often term "teacher-centred" rather than "student-centred" teaching; it is traditional teaching rather than discovery learning, for example.

In a nutshell: The teacher decides the learning intentions and success criteria, makes them transparent to the students, demonstrates them by modeling, evaluates if they understand what they have been told by checking for understanding, and re-telling them what they have told by tying it all together with closure.

(Hattie, 2009: 206)

There is substantial evidence to support this finding in compulsory schooling, but perhaps rather less in the post-compulsory sector. This is an area in which considerations such as the nature of the assessments used to generate the initial data, and indeed their cultural settings, might make a considerable difference. It may indeed sit slightly uncomfortably with some of the observed features of the practice of expert teachers.

Going back to research in the '70s (old enough not to reference), the argument may be that indirect, student-centred teaching is easy to do, but difficult to do well. We might also take into account that the meta-analysis does not discriminate between cultural settings, and in some of those (particularly in what Biggs calls "Confucian-heritage" countries) anything other than direct instruction is regarded as perverse.

Remediation/feedback

(One and a half times the average effect size)

I don't know how the reported effect-size relates to the overall figure for "feedback" or quite how "remediation" is construed, but I imagine it refers to specific advice on how to improve performance on an assessment, sometimes nowadays known as "feed-forward". 

Students' disposition to learn

(One and a half times the average effect size)

Also known as "motivation". But remember that the vast majority of students/pupils covered in the original research had little alternative but to attend school. "Post-compulsory" education is by definition not overtly compulsory, but that does not necessarily mean that all our students are enthusiastic about learning (in case you had not noticed). Different circumstances can produce different results.

Highly motivated students do better, if only because they are prepared to put time into learning; but whether moderately and minimally motivated students do correspondingly worse, on a linear scale, is unproven.

In 2009, Hattie drew particular attention to the issue of removing demotivators (Herzberg's "hygiene factors"), and to the fit between motivation and students' feeling in control of their learning experience, arguing that the current emphasis on testing and external accountability could undermine internal motivation (pp. 47-49).

Advance organisers

(Just below average effect size)

OK, I devoted a page to them (or at least part of one), and they are only averagely effective. On the other hand, they involve very little effort, and every little helps.

Hattie reports (2009: 167-168) that written advance organisers are less effective than non-written ones, and that they work less well with lower-ability and less knowledgeable learners. That of course makes sense, because you do need some previous experience and knowledge to hook the new material onto.

Computer-assisted instruction

(Only three-quarters of average effect size)

I read into this, and others of Hattie's papers, that much of the data was gathered starting in 1987: pre-Windows, pre-internet. Things are changing rapidly on this front, but I still have reservations, this site notwithstanding (lovely word! What does it mean?).

Indeed, the earliest meta-analysis cited in 2009 goes back to 1977; a further 25 are pre-1990 and the next 25 pre-2000, out of 114 meta-analyses in total (2009).

Hattie identifies the following conditions for effective use of technology:

The use of computers is more effective when...

  • there is a diversity of teaching strategies
  • there is teacher pre-training in their use as a teaching and learning tool
  • there are multiple opportunities for learning (e.g. deliberative practice, increasing time on task)
  • the student, not the teacher, is in "control" of learning
  • peer learning is optimised
  • feedback is optimised

(Hattie, 2009: 220-227, edited)

Individualisation

(One-third of average effect size)

I told you so! The primary manifestation of individualisation is concern with "learning styles". The issue is being plugged all over the place, including by official bodies such as Ofsted and the Department for Education and Skills. As the Coffield et al (2004) study shows, most of the much-hyped and purported assessment schemes have neither validity nor reliability, and the effort required to take them into account is clearly disproportionate to the pay-off.

On the other hand, one of Hattie's characteristics of expert teachers is that they "have high respect for students" (Hattie, 2003:8). That, in my book, involves getting to know them, and getting a feel for how they learn, and responding to that, rather than to some mechanistic score on a dubious scale.  

Behavioural objectives

(One-third of average effect size)

If anything performs less well than individualisation, it is an insistence on behavioural objectives. Again, the data refer mainly to school learning, so the picture may be different in the post-compulsory sector, particularly if the focus is on work-related training. There is nothing wrong (and a lot right) with being clear about what you are planning to teach; it's an integral part of the "direct instruction" model mentioned above. But a rigid insistence on merely behavioural objectives probably results in sterile, boring teaching, the effect of which outweighs any gain, particularly when you take into account the mental and epistemological contortions necessary to find behavioural correlates for an appreciation of "King Lear". (Shorn of the jargon: they are more hassle than they are worth.)

References

Hattie J (1992) "What Works in Special Education", presentation to the Special Education Conference, May 1992 [on-line: Acrobat file, NZ]. Available: http://www.education.auckland.ac.nz/webdav/site/education/shared/hattie/docs/special-education.pdf (accessed 15 September 2011)

Hattie J (2009) Visible Learning: a synthesis of over 800 meta-analyses relating to achievement. London: Routledge. (Hattie and co-author Gregory Yates have since drawn out the lessons of much of the material in Visible Learning and the Science of How We Learn, Routledge, 2013.)

For another major meta-analysis, see Marzano R J, Pickering D J and Pollock J E (2001) Classroom Instruction that Works: research-based strategies for increasing student achievement. Alexandria, VA: Association for Supervision and Curriculum Development; and the study underpinning it, Marzano R J (1998) A Theory-Based Meta-Analysis of Research on Instruction. Aurora, CO: Mid-continent Research for Education and Learning.

Hattie's and Marzano's work forms the basis of Geoff Petty's highly recommended book Evidence-Based Teaching: a practical approach. Cheltenham: Nelson Thornes, 2006.

A page of links for further information on meta-analysis in educational research from the University of Durham

All about effect size

Hattie's home page with further resources

Here is Hattie's inaugural lecture from which later stuff is drawn.

A substantial article on Hattie and his ideas from the Times Educational Supplement (23 September 2012)

An excellent short critique of meta-analyses in general and Hattie's work in particular

Note that Hattie is working across all educational settings, all ages and all cultures.

[revised: 26.03.14]
