The Problem With WAMs

The Weighted Average Mark discourages students from taking risks at best, and is simply inaccurate at worst. So why do we still place so much faith in it?

By Toby Walmsley

We have all been assessed on our educational ability. Over the last 15 years, the marks I’ve achieved on tests have granted or denied me opportunities, and this pattern has only intensified as I’ve attended university.

At UNSW, the system for determining your academic standard is called the Weighted Average Mark (WAM). The system is remarkably simple: your WAM is the mean of your overall assessment marks for every subject, “weighted” by how many units of credit the courses are. It is used to determine whether a student is eligible to change their degree or access scholarships, and is an indicator to employers about the talent of a graduate. It is clearly important then that we have faith in the legitimacy of this score.
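As a formula, the calculation is straightforward: sum each course mark multiplied by its units of credit, then divide by the total credit. The sketch below illustrates it in Python; the marks and credit values are invented for the example.

```python
# Illustration of the WAM formula: a mean of final course marks
# weighted by each course's units of credit (UoC).

def wam(results):
    """results: list of (mark, units_of_credit) tuples."""
    total_credit = sum(uoc for _, uoc in results)
    weighted_sum = sum(mark * uoc for mark, uoc in results)
    return weighted_sum / total_credit

# Hypothetical transcript: two 6-UoC courses and one 3-UoC course.
results = [(75, 6), (82, 6), (60, 3)]
print(round(wam(results), 2))  # → 74.8
```

Note that a 3-UoC course pulls the average only half as hard as a 6-UoC one, which is the entire sense in which the mark is "weighted".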

I do not have strong faith in our current system. Not only is the WAM open to a wide variety of contingencies that would allow for undue variance in achieved scores, but the way in which the WAM is calculated discourages students from taking risks, a key element of genuine learning.

The Assumptions Behind “Averaging”

The inherent problem with the WAM is that marks across a variety of subjects are synthesised into a single score. This requires some kind of equality of marks between subjects: a student equally talented and hard-working in both microbiology and English literature should expect to get similar marks for both subjects. When we begin to draw comparisons between the marks obtained in different subjects, we necessarily need a way to quantify the difficulty of subjects, so that a low mark in a difficult subject can correspond to a high mark in an easy one.

Determining the difficulty of a course and the talent of a student is often precarious. Schools and faculties will compare the marks obtained by students in a course from year to year and subject to subject to establish whether a course has a higher or lower than expected average. Often, administrators need to be careful with these averages: some courses will attract high quality students and others will encourage high quality work, in which case the averages will be higher than the norm. Sometimes, however, courses will not have a good reason to have a higher or lower than expected average, in which case the school or faculty will suggest to the convener that students be marked to a curve.

But “marking to the curve” is not a silver bullet. The method ranks students against their peers and adjusts their marks accordingly, so that a student who scores well above the class average receives a correspondingly strong final mark. It can certainly be an incredibly useful tool, as it ensures that contingencies, such as a particularly difficult exam, will not distort an individual’s final marks. However, marking to a curve comes with its own set of problems.

When a course convener chooses a grading curve, their only option, ultimately, is to predict the mean mark, the degree of variation, and whether the class will be skewed towards higher or lower marks. Often, the opaque mathematics of marking to a curve covers up the inherently subjective aspects of marking and of designing examinations.
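To make that subjectivity concrete, here is a minimal sketch of one common curving scheme (not necessarily the one UNSW uses): linearly rescaling raw marks so the class matches a target mean and spread. The target values below are assumptions, and choosing them is exactly the judgement call described above.

```python
# A simple linear curve: shift and stretch raw marks so the class
# hits a convener-chosen mean and standard deviation. Students keep
# their rank order; only the centre and scale of the marks change.
import statistics

def curve(raw_marks, target_mean=65.0, target_sd=12.0):
    mean = statistics.mean(raw_marks)
    sd = statistics.pstdev(raw_marks)
    return [target_mean + (m - mean) / sd * target_sd for m in raw_marks]
```

Whatever values the convener picks for `target_mean` and `target_sd` become every student's final marks, which is why the prediction matters so much.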

A stark example of this is a subject I took last year. The final exam was so difficult that 15 marks were automatically added to everyone’s score in response. How was the figure of 15 determined? Intuition about the quality of the class, I gather, was key. No such adjustments are explained alongside your final mark, however.

This highlights that, despite any attempt to quantify difficulty or make marks for individual subjects “fair”, it is impossible to quantify difficulty properly. Staff do their best to ensure that the way courses are marked is checked and balanced against other courses in their schools and faculties, but as Dr Upton, Academic Programs Manager for the Faculty of Engineering, told me, “[the marking system is] certainly not perfect and it cannot be objective as it involves human judgements along the line.”

This is an honest way to view the WAM, but one that is perhaps not wholeheartedly adopted by the university, given that the WAM is used consistently as a clear measure of performance. The fact remains that, when the scope for error in each subject accumulates over the length of an entire degree, two identical students in two different years could end up with vastly different WAMs. Why should this variance be tolerated, when your WAM is so often the gateway to scholarships, program transfers, and job opportunities?

Academic “Strategy”

What makes the variations in marks between students worse is that many students understand the way in which the WAM works, and play it to their advantage.

If a student wants to maximise their WAM, then subjects have value only insofar as they generate marks. This translates to skipping unnecessary classes, ignoring non-assessable content, and participating as little as possible. Knowledge ceases to be an end in itself and becomes merely a means to higher marks. When marks are the key goal of a subject, the subjects themselves become “games” played for those marks. This creates an awful cycle for course conveners and students: conveners must make more of the class assessable so that students are incentivised to learn it, but doing so reduces the quality and quantity of content that can be taught.

This has resulted in, in my own experience, assessments like multiple choice exams for philosophy courses – a format that is clearly unsuited for the learning outcomes of the course, but highly suited to following the assessable content of the syllabus.

Within individual subjects, students are more likely to stick with comfortable opinions and arguments instead of trying new approaches or studying new areas. Since the WAM rewards high assessment marks over every other metric, a strategic student would aim to secure those marks with as little effort as possible, especially given that time effectiveness is a key part of completing all tasks in a semester.

When it comes to choosing subjects, a strategic student would choose classes well within their ability, so as to ensure they sit at the top of each class they attempt. An easy way to do that is to pick classes below your level of expertise, or to surround yourself with classmates of lower ability.

These three strategies are choices that I guarantee nearly every student has employed to some extent. Employing them makes it more likely that you will get a good WAM, but less likely that you will learn deeply. Critical learning requires you to take risks. It takes more than cramming content. It means testing new approaches and rejecting those that fail, learning critical skills, and being unintimidated by challenge.

What do we do now?

Despite this critique, I sympathise with the inherent problem the university faces: balancing genuine teaching with meaningful ways to measure students’ abilities.

There are clearly forces within the university that are attempting to change the culture of marking for the better. Richard Buckland, from the School of Computer Science, has talked extensively about how to motivate learning without assessment. He found that by designing courses around alternative sources of motivation – such as competition, passion or challenges – course participation rose significantly. Of course, once the measurability of the course was removed, it was hard to tell whether the new approach was successful. But given that the Times University Ranking 2016 recently rated UNSW’s Computer Science program as the 51st best in the world, and the QS Subject Ranking 2016 placed it at #35, it seems very likely that this approach has had a positive impact on his students.

This attempt at encouraging experimentation through creative course design has the potential to catch on. Dr Upton, the Academic Programs Manager in the Engineering Faculty, explained that, although the faculty encouraged staff to design creative and challenging assessment tasks, “we are aware of the need to ensure students have sufficient technical knowledge to be able to apply creative approaches so we do still have a strong focus on examinations and so on.” Despite the continued focus on technical assessments, he said there is much potential for change.

“This is changing though so watch this space.”

However, reducing dependency on marks to motivate students has not been the broad push within the university. Given the current push towards trimesters, and the potentially harmful way in which the university is intending to implement online content, there is also a clear drive towards courses needing to have easily quantifiable, although not necessarily challenging, assessments. And given how consistently the WAM is used to determine a student’s opportunities, this is only going to further encourage the negative learning practices I outlined above.

To give credit to the university administration, there seems to be an acknowledgment of the precariousness of using the WAM as a clear measure of academic performance. As the Deputy Vice Chancellor (Education) Merlin Crossley said to me, “Unflinching faith in anything is the exception rather than the rule. We are like umpires – because we make the tough calls, the game can proceed.” Although it is reassuring that the administration is aware that the WAM is a rough measure, this awareness is not always reflected in its own (and external stakeholders’) policies. Certainly, the importance currently placed on the WAM does far too much to encourage students to act strategically for marks rather than passionately for learning.


I believe that measuring progress and quality teaching are compatible. However, it is clear that UNSW currently has an imbalance in favour of measurement. Given the inherent inaccuracies of the WAM, and the way in which it encourages students to form a negative relationship with learning, we ought to abandon our unflinching faith in a standardised mark as a measure of performance, and treat the WAM as a rough estimate rather than a hard cut-off. This requires that, as students, we reform our relationship with our marks and ensure we are not compromising our education for strategy. As educators, this approach to learning needs to be encouraged institutionally far more than it currently is. At the moment, the way we measure marks is causing educational quality to suffer. That is cause for concern.