The Great University Con


by David Craig


  A 2013 Which University? survey supported this contention, suggesting that the average student workload was around 900 hours per year, 25% less than the 1,200 hours assumed by both universities and their regulatory body for academic standards, the Quality Assurance Agency. The Times Higher Education Supplement quoted Richard Lloyd, executive director of Which University?, saying that this: “...raises questions over standards and whether students are being pushed hard enough.”100

  The same year, the Times Higher Education Supplement analysed data from the Higher Education Policy Institute/Which University? student hours survey. This revealed significant differences in student workload between types of universities, showing that students at research–intensive universities had higher average workloads than their counterparts at new universities.101 Does this mean that the quality of teaching is so much better at some universities that they only require part–time learning for a full–time course? Or does it mean that less is taught and less is learned? It is hard to reconcile significant differences in workloads with the idea of a consistent standard and similar costs for UK degrees across all of our universities.

  The only consistency that this research found in the workloads of UK undergraduates was that they were, on average, significantly lower than those of their international peers:

  “(UK) Students typically receive an average of about 14 hours tuition a week and spend 12 hours in private study. This 26–hour workload compares unfavourably with European figures suggesting 41 hours in Portugal, 35 in France, 34 in Germany.”102

  In other words, UK–based students do considerably less work for their degrees than their European counterparts (Figure 1).

  Figure 1 - European undergraduate weekly work (hours) CHERI project 2007 103

  Perhaps UK–based students are more intelligent and/or UK academics more talented than their European counterparts? Alternatively, maybe UK degrees require less learning to pass and are therefore of a lower standard?

  Assessment

  “On an exam paper I answered less than half the questions, but still ended up with over 50% marks.”

  Engineering graduate

  The ability range of students has widened, the average quality has declined and their workload has diminished, yet the grades that they are achieving have improved dramatically during expansion. Somewhat miraculously, this trend has occurred across almost all UK universities in almost all subjects.

  Traditionally, universities provide two forms of assessment: formative and summative. Formative assessment (e.g. essays) relates to informal work that doesn’t count towards a final degree classification. Summative assessment (e.g. exams), on the other hand, does count. Formative assessment normally occurs through regular essays and coursework, which are marked by academics and returned with advice and guidance to prompt further study and independent learning. Oxford and Cambridge still require formative assessment, but very few other UK universities do. This type of assessment should lie at the heart of learning within Higher Education, acting as an ongoing dialogue between students and academics throughout a degree.

  Formative assessment is intensive, requiring a significant additional workload for academics and students alike. Classroom–based degrees at Oxbridge require students to produce (and academics to mark) an essay a week. Whilst this places more pressure on students, it also provides room for failure and experimentation and the opportunity to learn without worrying about grades. Professor Graham Gibbs describes it thus:

  “Students work hardest when there is a high volume of formative–only assessment and oral feedback – typically writing essays that don’t count towards their degree result, but for which they have to cover a range of material. This is the Oxford and Cambridge model and used to be the case at most universities 30 years ago.”104

  Failure and experimentation are incompatible with the factory-scale processes expansion has created within universities. The increasing number of students (who should be) attending seminars and lectures has spread university teaching resources ever more thinly, reducing time for marking and oral feedback. One of the easiest and least visible areas for universities to cut spending has been formative assessment. Because it isn’t measured, it is easy to dismiss and its value easy to underestimate or understate. Academics might appreciate the importance of formative assessment. But how can they be expected to argue for it when they are already being asked to do more work for more students with fewer resources? Most students can’t be expected to complain about reduced workloads either. As a result, formative assessment has mostly disappeared. Unfortunately, the weakest students at the weakest universities, who have the most to gain from regular and consistent formative assessment, are probably the least likely to receive it.

  The situation with summative assessment, which students undertake in the form of exams and coursework, is not much better. Even if students receive feedback on their summative assessments, it is often confusing and unhelpful, giving them little indication as to how they can improve their work. In 2006, the National Student Survey found that 49% of students were unhappy with the quality of assessment feedback.105 Similar concerns were raised repeatedly in student submissions to the House of Commons Select Committee in 2009. This situation had not improved by 2013, when 78% of students in that year’s Student Academic Experience Survey did not describe their assessment feedback as prompt. In the same survey, fewer than 25% of students described academics as “putting a lot of effort into commenting on their work”.106

  Whilst these findings are concerning, the real fault lies not with academics but with a university system which has failed to adapt or adequately resource its assessment processes post expansion. A three– (or, as is often the case now, a two–) hour exam is a rather primitive method of investigating how well a student has understood a subject, yet it remains the dominant model of assessment in universities. These exams were designed for bright academic students in relatively small groups.

  The problems this poses become evident when we consider the mechanics involved in a student completing, and an academic marking, a three-hour exam script. The average student might write four A4 pages an hour over a three-hour period, and many will write substantially more. This means at least 12 pages per exam script. In the worst-case scenario, the exam scripts are for a module with 400 or more students, giving a minimum of 4,800 pages of marking. The normal academic week is 37.5 hours and the average exam marking turnaround time is two weeks. This means 75 hours (4,500 minutes) to read and mark 400 scripts comprising 4,800 pages, leaving less than one minute per page, assuming the academic doesn’t take too many coffee or comfort breaks. Usually, the academic will put in a large amount of overtime during this period to mark the scripts. But even an extra 20 hours each week isn’t going to make much difference to the limited amount of time spent on each page. Moreover, the academic’s other work will also continue during this period.
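  The marking arithmetic above can be verified with a short back-of-envelope calculation. All of the figures are the assumptions stated in the text (four pages an hour, a three-hour exam, a 400-student module, a 37.5-hour week and a two-week turnaround), not measured data:

```python
# Back-of-envelope check of the exam-marking workload described above.
# Every figure here is an assumption taken from the text, not measured data.

PAGES_PER_HOUR = 4        # A4 pages an average student writes per hour
EXAM_HOURS = 3            # length of the exam
STUDENTS = 400            # module size in the worst-case scenario
WORK_WEEK_HOURS = 37.5    # the normal academic week
TURNAROUND_WEEKS = 2      # average exam marking turnaround time

pages_per_script = PAGES_PER_HOUR * EXAM_HOURS               # 12 pages
total_pages = pages_per_script * STUDENTS                    # 4,800 pages
marking_minutes = WORK_WEEK_HOURS * TURNAROUND_WEEKS * 60    # 4,500 minutes
minutes_per_page = marking_minutes / total_pages             # under one minute

print(f"{total_pages} pages to mark in {marking_minutes:.0f} minutes")
print(f"{minutes_per_page:.2f} minutes per page")
```

  Running this confirms the text’s figure: roughly 0.94 minutes per page, before any overtime is added.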

  The subjects being marked and the marking criteria are also complex, making it difficult to read at pace whilst providing fair and consistent marking. How do academics cope with fatigue, stress or even RSI? One possible answer was suggested by a graduate interviewed for this book:

  “.... within a week you’d have your mark back and some exams have three or four hundred students sitting them. I asked to see my papers and on more than half of them there wasn’t even a tick mark on them. I don’t believe that these lecturers had actually read some of the exam papers. I think that there was a lot of skim reading with no real structure for the marks.” Finance graduate

  Assessment is also probably easier when the majority (around 67%) of grades awarded nowadays are Firsts or Upper Seconds. In many cases, academics marking papers may just be asking: are the basic points present in the essay, is the writing comprehensible, do the answers contain references to reading? If so, then it’s probably a First or an Upper Second.
  Learning, teaching and assessment lie at the heart of Higher Education and expansion has seen a massive dilution in the quality of each of these areas.

  CHAPTER FIVE: STANDARDS: DUMBING DOWN

  UK universities confer over 420,000 undergraduate degrees each year. These are awarded within one of five categories: Firsts, Upper Seconds (2:1), Lower Seconds (2:2), Thirds and General or Unclassified degrees. Firsts are the best grades and Unclassified the worst.

  Many employers now specify a minimum requirement of a First or a 2:1 for potential applicants. As a result, most undergraduates worry constantly about whether or not they will achieve a 2:1. In a 2012 Times Higher Education Supplement survey, 72% of students listed achieving a 2:1 degree as among their top concerns.107 Statistically speaking, today’s students shouldn’t worry too much. In 2015, the Higher Education Statistics Agency recorded that 67% of graduates achieved either a First or a 2:1.108 So the majority of today’s students will get one of these top two grades. This has neatly inverted the classification pattern from 1970, when roughly 33% of students achieved one of these two top grades.109 Figure 1 shows the different proportion of degree classifications awarded in 2015 compared to 1994.

  Figure 1 - Degree awards in 2015 and 1994110

  The proportion of first class degrees awarded has tripled during this period from 7% to 21% and the proportion of degrees awarded at 2:1 or above has risen from 49% to 67%. For this to occur, whilst the average ability of students has declined and the average teaching hours per student have reduced, is a truly miraculous achievement and surely something to be celebrated. As one would expect, the grades at the bottom of the classification system have declined by a similar amount to the increase in top degrees. UK universities now award a lot fewer 2:2s than they did 30 years ago and slightly fewer third–class and unclassified degrees.

  This pattern can be seen across almost all universities and almost all subjects, but the rate of degree inflation is most marked in elite universities. In a submission to the House of Commons Select Committee in 2009, Professor Mantz Yorke stated that:

  “...the period 1994–2002 showed that the percentage of ‘good honours degrees’ […] tended to rise in almost all subject areas... the rises were most apparent in the elite ‘Russell Group’ universities.”111

  The trend towards better degree classifications is most pronounced at Oxford and Cambridge. Both universities now award around 25% of their students a first class degree. This has not always been the case, as the think tank Civitas pointed out in a 2005 report: “In 1960, Oxford awarded 8.5% Firsts and 33% Thirds. In 2002, the number of Firsts awarded was 23% and Thirds, 8.5%.”112

  There are two possible explanations for this apparently admirable academic achievement. The first is optimistic. It could be the result of brighter students, better prepared to learn and absorb knowledge by schools and universities using improved teaching methods and new learning technologies. The second, more pessimistic explanation is that these performance improvements are nothing of the sort. Instead, they are just another result of the Great Expansion at work – a savage devaluation that creates a gap between the nominal value of these grades and the actual level of learning and understanding that students have achieved.

  The evidence for the second explanation is convincing. There are not even yearly variations around a general upward trend in degree classifications. Instead, as with A-levels and GCSEs, there is year-upon-year incremental improvement, almost regardless of subject or university. Common sense might suggest that in some years the performances of staff or students would dip. Instead, we have near-universal and constant improvement, an unlikely statistical phenomenon with all the authenticity of Soviet tractor production statistics. This raises the question of who is responsible for the maintenance of degree standards.

  Universities are independent organisations with the right to confer their own academic qualifications. They do, however, have nominal checks and balances on this right. Firstly, university courses are reviewed by the Quality Assurance Agency (QAA), a regulatory body set up by the government to monitor the standard of teaching in universities. Secondly, universities also have external moderators on each credit-bearing course that they provide. These are subject experts from other universities who moderate and validate samples of students’ work and academics’ marks, putting their stamp of professional authority and expertise on the university’s grades.

  The reality is that both the QAA and the external moderator system are paper tigers. At the parliamentary inquiry (17 July 2008) the chairman of the House of Commons’ Select Committee on Universities condemned the QAA as ‘a toothless old dog’ and declared that the British degree classification system had ‘descended into farce’. Neither the QAA nor external moderators have much power to safeguard against falling standards and grade inflation. This arrangement is not accidental. It operates the way that universities, funding bodies and the government intended it to – quality assurance in name rather than function. If it means that 67% of students now receive Firsts or 2:1s, then 67% of students and their parents are unlikely to complain. This perception is likely to change, however, when these same students start receiving rejection letters from company after company because they are now competing against so many other students with their own Firsts and 2:1s.

  Complaints can certainly be heard from many graduate employers who find that our degree grades are not stable against time and certainly not consistent across universities. The Association of Graduate Recruiters has voiced concerns that the 2:1 has become devalued and is not trusted as a guide to a prospective candidate’s ability:

  “Companies are dropping their requirement for graduate recruits to have a 2:1 degree because they believe the grade is being handed out inconsistently and can no longer be relied on to represent a high level of achievement.”113

  The same point has been made by Professor Alan Smithers, from the University of Buckingham: “...employers no longer fully trust degree results, and tend to look back to A–level results as a more reliable indicator. A First is no longer a First.”114

  The issue of degree inflation was first raised fourteen years ago in Parliament by Lord Matthew Oakeshott, a Liberal Democrat peer, who described these figures as:

  “...giving a sense of ‘dumbing down’ in Britain’s universities. Students and prospective employers need assurance that degree standards are reliable and stable, not devalued currency.”115

  Perhaps the most compelling evidence of a problem can be found in the huge differences between the average marks required by different universities to achieve different grades. Traditionally, a student was required to achieve an average mark of 70% or above across all of their exams to achieve a First. This is no longer the case, as a Civitas report from 2005 outlined:

  “…there is a large disparity between institutions in the marks required to achieve a first class degree, ranging from 68.7% at the University of East Anglia, to just 50.8% at Sunderland University. Moreover, on average, it is the newer universities that require lower marks for a First.”116

  In 2008, Peter Williams, the head of the Quality Assurance Agency, the organisation meant to safeguard academic standards at universities, admitted to the Guardian that: “…the current degree classification system is arbitrary and unreliable. The way that degrees are classified is a rotten system…. It just doesn’t work anymore.”117

  In October 2009, a new Chief Executive was appointed and measures were put in place to strengthen the QAA’s reputation. But in 2012, the Science and Technology Committee of the House of Lords concluded that the QAA was still not fit for purpose. However, in 2016 the Minister for Universities, Jo Johnson, declared:

  “Our HE (Higher Education) system is internationally renowned... Underpinning this reputation is our internationally recognised system of quality assurance and assessment, which we are updating to meet future needs in an increasingly diverse HE system. The UK Quality Code is central to this quality system and has been for many years.”118

  That should hopefully reassure anyone who was concerned about possibly falling standards in our degree factory universities.

  Degree devaluation

  Grade inflation is just one aspect of declining standards within universities. In 2009, the House of Commons Select Committee responsible for Higher Education took this problem seriously enough to undertake its own investigation into university standards. The Select Committee and its chairman, Phil Willis, were highly critical both of the evidence provided by Higher Education leaders and of their attitudes towards the issues raised during the committee’s investigations. The report declared “... the system for safeguarding consistent national standards in England to be inadequate and in urgent need of replacement”. It also accused vice chancellors of “defensive complacency” in their reactions towards criticisms about falling university standards.119

  Interestingly, at the same time that the House of Commons undertook its investigation into university standards, the Higher Education Funding Council for England (HEFCE), the governing body of English universities, conducted its own research into teaching and learning quality. The HEFCE’s findings could broadly be summarised as “nothing to see here, move along please”. Phil Willis seemed unimpressed, declaring that this: “… proved the university sector’s arrogance and refusal to accept independent criticism. I find it enormously dissatisfactory that the agencies are utterly complacent about challenging standards.”120

  This was more than a minor disagreement. Phil Willis, a senior politician with an in-depth knowledge of UK Higher Education, was scathing not so much about the mismatched contents of the two reports. Instead, he appeared irritated by the underlying attitude of the Higher Education sector when genuine concerns about quality were raised with it. This “defensive complacency” meant that university leaders made few attempts to engage with the issues raised by the evidence submitted to the Select Committee. Instead, the standard response from university leaders was that these were clearly isolated incidents which wouldn’t be found happening at their universities. The anger of vice chancellors was apparent at their annual conference in 2009. The head of Universities UK (the body representing vice chancellors) described the Select Committee as running “a sustained campaign of scepticism”. Michael Brown, vice chancellor of Liverpool John Moores University, argued for: “…a consistent approach to try to head off the Parliamentarians’ obsession, which is not based on great substance…. MPs have to be headed off at the pass before it gets too silly.”121

 
