Monday, 9 May 2011

The State We're In (Part 2)

I’m acutely conscious that these observations on the state of British Higher Education, ‘as it appears to me’ from my not-very-exalted position, may represent no more than ‘stating the bleedin’ obvious’. Nonetheless I’m persevering partly to put my own thoughts in order and partly because the reflection involved even in stating the bleedin’ obvious is sometimes worthwhile. Part 1 suggested that the principal problem facing us has been the division of the HE sector into competing ‘cells’. With this situation in place, governments of both major political parties (neither has a spotless record in this regard) have, by appealing to local advantage, demonstrably been able to force through whatever cock-eyed policy initiatives they have wanted.

The general dynamic was set out in Part 1. As a practical case study, let’s take the obvious example of something that is, as I called it in Part 1, 'fundamentally antithetical to the furtherance of any real educational ideal': Impact. There’s no need to waste words on what a profoundly stupid, half-baked idea this is (let’s remember, too, for the sake of balance, that it was a typical New Labour idea). Other people have done it far better than I could (Simon Blackburn, my favourite current British philosopher of the analytic tradition, waxes lyrical here and here). I was (and am) strongly opposed to the nature of this scheme; I’m not opposed to a requirement that publicly-funded research be made publicly available or accessible, but that is a quite different matter. When Impact was first proposed I circulated around my department some reasons for my opposition. Such feedback as I got was generally supportive, my colleagues being sound folk on the whole, but what disappointed me was a response from a colleague for whom I have enormous respect as a historian and as a human being, to the effect that ‘this would be good for us’.

This is absolutely symptomatic of the dynamic I set out in Part 1. The perception took hold that we could improve our RAE/REF standing through the Impact Agenda. So, no matter what the general principles of the business might have been, no matter whether a general front for the good of the discipline might have been desirable, if it worked for us we’d support it, and damn any other history departments that didn’t fit the bill as well. And even individuals of profound intelligence, decency and humanity got ensnared in this way of thinking. At a lower level, damn your colleagues who don’t work on British history, for whom Impact projects are that much more difficult to devise (the cells have cells within ‘em). It only takes a certain number of cells to think like this for the measure to get implemented, just as it only takes a certain number of individuals within each cell to persuade the cell to support it. There comes a tipping point. At the institutional level it comes when the institution decides that something ‘won’t go away’ (Classic Gutless Staff-Meeting Pseudo-Arguments no. 94) and that it had better implement some initiatives to make sure that, next time at least, it does OK in this category. On an individual basis it comes in what I call, in my vernacular, the ‘What the f*ck’ moment: the moment where the individual thinks ‘Ah, what the f*ck; I suppose I’d better see what I can do for my CV’ … because once the policy is implemented it will quickly appear among the criteria for promotion and the like. Indeed institutions start appointing whole offices of new administrative staff to oversee and implement Impact projects …

The problem went further, though, than the splintering of any general front presented by History. At the next level up, History perceived, on balance, that it could live with Impact, so damn the rest of the Humanities. Things may have changed. The open letter published in the Higher about the AHRC and The Big Society was signed by 26 learned societies. But there was no sign of this with Impact. The RHS decided that 15% of the REF was OK ‘because this was no time for the Humanities to suggest they had no wider relevance’. No matter that no consultation between the RHS and the other learned bodies of the humanities seems to have taken place. No matter that the disciplines most widely held to be ‘relevant to wider society’, the hard sciences, spelled out very vocally indeed their opposition to Impact. No matter that our sister disciplines in the modern languages, philosophy, literature and so on would all find it extremely difficult to work with the Impact agenda. This, too, was very disappointing. The whole business graphically illustrates how the possibility of any sort of unified front is shattered by the way that the sector is divided along all sorts of axes into all sorts of contingently-existing cells, all now perceiving themselves as in competition with each other. If the AHRC refuses to budge on the issue of The Big Society within its funding priorities, it will not be long before we hear that, because ‘it won’t go away’, there are various things ‘that we already do’ that ‘can easily be put under that heading’. Mark my words. You heard it here first. Those 26 learned societies will start thinking about the possibilities for their own cell. Already, as this item reports, various VCs, vice-masters, etc., are queuing up to talk about the positives of The Big Society and how it includes what universities already do. This is how it starts, working to the Führer…

Obviously I don’t know for sure, but I can’t help feeling that, thirty years ago, this situation would have caused academics to look somewhat askance. Indeed, in David Lodge’s novels written at about that time, initiatives like this appear as obviously satirical exaggerations. How did we get into this mess? The problem surely came with the introduction of the Thatcherite mantras of choice and competition. The clever move here was that choice and competition came alongside auditing and transparency. Now, there is surely no sustainable argument against the viewpoint that recipients of public money should have to prove that they are doing what they are supposed to be doing with that money, that they are doing their jobs to a level and consistency that justifies their receipt of public money, and that therefore public money is not being wasted. That seems unobjectionable to me. I’ve said it before and it has not made me popular, but I’ll say it again: no one deserves a publicly-funded salary just for being clever (let alone, as sometimes still seems to be the case, for having been thought clever when they were 25). If you want to draw a professorial salary but don’t want to (or can’t) do what the job – as currently set up – requires of you, whether you like it or (like most of us) not, then the solution is to find a private benefactor.

The trick, though, was then to put the results of these fundamentally unobjectionable audits, especially of research, into numerical form and thence into league tables. This was where the mantras of choice and competition reared their ugly heads. Competition is supposed to be good for quality, and transparency about the results good for the sacred cow of ‘choice’. Neither element of the equation stands up to very close scrutiny. Here is Stephen Fry pouring scorn on the idea that a greater range of choice is necessarily a good thing in terms of quality. That was about 20 years ago, but this neo-liberal idea has become no less ludicrous, nor – alas – any less current, in the interim. Little more needs to be said, really. Which is better: a choice of a million kinds of crap or a choice of five quality products…? Whether choice, competition and the league tables held to be essential to the maintenance of the first two actually raise quality seems to me very much a moot point. What the process seems to me – empirically – to do is simply to encourage the production of the right sorts of thing: the sorts that yield the right numerical data to improve one’s score.

Anyone familiar with the history of GCSEs and A-levels over the last 25 years will know how this works. In an effort, allegedly, to raise standards by introducing choice and competition, league tables of GCSE/A-level performance, school by school, were produced. To improve their scores, schools demanded transparency (rightly) over the marking of the exams. Eventually this became so algorithmic that teachers could drill their students in how to get the best marks through the simple reproduction of formulae. Numerical data can then be produced, marks obtained, and so on, and league tables published. But as anyone who has had to teach the sorry products of this system will attest, all this transparency, competition, choice and league-tabling has produced anything but raised quality.(1)

The RAE/REF has gone much the same way. For most of the process, departments and institutions spend their whole time trying to get clues about what counts and for how much, and how they can best maximise their scores. Many years ago, David Cannadine (or ‘Sir’ David Cannadine, as he likes to be known) said in his inaugural lecture at London that the process mistook productivity for creativity, and that is – fundamentally – still the case. Sometimes it seems every bit as algorithmic in its procedures as the A-levels (according to what I have heard from actual RAE panel members). If it’s not, it’s the pretty arbitrary (or at best subjective) awarding of a score from 1 to 4 by someone for whose own research one might or might not personally have any regard. The only way to stop it being arbitrary or subjective would be to make it entirely algorithmic, something that may well come if people demand transparency about the scores and procedures, thus far withheld from us. It has also moved the whole issue away from the actual production of quality scholarship and into not entirely related areas – notably, of course, the sphere of getting money (grants). A million-pound grant to publish a list of everyone called Bert in seventeenth-century Rutland? Brilliant! A ground-breaking monograph on a major issue of European history? Meh – well, it’s not A MILLION POUNDS, now is it?

Whilst we’re on the subject of the RAE/REF, it is worth drawing attention to the fact that this now utterly pointless exercise is still ploughing ahead. I say utterly pointless because there’s no longer any real reward for participation by the Arts and Humanities subjects. Their HEFCE grant – the distribution of which was the avowed aim of the RAE/REF – has been cut by 100% (if I am wrong about this, please let me know: this is the most recent piece I could find, and it doesn’t reassure me; I think I read that it might have been a 'mere' 80%). So why are we still bothering? I’ll tell you why. It’s because our universities actually want us to go through these hoops just for the sake of the precious (meaningless) league tables in which they want to do better vis-à-vis everyone else, and probably because individual panel members think it will help them and their careers – vis-à-vis everyone else. If the Arts and Humanities panels, from their chairs down, had anything about them, they’d resign en masse and screw the whole sorry business. But we can’t expect that.  Here we go again, working to the Führer …(2)  You have to admit that getting the cells to wield the stick themselves even after the carrot has been removed represents an astonishing achievement in the comparative history of the state.  It also shows how The Big State can loom very effectively behind a facade of 'small government' (as in the early Roman Empire, I suppose).

So, all the RAE/REF league tables do is generate specific forms of numbers, with little or no actual relation to quality. But if the RAE/REF is bad, then may I introduce you to the National Student Survey? Here is the absolute ultimate in the generation of meaningless numerical data that can be analysed pseudo-statistically and arranged in league tables. For non-UK readers, the aim of the game is to get students to rate their student experience under a series of headings. An average is then taken and – hey presto! – you end up Nth out of 196 (or whatever) in the league table. The institutions then go out of their way to find means of generating better figures. But the whole exercise is a sham. It is based upon some serious category errors. Number one: it assumes that our students are our ‘consumers’, which – let’s be crystal clear about this – they are not. A friend of mine is fond of saying that the process is like trying to assess the quality of local bakers by asking for feedback from their cakes. The ‘consumer’ of our ‘products’ is society in general – employers and so on. Second category error: it assumes that the student is in a position to judge the quality of the product. The NSS provides none of the ‘grade descriptors’ we academics have to work with. The students have no criteria according to which they can judge their libraries – have they used every university library, or even a reasonable sample? And (at least in newer universities like mine) the poor old libraries always come in for a kicking in the NSS, presumably because they don’t have every book easily available at any time. After my own department didn’t do too well in the first NSS, it became clear, in consultation with the student body, that we probably scored badly on ‘feedback’ because our students – bless ‘em – didn’t (for example) even realise that comments on procedural essays counted as feedback. All this leaves aside the fact that good institutions can get penalised for having good, critical students. Many of ours have friends at Oxford and Cambridge and, because they can see that those universities’ libraries are many, many times better than our (by comparison with other 1960s foundations) actually rather good library, they rate it as (say) 3 out of 5.
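To see quite how thin the ‘analysis’ behind that ranking is, here is a minimal sketch in Python – with entirely invented institutions, headings and ratings, I should stress – of what the whole exercise boils down to: collect some 1-to-5 ratings, average them, and sort.

```python
# A toy sketch of NSS-style league-table arithmetic.
# All institutions, headings and ratings below are invented for illustration.
from statistics import mean

# Hypothetical 1-5 ratings, by survey heading, for each institution.
ratings = {
    "University A": {"teaching": [4, 5, 3], "feedback": [2, 3, 3], "library": [3, 4, 4]},
    "University B": {"teaching": [4, 4, 4], "feedback": [3, 3, 2], "library": [4, 3, 3]},
    "University C": {"teaching": [5, 3, 4], "feedback": [3, 2, 3], "library": [3, 3, 3]},
}

# Collapse each institution's ratings into a single mean score.
scores = {
    name: mean(r for heading in headings.values() for r in heading)
    for name, headings in ratings.items()
}

# Sort by mean score, highest first: hey presto, a league table.
for rank, (name, score) in enumerate(
    sorted(scores.items(), key=lambda kv: kv[1], reverse=True), start=1
):
    print(f"{rank}. {name}: {score:.2f}")
```

Run it and you will find that about a tenth of a point on a five-point scale separates each rank from the next (3.44, 3.33, 3.22) – well within the noise of who happened to fill the form in, and in what mood. That, pseudo-statistically speaking, is the whole game.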

This is only the pinnacle of the problem of feedback, which we are now forced to spend so much time dealing with. Now, feedback can be useful. Let me make that clear. If my students tell me I talk too much in seminars, or I speak too fast in lectures, or I have too many PowerPoint slides with too much info on them, then that is useful for me and I can try to do something about it. If, on the other hand, little Johnny Frithfroth-Smythe (Year 1) tells me I should have had less social history and more politics in my lectures, or adopted a more thematic approach, my immediate response is – pretty much – f*ck you, you arrogant little turd! What does a first-/second-/third-year undergraduate student know about how to organise a course? Now it might well be that students just don't have the vocabulary to express what they mean to say, and that what little Johnny meant was 'I am, myself, more interested in political history and I was disappointed that there wasn't more'. Fair enough, but too bad. It's still for me, not them, to decide, and their lack of a useful vocabulary only underlines the problem with feedback-driven HE.(3) What we are trying to produce – after three years – is someone who just might have some understanding of history and how it works; who might be ready to go on to be trained in how to do research and design a history course. If they already knew this in Year 1 there’d be no point in them doing the degree, now would there?

This dimension of the ‘customer’ image is a third category error. If I go into my local Curry’s and buy a DVD player, then I have some (admittedly vague) idea of what a DVD player should do (or be expected to do), and if it doesn’t do it, or doesn’t do it to what I think is a sufficient level compared with what I had to pay, I can go back and complain. To pursue the (rather forced) analogy, what we are trying to do is produce someone who, after three years, might have enough of an inkling about the subject that they can go on to be trained in what a DVD player does or can be expected to do. Wringing our hands about what students think of a first-year course’s content or structure or methodology is akin to the Curry’s staff wringing their hands and being all apologetic when I bring my DVD player back and complain volubly that it won’t make toast – rather than showing me the door in short order, which would be the sane 'business world' response to this level of 'customer feedback'.(4)

The fact that institutions expect us to wring our hands in precisely this way is what leads to all the expectations laid upon the eventual NSS league table – a student is in no position to judge his/her degree, given that they have nothing to compare it with – and to the students’ expectation that they, the customers, are always right. But the league table and its absolutely meaningless data lead to the employment of central admin offices and to people having to waste time ensuring the ‘enhancement of the student experience’. I have a friend who has been saddled with this in his department. I can only assume that he is paying off some sort of enormous karmic debt.

So what do we have at the end of the day? Universities so obsessed with league tables – RAE, NSS, and combined tables of equally meaningless numbers, like the various ‘quality’ newspapers’ ‘University of the Year’ rankings – that they set up offices to manage these figures and get the academic staff to spend their time on ensuring higher scores. I can, for example, think of one university in the north of England that has recently decided that its history department ranks too low in the number of grants applied for, and must therefore apply for more, regardless of whether or not this might actually help its historians produce good history of a sort they are interested in.

What this in turn appears to have led to is the idea that institutions are businesses in competition with each other, and that 'business model' has produced the most regrettable fissure of all within the HE sector: the confrontation between ‘staff’ and ‘managers’, which is what I want to state the bleedin’ obvious about next time.

Notes:
1. The situation, by the way, is far worse in disciplines other than history; I know of modern language departments that habitually fail a dozen or so of a first-year intake, all of whom had to achieve an A or B at A-level to get on the course. I have heard similar stories from science departments about the gulf between A-level and the standard necessary for undergraduate first-year success. We complain that history students have little or no idea of what is required of them when they arrive at university, but our failing even five per cent of an intake with As or Bs at A-level would be a pretty unlikely outcome.

2.  If you think all this is just sour grapes on my part, here is my RAE 2008 submission (you'll probably have to select me from the drop-down dialogue box), which is probably as good as anyone's in the UK.

3.  Again, to evade accusations of 'sour grapes', I should say that I do very well in feedback responses, with results in the general 'how well did you think your tutor did' area invariably in the 90-100% bracket.  I don't have any on-line resource to point you at but there is a Facebook 'appreciation society' if you can really be bothered to look.

4. Aside – a colleague recently had student feedback suggesting that the same seminar should have been run at two different times each week so that students could go to the one that was most convenient to them that week.  I kid you not.  That, it seems to me, is not an issue of the student not having the right vocabulary to hand but a graphic index of the 'consumer mentality'.