The British universities, Oxford and Cambridge included, are under siege from a system of state control that is undermining the one thing upon which their worldwide reputation depends: the caliber of their scholarship. The theories and practices that are driving this assault are mostly American in origin, conceived in American business schools and management consulting firms. They are frequently embedded in intensive management systems that make use of information technology (IT) marketed by corporations such as IBM, Oracle, and SAP. They are then sold to clients such as the UK government and its bureaucracies, including the universities. This alliance between the public and private sector has become a threat to academic freedom in the UK, and a warning to the American academy about how its own freedoms can be threatened.
In the UK this system has been gathering strength for over twenty years, which helps explain why Oxford and Cambridge dons, and the British academy in general, have never taken a clear stand against it. Like much that is dysfunctional in contemporary Britain, the imposition of bureaucratic control on the academy goes back to the Thatcher era and its heroine. A memorable event in this melancholy history took place in Oxford on January 29, 1985, when the university’s Congregation, its governing parliament, denied Mrs. Thatcher an honorary Oxford degree by a vote of 738–319. It did so on the grounds that “Mrs. Thatcher’s Government has done deep and systematic damage to the whole public education system in Britain, from the provision for the youngest child up to the most advanced research programmes.”1
Mrs. Thatcher, however, disliked Oxford and the academy as much as they disliked her. She saw “state-funded intellectuals” as an interest group whose practices required scrutiny. She attacked the “cloister and common room” for denigrating the creators of wealth in Britain.2 But whereas the academy could pass motions against Mrs. Thatcher and deny her an honorary degree, she could deploy the power of the state against the academy, and she did. One of her first moves in that direction was to beef up an obscure government bureaucracy, the Audit Commission, to exercise tighter financial control over the universities.
From this bureaucratic acorn a proliferating structure of state control has sprung, extending its reach from the purely financial to include teaching and research, and shaping a generation of British academics who have known no other system. From the late 1980s onward the system has been fostered by Conservative and Labour governments alike, reflecting a consensus among the political parties that, to provide value for the taxpayer, the academy must deliver its research “output” with a speed and reliability resembling that of the corporate world, and must also deliver research that will somehow be useful to the British public and private sectors, strengthening the latter’s performance in the global marketplace. Governments in Britain can act this way because all British universities but one—the University of Buckingham—depend heavily on the state for their research funds, and so are in a poor position to insist on their right to determine their own research priorities.
Outside of the UK’s own business schools, not more than a handful of British academics know where the management systems that now so dominate their lives have come from, and how they have ended up in Oxford, Cambridge, London, Durham, and points beyond. The most influential of the systems began life at MIT and Harvard Business School in the late 1980s and early 1990s, moved east across the Atlantic by way of consulting firms such as McKinsey and Accenture, and reached British academic institutions during the 1990s and the 2000s through the UK government and its bureaucracies. Of all the management practices that have become central in US business schools and consulting firms in the past twenty years—among them are “Business Process Reengineering,” “Total Quality Management,” “Benchmarking,” and “Management by Objectives”—the one that has had the greatest impact on British academic life is among the most obscure, the “Balanced Scorecard” (BSC).
On the seventy-fifth anniversary of the Harvard Business Review in 1997, its editors judged the BSC to be among the most influential management concepts of the journal’s lifetime. The BSC is the joint brainchild of Robert Kaplan, an academic accountant at Harvard Business School, and the Boston consultant David Norton, with Kaplan the dominant partner. As befits Kaplan’s roots in accountancy, the methodologies of the Balanced Scorecard focus heavily on the setting up, targeting, and measurement of statistical Key Performance Indicators (KPIs). Kaplan and Norton’s central insight has been that with the IT revolution and the coming of networked computer systems, it is now possible to expand the number and variety of KPIs well beyond the traditional corporate concern with quarterly financial indicators such as gross revenues, net profits, and return on investment.
As explained by Kaplan and Norton in a series of articles that appeared in the Harvard Business Review between 1992 and 1996, KPIs of the Balanced Scorecard should concentrate on four fields of business activity: relations with customers, internal business process (for example, order entry and fulfillment), financial indicators such as profit and loss, and indicators of “innovation and learning.”3 It is this last that has yielded the blizzard of KPIs that has so blighted British academic life for the past twenty years. Writing in January 2010, the British biochemist John Allen of the University of London told of how “I have had to learn a new and strange vocabulary of ‘performance indicators,’ ‘metrics,’ ‘indicators of esteem,’ ‘units of assessment,’ ‘impact’ and ‘impact factors.’” One might also mention tallies of medals, honors, and awards bestowed (“indicators of esteem”); the value of research grants received; the number of graduate and postdoctoral students enrolled; and the volume and quality of “submitted units” of “research output.”4
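Kaplan and Norton’s four-perspective structure is, at bottom, a simple data model: a set of named perspectives, each holding a list of measurable KPIs with targets. The sketch below is purely illustrative; the perspective names follow Kaplan and Norton’s articles, but the sample KPIs and their values are hypothetical, not drawn from any real scorecard:

```python
# Illustrative sketch of a Balanced Scorecard as a data structure.
# The four perspective names follow Kaplan and Norton (1992); the
# sample KPIs and target values are hypothetical.

PERSPECTIVES = (
    "customer",
    "internal business process",
    "financial",
    "innovation and learning",
)

scorecard = {
    "customer": {"on-time delivery rate": 0.95},
    "internal business process": {"order fulfillment days": 3},
    "financial": {"return on investment": 0.12},
    "innovation and learning": {"staff training hours per year": 40},
}

def count_kpis(card):
    """Total number of KPIs tracked across all perspectives."""
    return sum(len(kpis) for kpis in card.values())

# Every perspective must be present before the scorecard is "balanced".
assert set(scorecard) == set(PERSPECTIVES)
print(count_kpis(scorecard))  # prints 4: one sample KPI per perspective
```

The point of the model, as the text notes, is that networked IT makes it cheap to multiply the entries under each perspective; in the academic case it is the “innovation and learning” category that has swollen into the blizzard of indicators Allen describes.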
An especially dysfunctional aspect of the British system, on display throughout its twenty-year existence, is that the particular KPIs the British universities must strive to satisfy have varied at the whim of successive UK governments. John Allen’s reference to “impact factors” points to the final lurch in the Labour government’s thinking before it lost the recent election. The Brown government particularly wished to promote research that would have an effect beyond the academy, above all in business. In the words of David Lammy, Gordon Brown’s minister of higher education:
Since these impacts are things that happen outside the academic realm…[we] propose that the panels assessing [research] impact will include a large proportion of the end-users of research—businesses, public services, policymakers and so on—rather than just academics commenting on each other’s work.5
Since the only major segment of the British economy that is both world-class and an intensive user of university research is the pharmaceutical industry, any UK government invitation to business “end-users” to take a more prominent part in the evaluation of academic research amounts to an invitation to the pharmaceutical industry to tighten its hold over scientific research in the UK.
This is an alarming prospect given the industry’s long record of abusing the integrity of research in the interests of the bottom line, well documented by Marcia Angell in these pages. The leading British pharmaceutical multinational, GlaxoSmithKline, for example, features prominently in Angell’s research for its clandestine and improper payments to an academic psychiatrist in return for promoting the company’s drugs. For suppressing unfavorable research on its top-selling drug Paxil—to cite only one example—it agreed to settle charges of consumer fraud for a fine of $2.5 million.6
The new Conservative–Liberal coalition government that won the May election has endorsed the bureaucratic control of higher education by the central government, as did the Conservative Thatcher and Major governments in the 1980s and 1990s. It is not yet clear whether the new government will adopt Brown’s “impact” KPIs, or come up with some new indicators of its own.
Whatever it does, this academic control regime with its KPIs will continue to apply as much to philosophy, ancient Greek, and Chinese history as it does to physics, chemistry, and academic medicine. The central government, usually the UK Treasury, decides the broad outlines of policy—the amount of money to be distributed to universities for research and the definition of “research excellence” that determines this allocation. The government has also set up a special state bureaucracy, situated between itself and the universities, that handles the detailed administration of the system. This bureaucracy, which continues under the new coalition, goes by the unappealing acronym HEFCE, or the Higher Education Funding Council for England.7
The intervention of the state in the management of academic research has created a bureaucracy of command and control that links the UK Treasury, at the top, all the way down to the scholars at the base—researchers working away in libraries, archives, and laboratories. In between are the bureaucracies of HEFCE, of the central university administrations, and of the divisions and departments of the universities themselves. The HEFCE control system has two pillars. The first is the “Research Assessment Exercise” (RAE), the academic review process that takes place every six or seven years when HEFCE passes judgment on the quality of the academic output of the UK universities during the previous planning period—and therefore on the funds eventually allotted to them. According to HEFCE’s rulebook for the RAE, the university departments must collect books, monographs, and articles in learned journals written by the department’s scholars.
For the assessment, four items of research output must be submitted to the RAE by every British academic selected by his or her university department. With 52,409 academics entered for the most recent RAE of 2008, over 200,000 items of scholarship reached HEFCE. For the previous RAE of 2001, this avalanche of academic work was so large it had to be stored in unused aircraft hangars located near HEFCE’s headquarters in Bristol.8 The items are then examined by the academics on panels set up by HEFCE to cover every discipline from dentistry to medieval history—sixty-seven in the 2008 RAE. Each panel is usually made up of between ten and twenty specialists, selected by members of their respective disciplines though subject at all times to HEFCE’s rules for the RAE. The panels must award each submitted work one of four grades, ranging from 4*, the top grade, for work whose “quality is world leading in terms of originality, significance and rigour,” to the humble 1*, “recognized nationally in terms of originality, significance, and rigour.”9
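The arithmetic behind the panels’ workload can be made explicit. The figures below all come from the text (52,409 academics, four items each, sixty-seven panels); the per-panel figure is only a rough average, since submissions were not spread evenly across disciplines:

```python
# Back-of-envelope arithmetic for the 2008 RAE, using figures from the text.
academics = 52_409       # academics entered for the 2008 RAE
items_per_academic = 4   # items of research output each must submit
panels = 67              # discipline panels, dentistry to medieval history

total_items = academics * items_per_academic
print(total_items)  # prints 209636, i.e. "over 200,000" items

# Rough average per panel; actual loads varied widely by discipline.
print(round(total_items / panels))  # prints 3129
```

A panel of ten to twenty specialists facing some three thousand items of scholarship gives a concrete sense of the time pressure discussed below.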
The anthropologist John Davis, former warden of All Souls College, Oxford, has written of exercises such as the RAE that their “rituals are shallow because they do not penetrate to the core.”10 I have yet to meet anyone who seriously believes that the RAE panels—underpaid, under pressure of time, and needing to sift through thousands of scholarly works—can possibly do justice to the tiny minority of work that really is “world leading in terms of originality, significance and rigour.” But to expect the panels to do this is to miss the point of the RAE. Its roots are in the corporate, not the academic, world. It is really a “quality control” exercise imposed on academics by politicians; and the RAE grades are simply the raw material for Key Performance Indicators, which politicians and bureaucrats can then manipulate in order to show that academics are (or are not) providing value for taxpayers’ money. The grades are at best measures of competence, not of excellence.
2 Brian Harrison, “Mrs. Thatcher and the Intellectuals,” Twentieth Century British History, Vol. 5, No. 2 (1994), pp. 206–245, at pp. 224, 234, 237.
3 See particularly Kaplan and Norton, “The Balanced Scorecard: Measures that Drive Performance,” Harvard Business Review, January–February 1992, and “Putting the Balanced Scorecard to Work,” Harvard Business Review, September–October 1993.
5 See Phil Baty, “Lammy Demands ‘Further and Faster’ Progress Towards Economic Impact,” Times Higher Education Supplement, September 10, 2009.
6 See Marcia Angell, “Drug Companies & Doctors: A Story of Corruption,” The New York Review, January 15, 2009, and Marcia Angell, The Truth About the Drug Companies: How They Deceive Us and What to Do About It (Random House, 2005).
7 Scotland, Wales, and Northern Ireland have their own mini-HEFCEs. The government-ordered “Independent Review of Higher Education and Student Finance,” chaired by the former CEO of BP, John Browne, recommended in October 2010 that HEFCE be amalgamated into a “Higher Education Council,” or super HEFCE, comprising all four bureaucracies responsible for higher education in the UK. The Browne Committee recommended no change in the HEFCE/RAE control regime described here, for the simple reason that the committee is dominated by the kind of academic bureaucrats and corporate efficiency experts who have either been building the HEFCE system over the past twenty years or are steeped in the management theories that have produced it.
8 Political Quarterly, Vol. 74, No. 4 (October 2003).
9 For definitions of all four grades for the 2008 RAE see http://www.rae.ac.uk/aboutus/quality.asp. HEFCE has renamed the RAE scheduled for 2013 the “Research Excellence Framework,” or REF, but I see no reason to go along with HEFCE’s recourse to Orwellian newspeak and will continue here to refer to the procedure as the RAE, as it has been known throughout its twenty-year history.
10 See John Davis, “Administering Creativity,” Anthropology Today, Vol. 15, No. 2 (April 1999).