Deryck Schreuder discusses rankings, the work they do and the importance of comparing apples with apples.
It was late winter in Toronto as representatives arrived from universities all over the world for the 2005 INQAAHE* Annual Conference. We soon found ourselves slipping around like unsteady penguins on icy Canadian sidewalks as we attempted the daily walk to the conference hall for global deliberations on ‘Quality Assurance’.
A huge program of papers and panels (and power-points) awaited our attention as we grappled with the consequences of UNESCO’s ‘academic revolution’ of expanding mass higher education systems from the 1980s.
In a welter of theory and didactic exhortations, a memorable session was offered by a Dutch colleague which included the pithy comment that ‘Rankings are becoming the new Quality Assurance…’
This almost throw-away remark caught our immediate attention. There was a ripple of nervous laughter.
But, who would now laugh?
The pre-eminent public measure
The global ‘Rankings’ in higher education have become the pre-eminent public measure of university performance and standing, while university administrations develop elaborate strategic plans to rise on the KPI staircase which leads to the nirvana of a top 100 global ranking.
There are certainly complaints about how these rankings are conducted, and there is increasing talk about the ‘unreality’ of rankings as a true measure of institutional quality. But in the end, universities themselves tend to be complicit in the rankings enterprise. Vice-chancellors/Presidents criticise global rankings when disappointed with their ranked number; but they are then effusive in praise (and mercenary in usage) when their own campus is highly ranked (or even shows some improvement on the global ladder). Governing bodies, national and local governments, let alone the media (which likes pre-digested reports with simple tables), all focus comfortably on rankings as an empirical guide to how their own higher education sector is really ‘performing globally’.
Rankings are, of course, now the subject of scholarly analysis and critique – which has led some of the ranking agencies to evolve ever more sophisticated methodologies in the collection of their data. But, in the end, deeper issues remain beyond the metrics and the measures.
Rankings do not make sense in a plural system
Rankings are, above all, about the ‘Big End of Town’ – the corporatised institutional giants of the higher education ‘industry’. And the ranking bodies work not unlike the ‘ratings agencies’ of global economies and international corporations. With the commodification of education, they are concerned with stakeholder outputs and risk. Accordingly, for the established universities of the world, and the developing research-intensive institutions, rankings make a certain sense. To start with, most are heavily reliant on peer-reviewed publication data in top academic journals, not least in the Natural Sciences. They also favour Nobel Prizes; Gold Medals from Scholarly Academies; and International Awards. Rankings thrive on notions of esteem.
But rankings make much less sense in capturing the pluralistic nature of higher education institutions. There is simply no one paradigm of excellence which can possibly allow for constructing a single ordering of rankings for the some 20,000 higher education institutions which globally proclaim ‘university’ status. And if the best result of being ‘ranked’ is to be a comparative guide to institutional performance, then a much more nuanced (and useful) outcome could be achieved through targeted peer review; or indeed through an international ‘bench-marking’ exercise with appropriately similar third-tier institutions.
My own experience in higher education – 8 universities on 4 continents over 50 years, ranging from ‘research intensives’ to ‘big urban regionals’, ‘liberal arts’ colleges and equity providers – tells me there are many different mission environments for achieving educational excellence. The ‘Idea of the University’ is in fact many ‘Ideas’.
There is also the social reality that rankings are an unreliable proxy for the student experience and job prospects in the market. Recent Australian research found that graduates from a range of ‘unranked’ universities are just as employable; that ‘student satisfaction’ is broadly highest among some of the smaller, newer institutions; and indeed that highest salaries are not automatically associated with the globally ranked institutions.
Alternate modes of profiling university performance
‘A Different Kind of College Ranking’ has, for example, been proposed by the Washington Monthly (October 2015) for 4-year degree institutions. Their criteria would begin with measures of upward mobility – enrolling students of modest means but intellectual promise; preparing undergraduates for advanced study; and the creation of new technologies that will advance human knowledge and economic benefit. A ‘service connection’ within their own community would also be weighted.
With the current explosion of e-learning – often involving mature-age learners – new criteria are also surely now required to establish comparative graduate outcomes. MOOCs are on the move involving potentially millions of students; but equally important are the growing networks of university consortia offering multiple programs through a variety of modes of learning.
A globalised world of pluralist providers is now an educational reality. If rankings are to reflect higher education in change (and respond to the reality of aspirational social classes), they will need to work from this pluralist educational landscape. All will be called, and all can well be chosen – provided they are comparatively assessed within a sample of like institutions.
*International Network for Quality Assurance Agencies in Higher Education