U-Multirank’s Response on the UK HE International Unit Policy Note

 

The UK HE International Unit published a policy note on May 15th, 2013, titled “Update on U-Multirank”. The IU lists a number of concerns discussed in the UK and comes to the conclusion: “Because of the concerns expressed above, the UK HE International Unit holds that U-Multirank may harm rather than benefit the sector. It seems likely that a number of ‘leading’ universities will not take part”.

With this short paper, the U-Multirank team wants to provide further information and clarification, responding to all the concerns mentioned. As we think these concerns lack validity, we strongly reject the conclusion.

 

Concern 1: "Public funding lends legitimacy to U-Multirank and performance as judged by the tool could become the basis for future funding decisions.”

First of all, public funding (by the European Commission) is provided only during the start-up period, to initiate an innovative new ranking approach; it will last for up to four years and is granted on the clear condition that U-Multirank evolves into an independent, self-funding initiative. The legitimacy of the whole system can only come from independence, not from public funding.

The European Commission has already made clear that it will not use the results of the ranking for any funding decisions, and also that it opposes the use of other higher education rankings for such purposes. The multi-dimensional character of U-Multirank makes it extremely difficult in any case to base funding decisions on it, unlike the existing global rankings, which, due to their simplicity, trigger a “reputation race”, frequently leading to costly national and institutional funding decisions aimed at gaining a few places in the rankings and so moving up the scale of reputation. U-Multirank, on the other hand, will lead to a situation where differentiated ranking data are used for strength-weakness analyses, instead of simplistic links between performance and funding.

Concern 2: “The league table market place is already crowded, with each ranking deploying its own methodologies.”

This observation is true but is not a valid objection to U-Multirank. The league table market is crowded, but U-Multirank will not produce league tables. The current international rankings share an orthodox ranking approach (composite indicators, league tables, research focus) which is challenged by U-Multirank (and a few other national multi-dimensional rankings). This innovative approach will be more than “just one more ranking”, in particular with regard to the benefits it offers universities for comparison and benchmarking, and mobile students in supporting their choices. In its 2013 ranking report, the European University Association cited U-Multirank as being “substantially different from existing global rankings”, and we could not agree more.

Concern 3: “The tool partially relies on self-reported data which lacks verifiability.”

U-Multirank will use various data sources in order to provide a multi-perspective ranking. These include internationally available bibliometric and patent databases, student survey data, and data provided by universities. With the exception of the Shanghai Ranking, all international rankings use self-reported data. In contrast to other global rankings, U-Multirank will not use highly problematic reputation data, which largely reflects the success of universities’ international branding strategies.

U-Multirank includes a number of “fact indicators” that will be based on self-reported data. These indicators were rated as highly relevant by stakeholders, but there are no internationally comprehensive databases covering these issues. We are clearly aware of the need for thorough verification procedures and have developed a number of statistical procedures and cross-checks to assure the quality of the data.

U-Multirank strongly welcomes initiatives and databases that fill the gaps in publicly available data (e.g. EUMIDA at the European level); yet at the moment the only alternative to self-reported data is to limit the scope of indicators substantially, producing a result which is unrepresentative of the broad mission of higher education institutions as a whole and unlikely to be of any use outside the elite research-intensive universities.

Concern 4: “The tool risks being incapable of responding to rapidly changing circumstances in institutional profiles.”

First of all, unlike all other forms of ranking, the tool is able to make institutional profiles transparent; this is one of the major purposes of U-Multirank. The tool will show the diversity of profiles of higher education institutions, will guarantee a comparison of like with like, and will relate profiles to indicators which are appropriate for measuring them.

In all empirical systems there is, of course, a time lag between a change taking place and its being measured. But we should not conclude that because performance might change we should not measure performance. Nor should we think that because performance could change it would be better to rely on academic reputation, which is more stable: only real performance is relevant for the future of universities, not historical reputation reproduced in existing league tables. It is better to reveal profiles with some time lag than to hide them behind a biased and non-transparent notion of “world-class excellence”.

U-Multirank will make a change in regional orientation transparent, and even with a time lag this is better than the traditional rankings, which do not measure this at all. Multi-dimensionality makes rankings more sensitive to changes. And in the case of a very substantial turnaround in a university’s profile (which does not happen every day), the university can still leave the U-Multirank system for a short period and re-enter it once its new structures have been established.

Concerns 5 and 6: “The multi-dimensional aspect of the tool means that incommensurate variables will be combined to create a league table, undermining efforts to go beyond rankings.”
“U-Multirank risks becoming a blunt instrument that would not allow different strengths across an institution to be recognized.”

These concerns rest on an unfortunate misunderstanding.

U-Multirank will not create league tables. This is one of the basic methodological principles which was already outlined in the feasibility study and report as well as in our bid for the implementation project.

However, the IU’s comment would be better directed against traditional rankings, which do indeed mix very different aspects of performance into a single composite overall score. U-Multirank will deliberately avoid this by presenting multi-dimensional profiles in which a university can perform well on some indicators and less well on others.

Our approach is designed precisely to show “different strengths across an institution”, in two respects: on the one hand, varying levels of performance in different areas, both for the university as a whole and within particular fields; on the other hand, comparisons between the fields of a university.

Additionally, the presentation of U-Multirank in the web tool will guide the user. For instance, a dedicated access point for students will present teaching-related indicators first. In a set of “pre-defined rankings” we will present rankings for specific institutional profiles, selecting the indicators closest to each profile; for instance, there will be a pre-defined ranking of research-intensive universities using research-oriented (mainly bibliometric) indicators.

Concern 7: “EU funds could be better spent on other EU priorities.”

One could always argue about EU priorities and policy funding priorities in general; beneficiaries as well as critics of agricultural subsidies do just that. In contrast to many other policy fields, transparency is still very limited in higher education: while we know the place of birth and the biography of every cow raised in the European Union, we hardly know even the simple number of higher education institutions. Improving transparency about the HE sector can therefore provide considerable added value, particularly in a field regarded as a major source of innovation in the European Union’s agenda; from our point of view, an assessment of the cost-benefit ratio will lead to a positive verdict on U-Multirank (to be demonstrated after implementation). The EU spends over €500m every year on direct support for higher education, not to mention its support for research in universities or the billions spent through the European Structural Funds. The funding of U-Multirank represents 0.02% of the funding spent on higher education through the Lifelong Learning Programme, a programme which in the current funding period constitutes less than 2% of the total EU budget.

Concern 8: “LERU withdrew its support for the project in January 2013.”

This is not true: LERU could not withdraw in January 2013, as it was not part of the project at that time. LERU had already ended its cooperation in an early phase of the feasibility study, in 2010. It took no part in the later stages of the feasibility study and has not been involved in the recent refinement of methodologies and indicators, web tool development, etc. U-Multirank wishes to take up the dialogue with LERU again and to engage jointly in the further development of the system. Three alliances of higher education institutions are formal project partners of U-Multirank: CESAER, IRUN and UASnet. In addition, stakeholder organizations such as Business Europe and the European Students Union have agreed formal partnerships and support the methodological development of the system.

Concern 9: “A number of ‘leading’ universities will not take part.”

As we are heading for 500 participants in the first round (and have already passed 400), covering a variety of institutional profiles (including, for instance, universities of applied sciences), it was clear from the beginning that we would neither reach all top research universities nor address top research universities only (if that is what is meant by ‘leading’) with a view to obtaining their full set of data. Nevertheless, research-intensive universities are well represented (or even over-represented) in our current sample: at the moment around 15% of the U-Multirank participants are among the top 3% of research-intensive universities in the world according to the Leiden Ranking. Alongside the interactive, user-driven part of U-Multirank we will also include a series of pre-defined rankings, in which we pre-select universities with a specific profile and focus on a subset of performance indicators directly relevant to that profile. One of these will be the ranking of research-intensive universities, based on around 10 research-related, mainly bibliometric indicators. The vast majority of these indicators are taken from publicly available databases, i.e. they are available for all universities. In this special ranking all top research institutions worldwide will be included, even if they do not deliver additional institutional data. The ranking will still be made on a multi-dimensional and more differentiated basis than existing global rankings, so even this element of the system alone represents substantial progress compared with existing league tables.

 

This project is funded by the European Commission. This publication reflects the views only of the authors, and the European Commission cannot be held responsible for any use which may be made of the information contained therein.