U-Multirank: lacking data, defying uniformity

After six years and a EUR 2 million investment, the long-awaited EU-funded ranking tool U-Multirank was finally launched on 13 May in Brussels by Androulla Vassiliou, European commissioner for education, culture, multilingualism and youth. This “exciting new development in higher education”, as she called it, aims to help students make informed decisions about where to study, to meet different needs, and to look beyond research-oriented universities to see more of what higher education offers.

Conceived as a multidimensional tool, U-Multirank does not treat research as the sole criterion of a university’s quality, but looks at four additional parameters: quality of teaching and learning, international orientation, success in knowledge transfer (partnerships with business and start-ups), and regional involvement. Each criterion is scored from A (very good) to E (weak). With this in mind, it comes as no surprise that more than 300 universities included in U-Multirank have never before appeared in any global ranking.

The overall statistics are as follows: more than 850 participating higher education institutions, more than 1 000 faculties, 5 000 study programmes in 70 countries, and 60 000 respondents to a survey targeting students at the institutions involved in the ranking. Universities were invited to provide data for the ranking tool’s database, and some 500 did so. The remaining data on universities were collected from existing online sources and from the Leiden list.

Apart from the multidimensional approach, the tool is envisaged as user-driven: stakeholders are continually involved in developing and improving the system, and users can create a personalised ranking based on their own needs and interests. The downside of the user-driven approach, however, is the profusion of dots, numbers and graphs that can be difficult to interpret. The system’s complexity takes getting used to, and it takes effort, or patience, to keep clicking until the tool’s many possibilities open up.

Although U-Multirank potentially offers a comprehensive description of higher education institutions, its first edition lacks depth. This is largely because universities did not provide enough data, apparently because many of them fail to keep records of relevant developments and thus lack the data themselves. On top of that, the visibility of the student survey was apparently too low, which limited the number of respondents. Given that not all respondents answered all the questions in the survey, the representativeness of the results can be questioned. One suggestion is that sample sizes be presented in the scoreboard to ensure transparency and indicate the degree of representativeness.

Thus, data collection faces no shortage of challenges, and a large part of the responsibility for the missing data can be attributed to universities. It is, nevertheless, a sign that better collaboration is needed on all sides if the tool is to offer diversity, transparency and a global perspective. On the other hand, if one of U-Multirank’s core missions was to challenge the idea of absolute rankings and a one-dimensional perspective, it seems to be quite successful. The multitude of possible ranking scores, depending on the choice of criteria, does point to the diversity and complexity of quality in higher education.

After all, this is just the beginning, and the goal is to gradually add more institutions and more data to the system. Data collection for next year will start in autumn. The 2014 ranking covers four academic fields: business studies, electrical and mechanical engineering, and physics; three more disciplines will be added in 2015: psychology, computer science and medicine.